Dataset columns:
- modelId: string (length 5 to 139)
- author: string (length 2 to 42)
- last_modified: timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-08-29 18:27:06)
- downloads: int64 (0 to 223M)
- likes: int64 (0 to 11.7k)
- library_name: string (526 classes)
- tags: list (length 1 to 4.05k)
- pipeline_tag: string (55 classes)
- createdAt: timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-08-29 18:26:56)
- card: string (length 11 to 1.01M)
Rustem/roberta-base-trained-50k-docs
Rustem
2022-03-16T12:38:46Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-16T12:33:53Z
--- license: apache-2.0 ---
RobertoMCA97/xlm-roberta-base-finetuned-panx-de-fr
RobertoMCA97
2022-03-16T12:24:41Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-16T12:03:40Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1667 - F1: 0.8582 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2885 | 1.0 | 715 | 0.1817 | 0.8287 | | 0.1497 | 2.0 | 1430 | 0.1618 | 0.8442 | | 0.0944 | 3.0 | 2145 | 0.1667 | 0.8582 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
RobertoMCA97/xlm-roberta-base-finetuned-panx-de
RobertoMCA97
2022-03-16T11:55:06Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-15T11:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8590909090909091 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1380 - F1: 0.8591 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2642 | 1.0 | 525 | 0.1624 | 0.8251 | | 0.1315 | 2.0 | 1050 | 0.1445 | 0.8508 | | 0.0832 | 3.0 | 1575 | 0.1380 | 0.8591 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
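The card above lists metrics but no usage snippet. A minimal inference sketch for this token-classification checkpoint could look like the following (the German example sentence and the `aggregation_strategy` setting are illustrative choices, not from the card):

```python
from transformers import pipeline

# Load the fine-tuned NER checkpoint; "simple" aggregation merges
# word-piece predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model="RobertoMCA97/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)

for entity in ner("Jeff Dean arbeitet bei Google in Kalifornien."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```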
anton-l/xtreme_s_xlsr_minds14_upd
anton-l
2022-03-16T11:52:27Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "minds14", "google/xtreme_s", "generated_from_trainer", "dataset:xtreme_s", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-16T11:48:51Z
--- license: apache-2.0 tags: - minds14 - google/xtreme_s - generated_from_trainer datasets: - xtreme_s metrics: - f1 - accuracy model-index: - name: xtreme_s_xlsr_minds14_upd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xtreme_s_xlsr_minds14_upd This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14.FR-FR dataset. It achieves the following results on the evaluation set: - Loss: 2.6303 - F1: 0.0223 - Accuracy: 0.0833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 64 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4.dev0 - Tokenizers 0.11.6
ixa-ehu/roberta-eus-mc4-base-cased
ixa-ehu
2022-03-16T11:49:27Z
5
1
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "basque", "eu", "arxiv:2203.08111", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-16T09:56:03Z
--- language: eu license: cc-by-nc-4.0 tags: - basque - roberta --- # Roberta-eus mc4 base cased This is a RoBERTa model for Basque presented in [Does corpus quality really matter for low-resource languages?](https://arxiv.org/abs/2203.08111). There are several models for Basque using the RoBERTa architecture, trained on different corpora: - roberta-eus-euscrawl-base-cased: Basque RoBERTa model trained on EusCrawl, a corpus created using tailored crawling from Basque sites. EusCrawl contains 12,528k documents and 423M tokens. - roberta-eus-euscrawl-large-cased: RoBERTa large trained on EusCrawl. - roberta-eus-mC4-base-cased: Basque RoBERTa model trained on the Basque portion of the mC4 dataset. - roberta-eus-CC100-base-cased: Basque RoBERTa model trained on the Basque portion of the CC100 dataset. The models have been tested on five different downstream tasks for Basque: Topic classification, Sentiment analysis, Stance detection, Named Entity Recognition (NER), and Question Answering (refer to the [paper](https://arxiv.org/abs/2203.08111) for more details). See a summary of the results below: | Model | Topic class. | Sentiment | Stance det. | NER | QA | Average | |----------------------------------|--------------|-----------|-------------|----------|----------|----------| | roberta-eus-euscrawl-base-cased | 76.2 | 77.7 | 57.4 | 86.8 | 34.6 | 66.5 | | roberta-eus-euscrawl-large-cased | **77.6** | 78.8 | 62.9 | **87.2** | **38.3** | **69.0** | | roberta-eus-mC4-base-cased | 75.3 | **80.4** | 59.1 | 86.0 | 35.2 | 67.2 | | roberta-eus-CC100-base-cased | 76.2 | 78.8 | **63.4** | 85.2 | 35.8 | 67.9 | If you use any of these models, please cite the following paper: ``` @misc{artetxe2022euscrawl, title={Does corpus quality really matter for low-resource languages?}, author={Mikel Artetxe and Itziar Aldabe and Rodrigo Agerri and Olatz Perez-de-Viñaspre and Aitor Soroa}, year={2022}, eprint={2203.08111}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
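The card reports downstream results but no usage snippet. A minimal fill-mask sketch for this checkpoint might look like this (the Basque example sentence is an invented illustration, and the RoBERTa `<mask>` token is read from the tokenizer rather than hard-coded):

```python
from transformers import pipeline

# Fill-mask pipeline for the Basque RoBERTa model trained on mC4.
fill = pipeline("fill-mask", model="ixa-ehu/roberta-eus-mc4-base-cased")

# "Euskara <mask> hizkuntza da." -- "Basque is a <mask> language."
for pred in fill(f"Euskara {fill.tokenizer.mask_token} hizkuntza da."):
    print(pred["token_str"], round(pred["score"], 3))
```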
ixa-ehu/roberta-eus-euscrawl-base-cased
ixa-ehu
2022-03-16T11:48:42Z
14
2
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "basque", "eu", "arxiv:2203.08111", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-16T09:54:43Z
--- language: eu license: cc-by-nc-4.0 tags: - basque - roberta --- # Roberta-eus Euscrawl base cased This is a RoBERTa model for Basque presented in [Does corpus quality really matter for low-resource languages?](https://arxiv.org/abs/2203.08111). There are several models for Basque using the RoBERTa architecture, which are pre-trained using different corpora: - roberta-eus-euscrawl-base-cased: Basque RoBERTa trained on EusCrawl, a corpus created using tailored crawling from Basque sites. EusCrawl contains 12,528k documents and 423M tokens. - roberta-eus-euscrawl-large-cased: Basque RoBERTa large trained on EusCrawl. - roberta-eus-mC4-base-cased: Basque RoBERTa trained on the Basque portion of the mC4 dataset. - roberta-eus-CC100-base-cased: Basque RoBERTa trained on the Basque portion of the CC100 dataset. The models have been tested on five different downstream tasks for Basque: Topic classification, Sentiment analysis, Stance detection, Named Entity Recognition (NER), and Question Answering (refer to the [paper](https://arxiv.org/abs/2203.08111) for more details). See a summary of the results below: | Model | Topic class. | Sentiment | Stance det. | NER | QA | Average | |----------------------------------|--------------|-----------|-------------|----------|----------|----------| | roberta-eus-euscrawl-base-cased | 76.2 | 77.7 | 57.4 | 86.8 | 34.6 | 66.5 | | roberta-eus-euscrawl-large-cased | **77.6** | 78.8 | 62.9 | **87.2** | **38.3** | **69.0** | | roberta-eus-mC4-base-cased | 75.3 | **80.4** | 59.1 | 86.0 | 35.2 | 67.2 | | roberta-eus-CC100-base-cased | 76.2 | 78.8 | **63.4** | 85.2 | 35.8 | 67.9 | If you use any of these models, please cite the following paper: ``` @misc{artetxe2022euscrawl, title={Does corpus quality really matter for low-resource languages?}, author={Mikel Artetxe and Itziar Aldabe and Rodrigo Agerri and Olatz Perez-de-Viñaspre and Aitor Soroa}, year={2022}, eprint={2203.08111}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
tae898/emoberta-base
tae898
2022-03-16T11:01:29Z
124
5
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "emoberta", "en", "dataset:MELD", "dataset:IEMOCAP", "arxiv:2108.12009", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-14T20:03:08Z
--- language: en tags: - emoberta - roberta license: mit datasets: - MELD - IEMOCAP --- Check https://github.com/tae898/erc for the details [Watch a demo video!](https://youtu.be/qbr7fNd6J28) # Emotion Recognition in Conversation (ERC) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/emoberta-speaker-aware-emotion-recognition-in/emotion-recognition-in-conversation-on)](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on?p=emoberta-speaker-aware-emotion-recognition-in) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/emoberta-speaker-aware-emotion-recognition-in/emotion-recognition-in-conversation-on-meld)](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-meld?p=emoberta-speaker-aware-emotion-recognition-in) At the moment, we only use the text modality to classify the emotion of the utterances. The experiments were carried out on two datasets (i.e., MELD and IEMOCAP). ## Prerequisites 1. An x86-64 Unix or Unix-like machine 1. Python 3.8 or higher 1. Running in a virtual environment (e.g., conda, virtualenv, etc.) is highly recommended so that you don't interfere with the system Python. 1. [`multimodal-datasets` repo](https://github.com/tae898/multimodal-datasets) (submodule) 1. pip install -r requirements.txt ## EmoBERTa training First configure the hyperparameters and the dataset in `train-erc-text.yaml`, then run the command below in this directory (ideally in a virtualenv): ```sh python train-erc-text.py ``` This will subsequently call `train-erc-text-hp.py` and `train-erc-text-full.py`. 
## Results on the test split (weighted f1 scores) | Model | | MELD | IEMOCAP | | -------- | ------------------------------- | :-------: | :-------: | | EmoBERTa | No past and future utterances | 63.46 | 56.09 | | | Only past utterances | 64.55 | **68.57** | | | Only future utterances | 64.23 | 66.56 | | | Both past and future utterances | **65.61** | 67.42 | | | → *without speaker names* | 65.07 | 64.02 | The numbers above are mean values over five random-seed runs. For more training and test details, check out `./results/`. The trained checkpoints can be downloaded [here](https://surfdrive.surf.nl/files/index.php/s/khREwk4MUI7MSnO/download); it's a pretty big zip file. ## Deployment ### Huggingface We have released our models on huggingface: - [emoberta-base](https://huggingface.co/tae898/emoberta-base) - [emoberta-large](https://huggingface.co/tae898/emoberta-large) They are based on [RoBERTa-base](https://huggingface.co/roberta-base) and [RoBERTa-large](https://huggingface.co/roberta-large), respectively. They were trained on [both MELD and IEMOCAP datasets](utterance-ordered-MELD_IEMOCAP.json). Our deployed models are neither speaker-aware nor do they take previous utterances into account, meaning that they classify one utterance at a time without speaker information (e.g., "I love you"). ### Flask app You can run the Flask RESTful server app either as a docker container or as a python script. 1. Running the app as a docker container **(recommended)**. There are four images. Take what you need: - `docker run -it --rm -p 10006:10006 tae898/emoberta-base` - `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-base-cuda` - `docker run -it --rm -p 10006:10006 tae898/emoberta-large` - `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-large-cuda` 1. Running the app in your python environment: This method is less recommended than the docker one. 
Run `pip install -r requirements-deploy.txt` first.<br> [`app.py`](app.py) is a Flask RESTful server. The usage is below: ```console app.py [-h] [--host HOST] [--port PORT] [--device DEVICE] [--model-type MODEL_TYPE] ``` For example: ```sh python app.py --host 0.0.0.0 --port 10006 --device cpu --model-type emoberta-base ``` ### Client Once the app is running, you can send a text to the server. First install the necessary packages: `pip install -r requirements-client.txt`, and then run [client.py](client.py). The usage is as below: ```console client.py [-h] [--url-emoberta URL_EMOBERTA] --text TEXT ``` For example: ```sh python client.py --text "Emotion recognition is so cool\!" ``` will give you: ```json { "neutral": 0.0049800905, "joy": 0.96399665, "surprise": 0.018937444, "anger": 0.0071516023, "sadness": 0.002021492, "disgust": 0.001495996, "fear": 0.0014167271 } ``` ## Troubleshooting The best way to find and solve your problems is to check the GitHub issues tab. If you can't find what you want, feel free to raise an issue. We are pretty responsive. ## Contributing Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**. 1. Fork the Project 1. Create your Feature Branch (`git checkout -b feature/AmazingFeature`) 1. Run `make style && quality` in the root repo directory, to ensure code quality. 1. Commit your Changes (`git commit -m 'Add some AmazingFeature'`) 1. Push to the Branch (`git push origin feature/AmazingFeature`) 1. Open a Pull Request ## Cite our work Check out the [paper](https://arxiv.org/abs/2108.12009). 
```bibtex @misc{kim2021emoberta, title={EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa}, author={Taewoon Kim and Piek Vossen}, year={2021}, eprint={2108.12009}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` [![DOI](https://zenodo.org/badge/328375452.svg)](https://zenodo.org/badge/latestdoi/328375452)<br> ## Authors - [Taewoon Kim](https://taewoonkim.com/) ## License [MIT](https://choosealicense.com/licenses/mit/)
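As a lightweight alternative to the Flask/Docker deployment described in the card, the published checkpoint can be queried directly with the transformers pipeline. This is only a sketch: the example utterance is invented, and `top_k=None` (which makes the pipeline return scores for all emotion labels) assumes a reasonably recent transformers version.

```python
from transformers import pipeline

# Direct use of the published emoberta-base checkpoint; top_k=None
# returns a score for every emotion label instead of just the top one.
classifier = pipeline(
    "text-classification",
    model="tae898/emoberta-base",
    top_k=None,
)

scores = classifier("Emotion recognition is so cool!")[0]
for pred in sorted(scores, key=lambda p: p["score"], reverse=True):
    print(pred["label"], round(pred["score"], 4))
```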
fabianrausch/german-financial-statements-bert
fabianrausch
2022-03-16T09:58:56Z
173
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-05T14:26:11Z
--- license: mit language: de --- # german-financial-statements-bert This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) using German financial statements. It achieves the following results on the evaluation set: - Loss: 1.2025 - Accuracy: 0.7376 - Perplexity: 3.3285 ## Model description Annual financial statements in Germany are published in the Federal Gazette and are freely accessible. The documents describe the entrepreneurial and in particular the financial situation of a company with reference to a reporting period. The german-financial-statements-bert model aims to provide a BERT model specifically for this domain. ## Training and evaluation data The training was performed with 100,000 natural language sentences from annual financial statements. 50,000 of these sentences were taken unfiltered and randomly from 5,500 different financial statement documents, and another 50,000 were also taken randomly from 5,500 different financial statement documents, but this half was filtered so that only sentences referring to a financial entity were selected. Specifically, this means that the second half of the sentences contains an indicator for a reference to a financial entity (EUR, Euro, TEUR, โ‚ฌ, Tโ‚ฌ). The evaluation was carried out with 20,000 sentences of the same origin and distribution. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
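The card describes the training data but gives no usage snippet. A fill-mask sketch could look like this; the German sentence is an invented example of the financial-statement language described above (note the EUR/Euro/TEUR indicators the training filter targeted), and `[MASK]` is the standard mask token of German BERT tokenizers:

```python
from transformers import pipeline

# Fill-mask query against the financial-statements BERT; the model should
# favor currency-related completions for sentences like this one.
fill = pipeline("fill-mask", model="fabianrausch/german-financial-statements-bert")

for pred in fill("Der Umsatz stieg auf 10 Millionen [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```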
navteca/nli-deberta-v3-xsmall
navteca
2022-03-16T09:49:34Z
18
1
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "microsoft/deberta-v3-xsmall", "zero-shot-classification", "en", "dataset:multi_nli", "dataset:snli", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
2022-03-16T09:37:56Z
--- datasets: - multi_nli - snli language: en license: apache-2.0 metrics: - accuracy pipeline_tag: zero-shot-classification tags: - microsoft/deberta-v3-xsmall --- # Cross-Encoder for Natural Language Inference This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. It is based on [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall). ## Training Data The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral. ## Performance - Accuracy on SNLI-test dataset: 91.64 - Accuracy on MNLI mismatched set: 87.77 For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli). ## Usage Pre-trained models can be used like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('cross-encoder/nli-deberta-v3-xsmall') scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')]) # Convert scores to labels label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)] ``` ## Usage with Transformers AutoModel You can also use the model directly with the Transformers library (without the SentenceTransformers library): ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-xsmall') tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-xsmall') features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits label_mapping = ['contradiction', 'entailment', 'neutral'] labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)] print(labels) ``` ## Zero-Shot Classification This model can also be used for zero-shot classification: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-xsmall') sent = "Apple just announced the newest iPhone X" candidate_labels = ["technology", "sports", "politics"] res = classifier(sent, candidate_labels) print(res) ```
navteca/ms-marco-MiniLM-L-6-v2
navteca
2022-03-16T09:36:49Z
103,091
2
sentence-transformers
[ "sentence-transformers", "pytorch", "jax", "bert", "text-classification", "en", "license:mit", "region:us" ]
text-classification
2022-03-16T09:26:53Z
--- language: en license: mit pipeline_tag: text-classification tags: - sentence-transformers --- # Cross-Encoder for MS Marco The model can be used for Information Retrieval: given a query, encode the query with all possible passages (e.g. retrieved with Elasticsearch), then sort the passages in decreasing order of score. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Training Data This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. ## Usage Usage is easiest with [SentenceTransformers](https://www.sbert.net/) installed. Then you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset. 
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 |
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 |
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 |
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 |
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 |
| **Version 1 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 |
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 |
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 |
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 |
| **Other models** | | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 |
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 |
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 |
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 |
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 |
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 |

Note: Runtime was computed on a V100 GPU.
datarpit/distilbert-base-uncased-finetuned-natural-questions
datarpit
2022-03-16T07:52:09Z
91
3
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:natural_questions", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-08T20:12:53Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - natural_questions model-index: - name: distilbert-base-uncased-finetuned-natural-questions results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-natural-questions This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the natural_questions dataset. It achieves the following results on the evaluation set: - Loss: 0.6267 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 40 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 2.0532 | 1.0 | 5104 | 0.2393 | | 1.8912 | 2.0 | 10208 | 0.2284 | | 1.7854 | 3.0 | 15312 | 0.2357 | | 1.6856 | 4.0 | 20416 | 0.2487 | | 1.5918 | 5.0 | 25520 | 0.2743 | | 1.5067 | 6.0 | 30624 | 0.2586 | | 1.4323 | 7.0 | 35728 | 0.2763 | | 1.365 | 8.0 | 40832 | 0.2753 | | 1.3162 | 9.0 | 45936 | 0.3200 | | 1.281 | 10.0 | 51040 | 0.3127 | | 1.308 | 11.0 | 57104 | 0.2947 | | 1.241 | 12.0 | 62208 | 0.2941 | | 1.1391 | 13.0 | 67312 | 0.3103 | | 1.0334 | 14.0 | 72416 | 0.3694 | | 0.9538 | 15.0 | 77520 | 0.3658 | | 0.8749 | 16.0 | 82624 | 0.4009 | | 0.8154 | 17.0 | 87728 | 0.3672 | | 0.7533 | 18.0 | 92832 | 0.3675 | | 0.7079 | 19.0 | 97936 | 0.4611 | | 0.6658 | 20.0 | 103040 | 0.4222 | | 0.595 | 21.0 | 108144 | 0.4095 | | 0.5765 | 22.0 | 113248 | 0.4400 | | 0.5259 | 23.0 | 118352 | 0.5109 | 
| 0.4804 | 24.0 | 123456 | 0.4711 | | 0.4389 | 25.0 | 128560 | 0.5072 | | 0.4034 | 26.0 | 133664 | 0.5363 | | 0.374 | 27.0 | 138768 | 0.5460 | | 0.3434 | 28.0 | 143872 | 0.5627 | | 0.3181 | 29.0 | 148976 | 0.5657 | | 0.2971 | 30.0 | 154080 | 0.5819 | | 0.275 | 31.0 | 159184 | 0.5649 | | 0.2564 | 32.0 | 164288 | 0.6087 | | 0.2431 | 33.0 | 169392 | 0.6137 | | 0.2289 | 34.0 | 174496 | 0.6123 | | 0.2151 | 35.0 | 179600 | 0.5979 | | 0.2041 | 36.0 | 184704 | 0.6196 | | 0.1922 | 37.0 | 189808 | 0.6191 | | 0.1852 | 38.0 | 194912 | 0.6313 | | 0.1718 | 39.0 | 200016 | 0.6234 | | 0.1718 | 39.81 | 204160 | 0.6267 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0 - Datasets 1.18.4 - Tokenizers 0.11.6
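The card above reports only losses; a minimal extractive-QA sketch for this checkpoint might look like the following. The question and context are invented illustrations, not from the card:

```python
from transformers import pipeline

# Extractive question answering with the fine-tuned DistilBERT checkpoint;
# the pipeline returns the answer span found inside the given context.
qa = pipeline(
    "question-answering",
    model="datarpit/distilbert-base-uncased-finetuned-natural-questions",
)

result = qa(
    question="What does BERT stand for?",
    context="BERT stands for Bidirectional Encoder Representations from Transformers.",
)
print(result["answer"], round(result["score"], 3))
```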
ScandinavianMrT/gpt2_prefinetune_SARC_1epoch_withcontext
ScandinavianMrT
2022-03-16T07:23:51Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-16T06:24:23Z
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2_prefinetune_SARC_1epoch_withcontext results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2_prefinetune_SARC_1epoch_withcontext This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7899 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.8788 | 1.0 | 14028 | 3.7899 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
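The card lists training hyperparameters but no inference example. A text-generation sketch could look like this; the prompt is an invented illustration, and padding with the EOS token is an assumption to silence GPT-2's missing-pad-token warning:

```python
from transformers import pipeline

# Sample a continuation from the fine-tuned GPT-2 checkpoint.
generator = pipeline(
    "text-generation",
    model="ScandinavianMrT/gpt2_prefinetune_SARC_1epoch_withcontext",
)

out = generator(
    "Oh great, another Monday.",
    max_new_tokens=30,
    do_sample=True,
    pad_token_id=generator.tokenizer.eos_token_id,
)
print(out[0]["generated_text"])
```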
Ravikantcool2022/Ethereum.wiki
Ravikantcool2022
2022-03-16T05:08:13Z
0
1
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-16T05:08:13Z
--- license: apache-2.0 ---
lijingxin/bert-base-uncased-issues-128
lijingxin
2022-03-16T03:19:04Z
4
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-15T15:32:40Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-issues-128 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-issues-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2540 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0981 | 1.0 | 291 | 1.6917 | | 1.6493 | 2.0 | 582 | 1.4357 | | 1.4831 | 3.0 | 873 | 1.3923 | | 1.3957 | 4.0 | 1164 | 1.4056 | | 1.3339 | 5.0 | 1455 | 1.1944 | | 1.2936 | 6.0 | 1746 | 1.2888 | | 1.2458 | 7.0 | 2037 | 1.2715 | | 1.2004 | 8.0 | 2328 | 1.1992 | | 1.1785 | 9.0 | 2619 | 1.1726 | | 1.1389 | 10.0 | 2910 | 1.2157 | | 1.1313 | 11.0 | 3201 | 1.1977 | | 1.0935 | 12.0 | 3492 | 1.1794 | | 1.0826 | 13.0 | 3783 | 1.2260 | | 1.0729 | 14.0 | 4074 | 1.1549 | | 1.0599 | 15.0 | 4365 | 1.1269 | | 1.0538 | 16.0 | 4656 | 1.2540 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2 - Datasets 1.16.1 - Tokenizers 0.10.3
kSaluja/roberta-finetuned-ner-without-data-sort
kSaluja
2022-03-16T01:27:44Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-16T00:41:56Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: roberta-finetuned-ner-without-data-sort results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-ner-without-data-sort This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0420 - Precision: 0.9914 - Recall: 0.9909 - F1: 0.9912 - Accuracy: 0.9920 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 213 | 0.1879 | 0.9378 | 0.9414 | 0.9396 | 0.9493 | | No log | 2.0 | 426 | 0.1038 | 0.9725 | 0.9750 | 0.9737 | 0.9751 | | 0.4424 | 3.0 | 639 | 0.0701 | 0.9861 | 0.9851 | 0.9856 | 0.9863 | | 0.4424 | 4.0 | 852 | 0.0637 | 0.9882 | 0.9880 | 0.9881 | 0.9880 | | 0.0675 | 5.0 | 1065 | 0.0546 | 0.9851 | 0.9878 | 0.9865 | 0.9879 | | 0.0675 | 6.0 | 1278 | 0.0480 | 0.9894 | 0.9904 | 0.9899 | 0.9901 | | 0.0675 | 7.0 | 1491 | 0.0473 | 0.9919 | 0.9904 | 0.9912 | 0.9911 | | 0.0426 | 8.0 | 1704 | 0.0441 | 0.9921 | 0.9916 | 0.9919 | 0.9921 | | 0.0426 | 9.0 | 1917 | 0.0426 | 0.9921 | 0.9916 | 0.9919 | 0.9922 | | 0.033 | 10.0 | 2130 | 0.0420 | 0.9914 | 0.9909 | 0.9912 | 0.9920 | ### 
Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
sap-ai-research/RoBERTa-base-SCD-ACL2022
sap-ai-research
2022-03-16T00:41:41Z
7
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "license:apache-2.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-15T23:32:07Z
--- license: apache-2.0 ---
golivaresm/roberta-base-bne-finetuned-amazon_reviews_multi
golivaresm
2022-03-16T00:34:07Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-15T23:34:51Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - amazon_reviews_multi metrics: - accuracy model-index: - name: roberta-base-bne-finetuned-amazon_reviews_multi results: - task: name: Text Classification type: text-classification dataset: name: amazon_reviews_multi type: amazon_reviews_multi args: es metrics: - name: Accuracy type: accuracy value: 0.93125 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-amazon_reviews_multi This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.2328 - Accuracy: 0.9313 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1985 | 1.0 | 1250 | 0.1730 | 0.9327 | | 0.0982 | 2.0 | 2500 | 0.2328 | 0.9313 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
kSaluja/roberta-finetuned-ner
kSaluja
2022-03-16T00:00:41Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-15T23:20:13Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: roberta-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-ner This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1322 - Precision: 0.9772 - Recall: 0.9782 - F1: 0.9777 - Accuracy: 0.9767 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 253 | 0.1694 | 0.9636 | 0.9555 | 0.9595 | 0.9617 | | 0.4479 | 2.0 | 506 | 0.1374 | 0.9743 | 0.9762 | 0.9752 | 0.9743 | | 0.4479 | 3.0 | 759 | 0.1322 | 0.9772 | 0.9782 | 0.9777 | 0.9767 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
AnnaR/literature_summarizer
AnnaR
2022-03-15T23:54:39Z
5
0
transformers
[ "transformers", "tf", "bart", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-15T23:47:38Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: AnnaR/literature_summarizer results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # AnnaR/literature_summarizer This model is a fine-tuned version of [sshleifer/distilbart-xsum-1-1](https://huggingface.co/sshleifer/distilbart-xsum-1-1) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.2180 - Validation Loss: 4.7198 - Epoch: 10 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 5300, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.1} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 5.6694 | 5.0234 | 0 | | 4.9191 | 4.8161 | 1 | | 4.5770 | 4.7170 | 2 | | 4.3268 | 4.6571 | 3 | | 4.1073 | 4.6296 | 4 | | 3.9225 | 4.6279 | 5 | | 3.7564 | 4.6288 | 6 | | 3.5989 | 4.6731 | 7 | | 3.4611 | 4.6767 | 8 | | 3.3356 | 4.6934 | 9 | | 3.2180 | 4.7198 | 10 | ### Framework versions - Transformers 4.17.0 - TensorFlow 2.8.0 - Datasets 2.0.0 - Tokenizers 0.11.6
krinal214/bert-3lang
krinal214
2022-03-15T23:30:47Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:tydiqa", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-15T23:17:42Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - tydiqa model-index: - name: bert-3lang results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-3lang This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tydiqa dataset. It achieves the following results on the evaluation set: - Loss: 0.6422 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.8161 | 1.0 | 905 | 0.6422 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 2.0.0 - Tokenizers 0.10.3
responsibility-framing/predict-perception-xlmr-focus-concept
responsibility-framing
2022-03-15T23:28:40Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-15T23:23:34Z
--- license: mit tags: - generated_from_trainer model-index: - name: predict-perception-xlmr-focus-concept results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # predict-perception-xlmr-focus-concept This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8296 - Rmse: 1.0302 - Rmse Focus::a Su un concetto astratto o un'emozione: 1.0302 - Mae: 0.7515 - Mae Focus::a Su un concetto astratto o un'emozione: 0.7515 - R2: 0.1804 - R2 Focus::a Su un concetto astratto o un'emozione: 0.1804 - Cos: 0.4783 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.3415 - Rsa: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 20 - eval_batch_size: 8 - seed: 1996 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Su un concetto astratto o un'emozione | Mae | Mae Focus::a Su un concetto astratto o un'emozione | R2 | R2 Focus::a Su un concetto astratto o un'emozione | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------------------------------:|:------:|:--------------------------------------------------:|:-------:|:-------------------------------------------------:|:------:|:----:|:----:|:---------:|:---:| | 1.0355 | 1.0 | 15 | 0.9822 | 1.1209 | 1.1209 | 0.9649 | 0.9649 | 0.0296 | 0.0296 | 0.2174 | 0.0 | 0.5 | 0.3706 | nan | | 1.0083 | 2.0 | 30 | 
1.1378 | 1.2065 | 1.2065 | 0.9954 | 0.9954 | -0.1241 | -0.1241 | 0.2174 | 0.0 | 0.5 | 0.3309 | nan | | 0.9823 | 3.0 | 45 | 0.9669 | 1.1121 | 1.1121 | 0.9315 | 0.9315 | 0.0448 | 0.0448 | 0.3043 | 0.0 | 0.5 | 0.3810 | nan | | 0.9468 | 4.0 | 60 | 0.8856 | 1.0644 | 1.0644 | 0.8584 | 0.8584 | 0.1251 | 0.1251 | 0.3913 | 0.0 | 0.5 | 0.3803 | nan | | 0.9294 | 5.0 | 75 | 0.8136 | 1.0202 | 1.0202 | 0.8396 | 0.8396 | 0.1963 | 0.1963 | 0.6522 | 0.0 | 0.5 | 0.4727 | nan | | 0.881 | 6.0 | 90 | 0.7634 | 0.9882 | 0.9882 | 0.8192 | 0.8192 | 0.2458 | 0.2458 | 0.6522 | 0.0 | 0.5 | 0.4727 | nan | | 0.7589 | 7.0 | 105 | 0.8139 | 1.0204 | 1.0204 | 0.8136 | 0.8136 | 0.1960 | 0.1960 | 0.5652 | 0.0 | 0.5 | 0.4120 | nan | | 0.7217 | 8.0 | 120 | 0.9105 | 1.0792 | 1.0792 | 0.9394 | 0.9394 | 0.1005 | 0.1005 | 0.3913 | 0.0 | 0.5 | 0.4108 | nan | | 0.8059 | 9.0 | 135 | 1.0322 | 1.1491 | 1.1491 | 0.9115 | 0.9115 | -0.0197 | -0.0197 | 0.5652 | 0.0 | 0.5 | 0.3738 | nan | | 0.6483 | 10.0 | 150 | 0.7989 | 1.0109 | 1.0109 | 0.7899 | 0.7899 | 0.2108 | 0.2108 | 0.6522 | 0.0 | 0.5 | 0.4727 | nan | | 0.5725 | 11.0 | 165 | 0.7175 | 0.9581 | 0.9581 | 0.7011 | 0.7011 | 0.2912 | 0.2912 | 0.5652 | 0.0 | 0.5 | 0.3738 | nan | | 0.5091 | 12.0 | 180 | 0.8818 | 1.0621 | 1.0621 | 0.8775 | 0.8775 | 0.1289 | 0.1289 | 0.5652 | 0.0 | 0.5 | 0.4063 | nan | | 0.4526 | 13.0 | 195 | 0.8451 | 1.0398 | 1.0398 | 0.7990 | 0.7990 | 0.1651 | 0.1651 | 0.5652 | 0.0 | 0.5 | 0.4063 | nan | | 0.361 | 14.0 | 210 | 0.8632 | 1.0508 | 1.0508 | 0.8124 | 0.8124 | 0.1472 | 0.1472 | 0.4783 | 0.0 | 0.5 | 0.3699 | nan | | 0.3582 | 15.0 | 225 | 0.8461 | 1.0404 | 1.0404 | 0.7923 | 0.7923 | 0.1641 | 0.1641 | 0.3913 | 0.0 | 0.5 | 0.3672 | nan | | 0.2945 | 16.0 | 240 | 0.9142 | 1.0814 | 1.0814 | 0.8125 | 0.8125 | 0.0968 | 0.0968 | 0.3913 | 0.0 | 0.5 | 0.3672 | nan | | 0.2891 | 17.0 | 255 | 0.8377 | 1.0352 | 1.0352 | 0.7718 | 0.7718 | 0.1724 | 0.1724 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan | | 0.2569 | 18.0 | 270 | 0.8106 | 1.0183 | 1.0183 | 0.7481 | 
0.7481 | 0.1992 | 0.1992 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan | | 0.2583 | 19.0 | 285 | 0.8239 | 1.0266 | 1.0266 | 0.7597 | 0.7597 | 0.1861 | 0.1861 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan | | 0.2217 | 20.0 | 300 | 0.8485 | 1.0419 | 1.0419 | 0.7663 | 0.7663 | 0.1617 | 0.1617 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan | | 0.1927 | 21.0 | 315 | 0.8304 | 1.0307 | 1.0307 | 0.7536 | 0.7536 | 0.1797 | 0.1797 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan | | 0.176 | 22.0 | 330 | 0.8321 | 1.0317 | 1.0317 | 0.7539 | 0.7539 | 0.1780 | 0.1780 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan | | 0.1639 | 23.0 | 345 | 0.7914 | 1.0062 | 1.0062 | 0.7460 | 0.7460 | 0.2182 | 0.2182 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan | | 0.177 | 24.0 | 360 | 0.8619 | 1.0500 | 1.0500 | 0.7725 | 0.7725 | 0.1486 | 0.1486 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan | | 0.1473 | 25.0 | 375 | 0.8101 | 1.0180 | 1.0180 | 0.7587 | 0.7587 | 0.1997 | 0.1997 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan | | 0.181 | 26.0 | 390 | 0.8038 | 1.0141 | 1.0141 | 0.7433 | 0.7433 | 0.2059 | 0.2059 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan | | 0.1679 | 27.0 | 405 | 0.7982 | 1.0105 | 1.0105 | 0.7248 | 0.7248 | 0.2115 | 0.2115 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan | | 0.1529 | 28.0 | 420 | 0.8282 | 1.0293 | 1.0293 | 0.7454 | 0.7454 | 0.1818 | 0.1818 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan | | 0.1822 | 29.0 | 435 | 0.8310 | 1.0311 | 1.0311 | 0.7512 | 0.7512 | 0.1790 | 0.1790 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan | | 0.1442 | 30.0 | 450 | 0.8296 | 1.0302 | 1.0302 | 0.7515 | 0.7515 | 0.1804 | 0.1804 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
responsibility-framing/predict-perception-xlmr-focus-object
responsibility-framing
2022-03-15T23:23:19Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-15T23:19:04Z
--- license: mit tags: - generated_from_trainer model-index: - name: predict-perception-xlmr-focus-object results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # predict-perception-xlmr-focus-object This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1927 - Rmse: 0.5495 - Rmse Focus::a Su un oggetto: 0.5495 - Mae: 0.4174 - Mae Focus::a Su un oggetto: 0.4174 - R2: 0.5721 - R2 Focus::a Su un oggetto: 0.5721 - Cos: 0.5652 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.5518 - Rsa: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 20 - eval_batch_size: 8 - seed: 1996 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Su un oggetto | Mae | Mae Focus::a Su un oggetto | R2 | R2 Focus::a Su un oggetto | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------:|:------:|:--------------------------:|:-------:|:-------------------------:|:-------:|:----:|:----:|:---------:|:---:| | 1.0316 | 1.0 | 15 | 0.6428 | 1.0035 | 1.0035 | 0.8806 | 0.8806 | -0.4272 | -0.4272 | -0.4783 | 0.0 | 0.5 | 0.5302 | nan | | 1.0005 | 2.0 | 30 | 0.4564 | 0.8456 | 0.8456 | 0.7078 | 0.7078 | -0.0134 | -0.0134 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan | | 0.9519 | 3.0 | 45 | 0.4151 | 0.8063 | 0.8063 | 0.6797 | 0.6797 | 0.0784 | 0.0784 | 0.1304 | 0.0 | 0.5 | 0.4888 | nan 
| | 0.92 | 4.0 | 60 | 0.3982 | 0.7898 | 0.7898 | 0.6516 | 0.6516 | 0.1159 | 0.1159 | 0.2174 | 0.0 | 0.5 | 0.5036 | nan | | 0.8454 | 5.0 | 75 | 0.2739 | 0.6550 | 0.6550 | 0.5292 | 0.5292 | 0.3919 | 0.3919 | 0.6522 | 0.0 | 0.5 | 0.4160 | nan | | 0.7247 | 6.0 | 90 | 0.2413 | 0.6148 | 0.6148 | 0.5347 | 0.5347 | 0.4642 | 0.4642 | 0.4783 | 0.0 | 0.5 | 0.3453 | nan | | 0.6055 | 7.0 | 105 | 0.3109 | 0.6978 | 0.6978 | 0.6115 | 0.6115 | 0.3098 | 0.3098 | 0.4783 | 0.0 | 0.5 | 0.4154 | nan | | 0.5411 | 8.0 | 120 | 0.3932 | 0.7848 | 0.7848 | 0.6712 | 0.6712 | 0.1271 | 0.1271 | 0.4783 | 0.0 | 0.5 | 0.4154 | nan | | 0.4784 | 9.0 | 135 | 0.1316 | 0.4540 | 0.4540 | 0.3750 | 0.3750 | 0.7079 | 0.7079 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan | | 0.4039 | 10.0 | 150 | 0.2219 | 0.5896 | 0.5896 | 0.4954 | 0.4954 | 0.5074 | 0.5074 | 0.5652 | 0.0 | 0.5 | 0.4838 | nan | | 0.3415 | 11.0 | 165 | 0.1935 | 0.5505 | 0.5505 | 0.4443 | 0.4443 | 0.5704 | 0.5704 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan | | 0.3369 | 12.0 | 180 | 0.2118 | 0.5761 | 0.5761 | 0.4554 | 0.4554 | 0.5296 | 0.5296 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan | | 0.3083 | 13.0 | 195 | 0.1928 | 0.5496 | 0.5496 | 0.4368 | 0.4368 | 0.5718 | 0.5718 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan | | 0.2678 | 14.0 | 210 | 0.2205 | 0.5877 | 0.5877 | 0.4472 | 0.4472 | 0.5105 | 0.5105 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan | | 0.2199 | 15.0 | 225 | 0.2118 | 0.5760 | 0.5760 | 0.4689 | 0.4689 | 0.5297 | 0.5297 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan | | 0.2238 | 16.0 | 240 | 0.2461 | 0.6209 | 0.6209 | 0.5047 | 0.5047 | 0.4537 | 0.4537 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan | | 0.2233 | 17.0 | 255 | 0.2307 | 0.6011 | 0.6011 | 0.4618 | 0.4618 | 0.4879 | 0.4879 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan | | 0.1903 | 18.0 | 270 | 0.2207 | 0.5880 | 0.5880 | 0.4432 | 0.4432 | 0.5100 | 0.5100 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan | | 0.1714 | 19.0 | 285 | 0.2146 | 0.5798 | 0.5798 | 0.4368 | 0.4368 | 0.5236 | 0.5236 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan | | 0.1759 | 20.0 | 300 | 0.1745 | 0.5228 
| 0.5228 | 0.4152 | 0.4152 | 0.6126 | 0.6126 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan | | 0.1505 | 21.0 | 315 | 0.1944 | 0.5519 | 0.5519 | 0.4170 | 0.4170 | 0.5684 | 0.5684 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan | | 0.1467 | 22.0 | 330 | 0.1802 | 0.5313 | 0.5313 | 0.3910 | 0.3910 | 0.5999 | 0.5999 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan | | 0.1441 | 23.0 | 345 | 0.2360 | 0.6081 | 0.6081 | 0.4755 | 0.4755 | 0.4760 | 0.4760 | 0.4783 | 0.0 | 0.5 | 0.4938 | nan | | 0.1553 | 24.0 | 360 | 0.2129 | 0.5774 | 0.5774 | 0.4539 | 0.4539 | 0.5274 | 0.5274 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan | | 0.1163 | 25.0 | 375 | 0.1780 | 0.5281 | 0.5281 | 0.3952 | 0.3952 | 0.6048 | 0.6048 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan | | 0.1266 | 26.0 | 390 | 0.2163 | 0.5821 | 0.5821 | 0.4569 | 0.4569 | 0.5198 | 0.5198 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan | | 0.1416 | 27.0 | 405 | 0.1829 | 0.5352 | 0.5352 | 0.4082 | 0.4082 | 0.5939 | 0.5939 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan | | 0.1576 | 28.0 | 420 | 0.1930 | 0.5498 | 0.5498 | 0.4126 | 0.4126 | 0.5716 | 0.5716 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan | | 0.118 | 29.0 | 435 | 0.2070 | 0.5694 | 0.5694 | 0.4378 | 0.4378 | 0.5405 | 0.5405 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan | | 0.1179 | 30.0 | 450 | 0.1927 | 0.5495 | 0.5495 | 0.4174 | 0.4174 | 0.5721 | 0.5721 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
kSaluja/bert-finetuned-ner
kSaluja
2022-03-15T23:18:41Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-15T22:50:38Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1555 - Precision: 0.9681 - Recall: 0.9670 - F1: 0.9675 - Accuracy: 0.9687 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 253 | 0.1972 | 0.9467 | 0.9408 | 0.9437 | 0.9511 | | 0.3572 | 2.0 | 506 | 0.1626 | 0.9677 | 0.9614 | 0.9645 | 0.9661 | | 0.3572 | 3.0 | 759 | 0.1555 | 0.9681 | 0.9670 | 0.9675 | 0.9687 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
responsibility-framing/predict-perception-xlmr-focus-assassin
responsibility-framing
2022-03-15T23:13:17Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-15T23:08:52Z
--- license: mit tags: - generated_from_trainer model-index: - name: predict-perception-xlmr-focus-assassin results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # predict-perception-xlmr-focus-assassin This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3264 - Rmse: 0.9437 - Rmse Focus::a Sull'assassino: 0.9437 - Mae: 0.7093 - Mae Focus::a Sull'assassino: 0.7093 - R2: 0.6145 - R2 Focus::a Sull'assassino: 0.6145 - Cos: 0.7391 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.6131 - Rsa: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 20 - eval_batch_size: 8 - seed: 1996 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Sull'assassino | Mae | Mae Focus::a Sull'assassino | R2 | R2 Focus::a Sull'assassino | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:----------------------------:|:------:|:---------------------------:|:-------:|:--------------------------:|:-------:|:----:|:----:|:---------:|:---:| | 1.0403 | 1.0 | 15 | 1.1576 | 1.7771 | 1.7771 | 1.6028 | 1.6028 | -0.3670 | -0.3670 | -0.2174 | 0.0 | 0.5 | 0.2379 | nan | | 0.9818 | 2.0 | 30 | 0.8916 | 1.5596 | 1.5596 | 1.4136 | 1.4136 | -0.0529 | -0.0529 | 0.3913 | 0.0 | 0.5 | 0.3793 | nan | | 0.9276 | 3.0 | 45 | 0.9277 | 1.5909 | 1.5909 | 1.4560 | 1.4560 | -0.0955 | -0.0955 | 0.3913 | 0.0 | 0.5 
| 0.3742 | nan | | 0.8395 | 4.0 | 60 | 0.7958 | 1.4734 | 1.4734 | 1.3032 | 1.3032 | 0.0603 | 0.0603 | 0.5652 | 0.0 | 0.5 | 0.4598 | nan | | 0.7587 | 5.0 | 75 | 0.4647 | 1.1259 | 1.1259 | 0.9316 | 0.9316 | 0.4513 | 0.4513 | 0.6522 | 0.0 | 0.5 | 0.5087 | nan | | 0.696 | 6.0 | 90 | 0.5368 | 1.2101 | 1.2101 | 1.0847 | 1.0847 | 0.3661 | 0.3661 | 0.7391 | 0.0 | 0.5 | 0.5302 | nan | | 0.548 | 7.0 | 105 | 0.3110 | 0.9211 | 0.9211 | 0.7896 | 0.7896 | 0.6328 | 0.6328 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan | | 0.4371 | 8.0 | 120 | 0.3392 | 0.9619 | 0.9619 | 0.8132 | 0.8132 | 0.5995 | 0.5995 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan | | 0.355 | 9.0 | 135 | 0.3938 | 1.0366 | 1.0366 | 0.8153 | 0.8153 | 0.5349 | 0.5349 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.2919 | 10.0 | 150 | 0.3484 | 0.9749 | 0.9749 | 0.7487 | 0.7487 | 0.5886 | 0.5886 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.2595 | 11.0 | 165 | 0.2812 | 0.8759 | 0.8759 | 0.6265 | 0.6265 | 0.6679 | 0.6679 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.2368 | 12.0 | 180 | 0.2534 | 0.8314 | 0.8314 | 0.6402 | 0.6402 | 0.7008 | 0.7008 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.227 | 13.0 | 195 | 0.2878 | 0.8861 | 0.8861 | 0.6769 | 0.6769 | 0.6601 | 0.6601 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.1979 | 14.0 | 210 | 0.2405 | 0.8100 | 0.8100 | 0.6113 | 0.6113 | 0.7160 | 0.7160 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.1622 | 15.0 | 225 | 0.2575 | 0.8382 | 0.8382 | 0.6017 | 0.6017 | 0.6959 | 0.6959 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan | | 0.1575 | 16.0 | 240 | 0.2945 | 0.8963 | 0.8963 | 0.6741 | 0.6741 | 0.6523 | 0.6523 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan | | 0.1479 | 17.0 | 255 | 0.3563 | 0.9859 | 0.9859 | 0.7367 | 0.7367 | 0.5792 | 0.5792 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan | | 0.1269 | 18.0 | 270 | 0.2806 | 0.8750 | 0.8750 | 0.6665 | 0.6665 | 0.6686 | 0.6686 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan | | 0.1257 | 19.0 | 285 | 0.3267 | 0.9441 | 0.9441 | 0.6739 | 0.6739 | 0.6142 | 0.6142 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan | | 0.134 | 20.0 | 300 | 
0.3780 | 1.0155 | 1.0155 | 0.7331 | 0.7331 | 0.5536 | 0.5536 | 0.7391 | 0.0 | 0.5 | 0.5302 | nan | | 0.1171 | 21.0 | 315 | 0.3890 | 1.0301 | 1.0301 | 0.7444 | 0.7444 | 0.5406 | 0.5406 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan | | 0.0934 | 22.0 | 330 | 0.3131 | 0.9242 | 0.9242 | 0.6923 | 0.6923 | 0.6303 | 0.6303 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan | | 0.1112 | 23.0 | 345 | 0.2912 | 0.8913 | 0.8913 | 0.6610 | 0.6610 | 0.6561 | 0.6561 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan | | 0.1038 | 24.0 | 360 | 0.3109 | 0.9209 | 0.9209 | 0.7019 | 0.7019 | 0.6329 | 0.6329 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan | | 0.085 | 25.0 | 375 | 0.3469 | 0.9728 | 0.9728 | 0.7383 | 0.7383 | 0.5904 | 0.5904 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan | | 0.0843 | 26.0 | 390 | 0.3017 | 0.9073 | 0.9073 | 0.6848 | 0.6848 | 0.6437 | 0.6437 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.093 | 27.0 | 405 | 0.3269 | 0.9443 | 0.9443 | 0.7042 | 0.7042 | 0.6140 | 0.6140 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0846 | 28.0 | 420 | 0.3161 | 0.9286 | 0.9286 | 0.6937 | 0.6937 | 0.6267 | 0.6267 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0764 | 29.0 | 435 | 0.3244 | 0.9408 | 0.9408 | 0.7079 | 0.7079 | 0.6169 | 0.6169 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | | 0.0697 | 30.0 | 450 | 0.3264 | 0.9437 | 0.9437 | 0.7093 | 0.7093 | 0.6145 | 0.6145 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
responsibility-framing/predict-perception-xlmr-blame-object
responsibility-framing
2022-03-15T22:42:55Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-15T22:38:38Z
--- license: mit tags: - generated_from_trainer model-index: - name: predict-perception-xlmr-blame-object results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # predict-perception-xlmr-blame-object This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7219 - Rmse: 0.6215 - Rmse Blame::a Un oggetto: 0.6215 - Mae: 0.4130 - Mae Blame::a Un oggetto: 0.4130 - R2: 0.1200 - R2 Blame::a Un oggetto: 0.1200 - Cos: 0.3043 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.4335 - Rsa: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 20 - eval_batch_size: 8 - seed: 1996 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a Un oggetto | Mae | Mae Blame::a Un oggetto | R2 | R2 Blame::a Un oggetto | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------:|:------:|:-----------------------:|:-------:|:----------------------:|:-------:|:----:|:----:|:---------:|:---:| | 1.0279 | 1.0 | 15 | 0.8483 | 0.6737 | 0.6737 | 0.4761 | 0.4761 | -0.0341 | -0.0341 | -0.3043 | 0.0 | 0.5 | 0.5507 | nan | | 1.0676 | 2.0 | 30 | 0.7749 | 0.6439 | 0.6439 | 0.4291 | 0.4291 | 0.0554 | 0.0554 | 0.0435 | 0.0 | 0.5 | 0.2614 | nan | | 0.9563 | 3.0 | 45 | 0.7765 | 0.6446 | 0.6446 | 0.4349 | 0.4349 | 0.0535 | 0.0535 | -0.0435 | 0.0 | 0.5 | 0.4515 | nan | | 0.9622 | 4.0 | 60 | 
0.7443 | 0.6311 | 0.6311 | 0.4061 | 0.4061 | 0.0927 | 0.0927 | 0.1304 | 0.0 | 0.5 | 0.2933 | nan | | 0.948 | 5.0 | 75 | 0.8071 | 0.6571 | 0.6571 | 0.3817 | 0.3817 | 0.0162 | 0.0162 | 0.3043 | 0.0 | 0.5 | 0.4207 | nan | | 0.9532 | 6.0 | 90 | 0.8007 | 0.6546 | 0.6546 | 0.4585 | 0.4585 | 0.0239 | 0.0239 | -0.0435 | 0.0 | 0.5 | 0.5507 | nan | | 0.9101 | 7.0 | 105 | 0.7126 | 0.6175 | 0.6175 | 0.3649 | 0.3649 | 0.1313 | 0.1313 | 0.4783 | 0.0 | 0.5 | 0.6012 | nan | | 0.8369 | 8.0 | 120 | 0.7194 | 0.6204 | 0.6204 | 0.3896 | 0.3896 | 0.1231 | 0.1231 | 0.3913 | 0.0 | 0.5 | 0.3494 | nan | | 0.8062 | 9.0 | 135 | 0.7157 | 0.6188 | 0.6188 | 0.4192 | 0.4192 | 0.1275 | 0.1275 | 0.0435 | 0.0 | 0.5 | 0.3182 | nan | | 0.7344 | 10.0 | 150 | 0.7161 | 0.6190 | 0.6190 | 0.3612 | 0.3612 | 0.1270 | 0.1270 | 0.3043 | 0.0 | 0.5 | 0.6035 | nan | | 0.7439 | 11.0 | 165 | 0.5894 | 0.5616 | 0.5616 | 0.3723 | 0.3723 | 0.2816 | 0.2816 | 0.3043 | 0.0 | 0.5 | 0.3846 | nan | | 0.6241 | 12.0 | 180 | 0.7087 | 0.6158 | 0.6158 | 0.3972 | 0.3972 | 0.1361 | 0.1361 | 0.3043 | 0.0 | 0.5 | 0.3846 | nan | | 0.6123 | 13.0 | 195 | 0.6318 | 0.5814 | 0.5814 | 0.3673 | 0.3673 | 0.2298 | 0.2298 | 0.3913 | 0.0 | 0.5 | 0.4413 | nan | | 0.5364 | 14.0 | 210 | 0.6504 | 0.5899 | 0.5899 | 0.3674 | 0.3674 | 0.2072 | 0.2072 | 0.3043 | 0.0 | 0.5 | 0.3846 | nan | | 0.5586 | 15.0 | 225 | 0.7151 | 0.6186 | 0.6186 | 0.3850 | 0.3850 | 0.1283 | 0.1283 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan | | 0.5133 | 16.0 | 240 | 0.5572 | 0.5460 | 0.5460 | 0.3540 | 0.3540 | 0.3208 | 0.3208 | 0.4783 | 0.0 | 0.5 | 0.5314 | nan | | 0.4193 | 17.0 | 255 | 0.6047 | 0.5688 | 0.5688 | 0.3710 | 0.3710 | 0.2629 | 0.2629 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan | | 0.3504 | 18.0 | 270 | 0.6103 | 0.5714 | 0.5714 | 0.3687 | 0.3687 | 0.2561 | 0.2561 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan | | 0.3328 | 19.0 | 285 | 0.6181 | 0.5751 | 0.5751 | 0.3915 | 0.3915 | 0.2466 | 0.2466 | 0.4783 | 0.0 | 0.5 | 0.5314 | nan | | 0.3276 | 20.0 | 300 | 0.6334 | 0.5822 | 0.5822 | 0.3612 | 
0.3612 | 0.2279 | 0.2279 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan | | 0.3271 | 21.0 | 315 | 0.6200 | 0.5760 | 0.5760 | 0.3827 | 0.3827 | 0.2442 | 0.2442 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan | | 0.3139 | 22.0 | 330 | 0.6332 | 0.5821 | 0.5821 | 0.3723 | 0.3723 | 0.2281 | 0.2281 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan | | 0.2872 | 23.0 | 345 | 0.6694 | 0.5985 | 0.5985 | 0.3966 | 0.3966 | 0.1840 | 0.1840 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan | | 0.3617 | 24.0 | 360 | 0.7022 | 0.6130 | 0.6130 | 0.4061 | 0.4061 | 0.1440 | 0.1440 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan | | 0.3227 | 25.0 | 375 | 0.7364 | 0.6277 | 0.6277 | 0.4205 | 0.4205 | 0.1024 | 0.1024 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan | | 0.256 | 26.0 | 390 | 0.6938 | 0.6093 | 0.6093 | 0.3833 | 0.3833 | 0.1543 | 0.1543 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan | | 0.2605 | 27.0 | 405 | 0.7221 | 0.6216 | 0.6216 | 0.4036 | 0.4036 | 0.1198 | 0.1198 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan | | 0.2558 | 28.0 | 420 | 0.6959 | 0.6102 | 0.6102 | 0.3859 | 0.3859 | 0.1518 | 0.1518 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan | | 0.2403 | 29.0 | 435 | 0.7152 | 0.6186 | 0.6186 | 0.4088 | 0.4088 | 0.1281 | 0.1281 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan | | 0.3263 | 30.0 | 450 | 0.7219 | 0.6215 | 0.6215 | 0.4130 | 0.4130 | 0.1200 | 0.1200 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
responsibility-framing/predict-perception-xlmr-blame-victim
responsibility-framing
2022-03-15T22:38:23Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-15T22:33:07Z
--- license: mit tags: - generated_from_trainer model-index: - name: predict-perception-xlmr-blame-victim results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # predict-perception-xlmr-blame-victim This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1098 - Rmse: 0.6801 - Rmse Blame::a La vittima: 0.6801 - Mae: 0.5617 - Mae Blame::a La vittima: 0.5617 - R2: -1.5910 - R2 Blame::a La vittima: -1.5910 - Cos: -0.1304 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.3333 - Rsa: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 20 - eval_batch_size: 8 - seed: 1996 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a La vittima | Mae | Mae Blame::a La vittima | R2 | R2 Blame::a La vittima | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------:|:------:|:-----------------------:|:-------:|:----------------------:|:-------:|:----:|:----:|:---------:|:---:| | 1.0422 | 1.0 | 15 | 0.4952 | 0.4542 | 0.4542 | 0.4095 | 0.4095 | -0.1560 | -0.1560 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan | | 1.0434 | 2.0 | 30 | 0.4851 | 0.4496 | 0.4496 | 0.4054 | 0.4054 | -0.1324 | -0.1324 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan | | 1.038 | 3.0 | 45 | 0.4513 | 0.4337 | 0.4337 | 0.3885 | 0.3885 | -0.0536 | -0.0536 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan | | 1.0151 | 4.0 | 
60 | 0.4395 | 0.4280 | 0.4280 | 0.3840 | 0.3840 | -0.0262 | -0.0262 | -0.1304 | 0.0 | 0.5 | 0.2715 | nan | | 0.9727 | 5.0 | 75 | 0.4490 | 0.4325 | 0.4325 | 0.3811 | 0.3811 | -0.0482 | -0.0482 | 0.2174 | 0.0 | 0.5 | 0.3338 | nan | | 0.9733 | 6.0 | 90 | 0.4540 | 0.4349 | 0.4349 | 0.3860 | 0.3860 | -0.0598 | -0.0598 | -0.2174 | 0.0 | 0.5 | 0.3248 | nan | | 0.9396 | 7.0 | 105 | 0.4501 | 0.4331 | 0.4331 | 0.3849 | 0.3849 | -0.0508 | -0.0508 | 0.0435 | 0.0 | 0.5 | 0.2609 | nan | | 0.8759 | 8.0 | 120 | 0.4597 | 0.4377 | 0.4377 | 0.3849 | 0.3849 | -0.0731 | -0.0731 | 0.3043 | 0.0 | 0.5 | 0.3898 | nan | | 0.8768 | 9.0 | 135 | 0.4575 | 0.4366 | 0.4366 | 0.3784 | 0.3784 | -0.0680 | -0.0680 | 0.4783 | 0.0 | 0.5 | 0.4615 | nan | | 0.8312 | 10.0 | 150 | 0.5363 | 0.4727 | 0.4727 | 0.4071 | 0.4071 | -0.2520 | -0.2520 | -0.0435 | 0.0 | 0.5 | 0.2733 | nan | | 0.7296 | 11.0 | 165 | 0.5291 | 0.4696 | 0.4696 | 0.4057 | 0.4057 | -0.2353 | -0.2353 | 0.3043 | 0.0 | 0.5 | 0.3898 | nan | | 0.7941 | 12.0 | 180 | 0.5319 | 0.4708 | 0.4708 | 0.4047 | 0.4047 | -0.2417 | -0.2417 | 0.1304 | 0.0 | 0.5 | 0.3381 | nan | | 0.6486 | 13.0 | 195 | 0.6787 | 0.5318 | 0.5318 | 0.4516 | 0.4516 | -0.5846 | -0.5846 | 0.1304 | 0.0 | 0.5 | 0.3381 | nan | | 0.6241 | 14.0 | 210 | 1.0146 | 0.6502 | 0.6502 | 0.5580 | 0.5580 | -1.3687 | -1.3687 | -0.1304 | 0.0 | 0.5 | 0.3509 | nan | | 0.5868 | 15.0 | 225 | 0.7164 | 0.5464 | 0.5464 | 0.4682 | 0.4682 | -0.6725 | -0.6725 | -0.0435 | 0.0 | 0.5 | 0.3333 | nan | | 0.5305 | 16.0 | 240 | 0.9064 | 0.6146 | 0.6146 | 0.5173 | 0.5173 | -1.1161 | -1.1161 | -0.0435 | 0.0 | 0.5 | 0.3333 | nan | | 0.495 | 17.0 | 255 | 1.3860 | 0.7600 | 0.7600 | 0.6433 | 0.6433 | -2.2358 | -2.2358 | -0.0435 | 0.0 | 0.5 | 0.2935 | nan | | 0.566 | 18.0 | 270 | 0.7618 | 0.5634 | 0.5634 | 0.4730 | 0.4730 | -0.7785 | -0.7785 | 0.0435 | 0.0 | 0.5 | 0.3225 | nan | | 0.4305 | 19.0 | 285 | 0.8849 | 0.6072 | 0.6072 | 0.5048 | 0.5048 | -1.0659 | -1.0659 | -0.0435 | 0.0 | 0.5 | 0.3333 | nan | | 0.5108 | 20.0 | 
300 | 0.7376 | 0.5544 | 0.5544 | 0.4716 | 0.4716 | -0.7220 | -0.7220 | 0.0435 | 0.0 | 0.5 | 0.3225 | nan | | 0.44 | 21.0 | 315 | 1.1611 | 0.6956 | 0.6956 | 0.5921 | 0.5921 | -1.7108 | -1.7108 | -0.1304 | 0.0 | 0.5 | 0.3333 | nan | | 0.395 | 22.0 | 330 | 1.3004 | 0.7361 | 0.7361 | 0.6078 | 0.6078 | -2.0360 | -2.0360 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan | | 0.3945 | 23.0 | 345 | 0.9376 | 0.6251 | 0.6251 | 0.5272 | 0.5272 | -1.1890 | -1.1890 | -0.2174 | 0.0 | 0.5 | 0.3188 | nan | | 0.3093 | 24.0 | 360 | 1.3586 | 0.7524 | 0.7524 | 0.6219 | 0.6219 | -2.1719 | -2.1719 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan | | 0.2676 | 25.0 | 375 | 1.2200 | 0.7130 | 0.7130 | 0.5994 | 0.5994 | -1.8484 | -1.8484 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan | | 0.3257 | 26.0 | 390 | 1.2235 | 0.7140 | 0.7140 | 0.5900 | 0.5900 | -1.8564 | -1.8564 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan | | 0.4004 | 27.0 | 405 | 1.0978 | 0.6763 | 0.6763 | 0.5624 | 0.5624 | -1.5629 | -1.5629 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan | | 0.283 | 28.0 | 420 | 1.1454 | 0.6909 | 0.6909 | 0.5697 | 0.5697 | -1.6742 | -1.6742 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan | | 0.3326 | 29.0 | 435 | 1.1214 | 0.6836 | 0.6836 | 0.5646 | 0.5646 | -1.6181 | -1.6181 | -0.1304 | 0.0 | 0.5 | 0.3333 | nan | | 0.2632 | 30.0 | 450 | 1.1098 | 0.6801 | 0.6801 | 0.5617 | 0.5617 | -1.5910 | -1.5910 | -0.1304 | 0.0 | 0.5 | 0.3333 | nan | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
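For readers unfamiliar with the regression metrics reported above, here is a minimal pure-Python sketch of how Rmse, Mae, and R2 are computed (the prediction and target values below are made up for illustration, not taken from the actual eval set — an R2 below zero, as in this card, simply means the model fits worse than always predicting the target mean):

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute RMSE, MAE and R^2 as reported in the tables above."""
    n = len(y_true)
    errors = [p - t for p, t in zip(y_pred, y_true)]
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    mae = sum(abs(e) for e in errors) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - (mse * n) / ss_tot  # 1 - SS_res / SS_tot
    return rmse, mae, r2

# Illustrative values only: predictions this far off the targets
# produce a negative R^2, as seen in the evaluation table above.
rmse, mae, r2 = regression_metrics([1.0, 2.0, 3.0, 4.0], [2.5, 0.5, 4.5, 2.0])
```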
responsibility-framing/predict-perception-xlmr-blame-assassin
responsibility-framing
2022-03-15T22:32:51Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-15T22:28:27Z
--- license: mit tags: - generated_from_trainer model-index: - name: predict-perception-xlmr-blame-assassin results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # predict-perception-xlmr-blame-assassin This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4439 - Rmse: 0.9571 - Rmse Blame::a L'assassino: 0.9571 - Mae: 0.7260 - Mae Blame::a L'assassino: 0.7260 - R2: 0.6437 - R2 Blame::a L'assassino: 0.6437 - Cos: 0.7391 - Pair: 0.0 - Rank: 0.5 - Neighbors: 0.6287 - Rsa: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 20 - eval_batch_size: 8 - seed: 1996 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a L'assassino | Mae | Mae Blame::a L'assassino | R2 | R2 Blame::a L'assassino | Cos | Pair | Rank | Neighbors | Rsa | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------------------------:|:------:|:------------------------:|:------:|:-----------------------:|:------:|:----:|:----:|:---------:|:---:| | 1.0317 | 1.0 | 15 | 1.1311 | 1.5278 | 1.5278 | 1.3893 | 1.3893 | 0.0919 | 0.0919 | 0.5652 | 0.0 | 0.5 | 0.4512 | nan | | 0.9475 | 2.0 | 30 | 1.0795 | 1.4926 | 1.4926 | 1.3387 | 1.3387 | 0.1334 | 0.1334 | 0.8261 | 0.0 | 0.5 | 0.6184 | nan | | 0.9146 | 3.0 | 45 | 1.1092 | 1.5130 | 1.5130 | 1.4078 | 1.4078 | 0.1095 | 0.1095 | 0.4783 | 0.0 | 0.5 | 0.3116 | nan | | 0.9539 | 4.0 | 
60 | 1.1734 | 1.5561 | 1.5561 | 1.4238 | 1.4238 | 0.0580 | 0.0580 | 0.3913 | 0.0 | 0.5 | 0.3614 | nan | | 0.8665 | 5.0 | 75 | 0.8910 | 1.3560 | 1.3560 | 1.2350 | 1.2350 | 0.2847 | 0.2847 | 0.5652 | 0.0 | 0.5 | 0.4136 | nan | | 0.6564 | 6.0 | 90 | 0.8469 | 1.3220 | 1.3220 | 1.1570 | 1.1570 | 0.3201 | 0.3201 | 0.3913 | 0.0 | 0.5 | 0.3931 | nan | | 0.5241 | 7.0 | 105 | 0.6429 | 1.1519 | 1.1519 | 0.9757 | 0.9757 | 0.4838 | 0.4838 | 0.5652 | 0.0 | 0.5 | 0.4222 | nan | | 0.4589 | 8.0 | 120 | 0.5781 | 1.0923 | 1.0923 | 0.8714 | 0.8714 | 0.5359 | 0.5359 | 0.6522 | 0.0 | 0.5 | 0.4641 | nan | | 0.4043 | 9.0 | 135 | 0.4525 | 0.9664 | 0.9664 | 0.8257 | 0.8257 | 0.6367 | 0.6367 | 0.5652 | 0.0 | 0.5 | 0.4263 | nan | | 0.3498 | 10.0 | 150 | 0.4490 | 0.9627 | 0.9627 | 0.8272 | 0.8272 | 0.6395 | 0.6395 | 0.6522 | 0.0 | 0.5 | 0.5144 | nan | | 0.3505 | 11.0 | 165 | 0.3721 | 0.8763 | 0.8763 | 0.7471 | 0.7471 | 0.7013 | 0.7013 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.3426 | 12.0 | 180 | 0.4117 | 0.9218 | 0.9218 | 0.7477 | 0.7477 | 0.6695 | 0.6695 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.3074 | 13.0 | 195 | 0.3761 | 0.8810 | 0.8810 | 0.7109 | 0.7109 | 0.6981 | 0.6981 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.2261 | 14.0 | 210 | 0.3818 | 0.8877 | 0.8877 | 0.7042 | 0.7042 | 0.6935 | 0.6935 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.2399 | 15.0 | 225 | 0.3893 | 0.8964 | 0.8964 | 0.7108 | 0.7108 | 0.6874 | 0.6874 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.2014 | 16.0 | 240 | 0.4606 | 0.9750 | 0.9750 | 0.8046 | 0.8046 | 0.6302 | 0.6302 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.1937 | 17.0 | 255 | 0.4549 | 0.9689 | 0.9689 | 0.7679 | 0.7679 | 0.6348 | 0.6348 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.1831 | 18.0 | 270 | 0.4113 | 0.9213 | 0.9213 | 0.6746 | 0.6746 | 0.6698 | 0.6698 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.1758 | 19.0 | 285 | 0.4154 | 0.9259 | 0.9259 | 0.7053 | 0.7053 | 0.6665 | 0.6665 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.1577 | 20.0 | 300 | 0.3970 | 0.9051 | 0.9051 | 0.7163 
| 0.7163 | 0.6813 | 0.6813 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.1597 | 21.0 | 315 | 0.4199 | 0.9309 | 0.9309 | 0.7270 | 0.7270 | 0.6629 | 0.6629 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.1145 | 22.0 | 330 | 0.4250 | 0.9365 | 0.9365 | 0.6971 | 0.6971 | 0.6588 | 0.6588 | 0.8261 | 0.0 | 0.5 | 0.6594 | nan | | 0.1349 | 23.0 | 345 | 0.4168 | 0.9275 | 0.9275 | 0.7126 | 0.7126 | 0.6654 | 0.6654 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.1481 | 24.0 | 360 | 0.4421 | 0.9552 | 0.9552 | 0.7441 | 0.7441 | 0.6451 | 0.6451 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.1188 | 25.0 | 375 | 0.4356 | 0.9481 | 0.9481 | 0.7444 | 0.7444 | 0.6503 | 0.6503 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.1119 | 26.0 | 390 | 0.4456 | 0.9590 | 0.9590 | 0.7139 | 0.7139 | 0.6422 | 0.6422 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.1282 | 27.0 | 405 | 0.4456 | 0.9589 | 0.9589 | 0.7637 | 0.7637 | 0.6423 | 0.6423 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.142 | 28.0 | 420 | 0.4501 | 0.9637 | 0.9637 | 0.7146 | 0.7146 | 0.6387 | 0.6387 | 0.8261 | 0.0 | 0.5 | 0.6594 | nan | | 0.126 | 29.0 | 435 | 0.4442 | 0.9575 | 0.9575 | 0.7189 | 0.7189 | 0.6433 | 0.6433 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | | 0.1308 | 30.0 | 450 | 0.4439 | 0.9571 | 0.9571 | 0.7260 | 0.7260 | 0.6437 | 0.6437 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
huggingtweets/independentmlt-maltatoday-thetimesofmalta
huggingtweets
2022-03-15T22:00:58Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-15T21:42:12Z
--- language: en thumbnail: http://www.huggingtweets.com/independentmlt-maltatoday-thetimesofmalta/1647381547913/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1333858206012084227/XP6EKW-K_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1419612859244457987/Ph3kXUL3_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1338811551994826752/XQnrubON_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">MaltaToday & Times of Malta & The Malta Independent</div> <div style="text-align: center; font-size: 14px;">@independentmlt-maltatoday-thetimesofmalta</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). 
## Training data The model was trained on tweets from MaltaToday & Times of Malta & The Malta Independent. | Data | MaltaToday | Times of Malta | The Malta Independent | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3250 | 3250 | | Retweets | 1 | 0 | 5 | | Short tweets | 3 | 0 | 1 | | Tweets kept | 3246 | 3250 | 3244 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2z9a8ves/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @independentmlt-maltatoday-thetimesofmalta's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/117uvo5a) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/117uvo5a/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/independentmlt-maltatoday-thetimesofmalta') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
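As a sanity check on the "Training data" table above, "Tweets kept" is simply the download count minus the filtered-out retweets and short tweets — a small illustrative sketch using the numbers from the table:

```python
def tweets_kept(downloaded, retweets, short):
    # Kept tweets = downloaded minus the filtered-out retweets and short tweets.
    return downloaded - retweets - short

# Counts copied from the table above.
stats = {
    "MaltaToday":            (3250, 1, 3),
    "Times of Malta":        (3250, 0, 0),
    "The Malta Independent": (3250, 5, 1),
}
kept = {name: tweets_kept(*row) for name, row in stats.items()}
```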
huggingtweets/maltatoday-netnewsmalta-one_news_malta
huggingtweets
2022-03-15T21:21:32Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-15T21:18:16Z
--- language: en thumbnail: http://www.huggingtweets.com/maltatoday-netnewsmalta-one_news_malta/1647379141053/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1442160889596026883/gq6jcObz_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1047423145077030912/0B4-Tgba_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1333858206012084227/XP6EKW-K_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ONE news & NETnews & MaltaToday</div> <div style="text-align: center; font-size: 14px;">@maltatoday-netnewsmalta-one_news_malta</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). 
## Training data The model was trained on tweets from ONE news & NETnews & MaltaToday. | Data | ONE news | NETnews | MaltaToday | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3250 | 3250 | | Retweets | 0 | 0 | 1 | | Short tweets | 17 | 1 | 3 | | Tweets kept | 3233 | 3249 | 3246 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lme9vpn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @maltatoday-netnewsmalta-one_news_malta's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zkwd2sgh) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zkwd2sgh/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/maltatoday-netnewsmalta-one_news_malta') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/theshiftnews
huggingtweets
2022-03-15T20:56:54Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-15T20:56:05Z
--- language: en thumbnail: http://www.huggingtweets.com/theshiftnews/1647377809961/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1318831968352612352/blMpdUu4_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">The Shift News</div> <div style="text-align: center; font-size: 14px;">@theshiftnews</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from The Shift News. 
| Data | The Shift News | | --- | --- | | Tweets downloaded | 3216 | | Retweets | 446 | | Short tweets | 43 | | Tweets kept | 2727 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1k4siv5q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theshiftnews's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2cedhhrz) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2cedhhrz/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/theshiftnews') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/hampshireomen
huggingtweets
2022-03-15T20:52:01Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/hampshireomen/1647377480803/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1111434706745069575/7L1hshMt_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">the omen is cringe tbh</div> <div style="text-align: center; font-size: 14px;">@hampshireomen</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from the omen is cringe tbh. 
| Data | the omen is cringe tbh | | --- | --- | | Tweets downloaded | 1462 | | Retweets | 68 | | Short tweets | 109 | | Tweets kept | 1285 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1792rc86/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hampshireomen's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1y440us5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1y440us5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/hampshireomen') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Rustem/distilroberta-base-trainedmodel
Rustem
2022-03-15T19:32:36Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-15T19:28:05Z
--- license: apache-2.0 ---
Ebtihal/AraBertMo_base_V10
Ebtihal
2022-03-15T19:10:54Z
4
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-04T19:18:16Z
Arabic Model AraBertMo_base_V10 --- language: ar tags: Fill-Mask datasets: OSCAR widget: - text: " السلام عليكم ورحمة[MASK] وبركاتة" - text: " اهلا وسهلا بكم في [MASK] من سيربح المليون" - text: " مرحبا بك عزيزي الزائر [MASK] موقعنا " --- # Arabic BERT Model **AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config. AraBERTMo_base now comes in 10 new variants. All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch format. ## Pretraining Corpus The `AraBertMo_base_V10` model was pre-trained on ~3 million words: - [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar". ## Training results This model achieves the following results: | Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss| |:----:|:----:|:----:|:----:|:-----:|:----:|:-----:| | Fill-Mask| 30024| 10 | 64 | 4700 | 9h 13m 43s | 7.2395 | ## Load Pretrained Model You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and initializing it like this: ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V10") model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V10") ``` ## This model was built for master's degree research at: - [University of Kufa](https://uokufa.edu.iq/). - [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/). - **Department of Computer Science**
DrishtiSharma/poem-gen-t5-small
DrishtiSharma
2022-03-15T18:50:42Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-15T15:08:11Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: poem-gen-t5-small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # poem-gen-t5-small This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.1066 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.67 | 0.32 | 5000 | 3.4705 | | 3.573 | 0.63 | 10000 | 3.3747 | | 3.5075 | 0.95 | 15000 | 3.3154 | | 3.4486 | 1.26 | 20000 | 3.2704 | | 3.4207 | 1.58 | 25000 | 3.2351 | | 3.3933 | 1.89 | 30000 | 3.2069 | | 3.3612 | 2.21 | 35000 | 3.1853 | | 3.34 | 2.53 | 40000 | 3.1659 | | 3.3422 | 2.84 | 45000 | 3.1503 | | 3.3034 | 3.16 | 50000 | 3.1376 | | 3.2886 | 3.47 | 55000 | 3.1283 | | 3.2806 | 3.79 | 60000 | 3.1208 | | 3.2745 | 4.1 | 65000 | 3.1141 | | 3.2894 | 4.42 | 70000 | 3.1093 | | 3.264 | 4.74 | 75000 | 3.1075 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
spasis/marian-finetuned-kde4-en-to-fr
spasis
2022-03-15T17:39:40Z
5
0
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "generated_from_trainer", "dataset:kde4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-15T15:14:38Z
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 model-index: - name: marian-finetuned-kde4-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1 - Datasets 1.17.0 - Tokenizers 0.10.3
DrishtiSharma/wav2vec2-base-finetuned-ks
DrishtiSharma
2022-03-15T17:32:51Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-15T14:04:38Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - superb metrics: - accuracy model-index: - name: wav2vec2-base-finetuned-ks results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-ks This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.0817 - Accuracy: 0.9844 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6386 | 1.0 | 399 | 0.5305 | 0.9601 | | 0.2358 | 2.0 | 798 | 0.1774 | 0.9747 | | 0.1982 | 3.0 | 1197 | 0.1172 | 0.9794 | | 0.1554 | 4.0 | 1596 | 0.0884 | 0.9835 | | 0.1261 | 5.0 | 1995 | 0.0817 | 0.9844 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
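Since the model is fine-tuned on the SUPERB keyword-spotting task, inference would normally go through the `transformers` audio-classification pipeline. A sketch (the audio path is a placeholder; the clip should be 16 kHz speech, matching wav2vec2's expected sampling rate):

```python
from transformers import pipeline


def classify_keyword(audio_path, model_id="DrishtiSharma/wav2vec2-base-finetuned-ks"):
    """Return keyword-spotting predictions for a 16 kHz speech clip."""
    classifier = pipeline("audio-classification", model=model_id)
    # Returns a list of {"label": ..., "score": ...} dicts, highest score first.
    return classifier(audio_path)
```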
smartiros/BERT_for_sentiment_5k_2pcs_sampled_airlines_tweets
smartiros
2022-03-15T16:27:13Z
3
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-15T16:26:59Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: tmpny35efxx results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tmpny35efxx This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1996 - Train Accuracy: 0.9348 - Validation Loss: 0.8523 - Validation Accuracy: 0.7633 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.5865 | 0.7626 | 0.5505 | 0.8010 | 0 | | 0.1996 | 0.9348 | 0.8523 | 0.7633 | 1 | ### Framework versions - Transformers 4.17.0 - TensorFlow 2.6.0 - Tokenizers 0.11.6
mfleck/wav2vec2-large-xls-r-300m-slowenian-with-lm
mfleck
2022-03-15T16:15:30Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-15T15:01:45Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-slowenian-with-lm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-slowenian-with-lm This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3935 - Wer: 0.3480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 7.9937 | 2.5 | 100 | 3.1565 | 1.0 | | 3.0466 | 5.0 | 200 | 3.0009 | 0.9992 | | 2.9708 | 7.5 | 300 | 2.9494 | 0.9992 | | 2.0519 | 10.0 | 400 | 0.8874 | 0.7290 | | 0.5773 | 12.5 | 500 | 0.5258 | 0.5037 | | 0.3427 | 15.0 | 600 | 0.4767 | 0.4649 | | 0.2612 | 17.5 | 700 | 0.4549 | 0.4209 | | 0.212 | 20.0 | 800 | 0.4294 | 0.3860 | | 0.1748 | 22.5 | 900 | 0.4085 | 0.3769 | | 0.1587 | 25.0 | 1000 | 0.4017 | 0.3673 | | 0.1435 | 27.5 | 1100 | 0.3927 | 0.3538 | | 0.1314 | 30.0 | 1200 | 0.3935 | 0.3480 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.9.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
public-data/StyleSwin
public-data
2022-03-15T14:39:14Z
0
0
null
[ "region:us" ]
null
2022-03-15T14:29:57Z
# StyleSwin - Repo: https://github.com/microsoft/StyleSwin - https://drive.google.com/file/d/1OjYZ1zEWGNdiv0RFKv7KhXRmYko72LjO/view?usp=sharing - https://drive.google.com/file/d/1HF0wFNuz1WFrqGEbPhOXjL4QrY05Zu_m/view?usp=sharing - https://drive.google.com/file/d/1YtIJOgLFfkaMI_KL2gBQNABFb1cwOzvM/view?usp=sharing - https://drive.google.com/file/d/17-ILwzLBoHq4HTdAPeaCug7iBvxKWkvp/view?usp=sharing - https://drive.google.com/file/d/1y3wkykjvCbteTaGTRF8EedkG-N1Z8jFf/view?usp=sharing
clips/contact
clips
2022-03-15T12:57:53Z
22
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "arxiv:2203.07362", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
# CoNTACT ### Model description <u>Co</u>ntextual <u>N</u>eural <u>T</u>ransformer <u>A</u>dapted to <u>C</u>OVID-19 <u>T</u>weets or **CoNTACT** is a Dutch RobBERT model (```pdelobelle/robbert-v2-dutch-base```) adapted to the domain of COVID-19 tweets. The model was developed at [CLiPS](https://www.uantwerpen.be/en/research-groups/clips/) by Jens Lemmens, Jens Van Nooten, Tim Kreutz and Walter Daelemans. A full description of the model, the data that was used and the experiments that were conducted can be found in this ArXiv preprint: https://arxiv.org/abs/2203.07362 ### Intended use The model was developed with the intention of achieving high results on NLP tasks involving Dutch social media messages related to COVID-19. ### How to use CoNTACT should be fine-tuned on a downstream task. This can be achieved by referring to ```clips/contact``` in the ```--model_name_or_path``` argument in Huggingface/Transformers' example scripts, or by loading CoNTACT (as shown below) and fine-tuning it using your own code: ``` from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained('clips/contact') tokenizer = AutoTokenizer.from_pretrained('clips/contact') ... ``` ### Training data CoNTACT was trained on 2.8M Dutch tweets related to COVID-19 that were posted in 2021. ### Training Procedure The model's pre-training phase was extended by performing Masked Language Modeling (MLM) on the training data described above. This was done for 4 epochs, using the largest possible batch size that fit in working memory (32). ### Evaluation The model was evaluated on two tasks using data from two social media platforms: Twitter and Facebook. Task 1 involved the binary classification of COVID-19 vaccine stance (hesitant vs. not hesitant), whereas task 2 consisted of the multilabel, multiclass classification of arguments for vaccine hesitancy. CoNTACT outperformed out-of-the-box RobBERT in virtually all our experiments, and with statistical significance in most cases.
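As a rough illustration of the second fine-tuning route, a downstream setup might load CoNTACT with a freshly initialised task head attached. This is a sketch, not the authors' code: `num_labels=2` matches the binary vaccine-stance task described above, and the training loop itself is omitted.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer


def load_for_stance_detection(num_labels=2):
    """Load CoNTACT with a new classification head for fine-tuning.

    num_labels=2 fits the binary stance task; adjust for other downstream tasks.
    """
    tokenizer = AutoTokenizer.from_pretrained("clips/contact")
    model = AutoModelForSequenceClassification.from_pretrained(
        "clips/contact", num_labels=num_labels
    )
    return model, tokenizer
```

The returned model and tokenizer can then be passed to a standard `Trainer` or a custom PyTorch training loop.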
### How to cite ``` @misc{lemmens2022contact, title={CoNTACT: A Dutch COVID-19 Adapted BERT for Vaccine Hesitancy and Argumentation Detection}, author={Jens Lemmens and Jens Van Nooten and Tim Kreutz and Walter Daelemans}, year={2022}, eprint={2203.07362}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
mansidw/finetuning-sentiment-model-12000-samples
mansidw
2022-03-15T09:40:05Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:ag_news", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-14T19:40:20Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - ag_news model-index: - name: results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-12000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
Cedille/fr-boris
Cedille
2022-03-15T08:36:54Z
2,990
39
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "causal-lm", "fr", "dataset:c4", "arxiv:2202.03371", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: fr license: mit tags: - pytorch - causal-lm datasets: - c4 --- # Cedille AI Cedille is a project to bring large language models to non-English languages. ## fr-boris Boris is a 6B parameter autoregressive language model based on the GPT-J architecture and trained using the [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) codebase. Boris was trained on around 78B tokens of French text from the [C4](https://huggingface.co/datasets/c4) dataset. We started training from GPT-J, which has been trained on [The Pile](https://pile.eleuther.ai/). As a consequence the model still has good performance in the English language. Boris makes use of the unmodified GPT-2 tokenizer. Boris is named after the great French writer [Boris Vian](https://en.wikipedia.org/wiki/Boris_Vian). # How do I test Cedille? For the time being, the easiest way to test the model is to use our [publicly accessible playground](https://en.cedille.ai/). Cedille is a relatively large model and running it in production can get expensive. Consider contacting us for API access at hello@cedille.ai. ## 📊 Cedille paper Our paper is out now! https://arxiv.org/abs/2202.03371 Thanks for citing our work if you make use of Cedille ```bibtex @misc{muller2022cedille, title={Cedille: A large autoregressive French language model}, author={Martin M{\"{u}}ller and Florian Laurent}, year={2022}, eprint={2202.03371}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Contact us For any custom development please contact us at hello@cedille.ai. ## Links * [Official website](https://en.cedille.ai/) * [Blog](https://en.cedille.ai/blog) * [GitHub](https://github.com/coteries/cedille-ai) * [Twitter](https://twitter.com/CedilleAI)
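For local experimentation, a GPT-J-style checkpoint like fr-boris can be loaded with the standard `transformers` causal-LM classes. A minimal sketch (the sampling settings are illustrative assumptions, not the playground's configuration; at 6B parameters the model needs a large GPU or substantial RAM):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer


def generate_french(prompt, max_new_tokens=50):
    """Sample a French continuation from Cedille/fr-boris."""
    tokenizer = AutoTokenizer.from_pretrained("Cedille/fr-boris")
    model = AutoModelForCausalLM.from_pretrained("Cedille/fr-boris")
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    # do_sample/top_p are illustrative defaults; tune them for your use case.
    output = model.generate(
        input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_p=0.95
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)
```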
mjc00/distilbert-base-uncased-finetuned-emotion
mjc00
2022-03-15T05:48:00Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-15T05:23:44Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.924 - name: F1 type: f1 value: 0.924132235882821 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2153 - Accuracy: 0.924 - F1: 0.9241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7986 | 1.0 | 250 | 0.3021 | 0.91 | 0.9078 | | 0.2386 | 2.0 | 500 | 0.2153 | 0.924 | 0.9241 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_English
StivenLancheros
2022-03-14T23:42:29Z
3
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-14T22:56:59Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: biobert-base-cased-v1.2-finetuned-ner-CRAFT_English results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert-base-cased-v1.2-finetuned-ner-CRAFT_English This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1614 - Precision: 0.8585 - Recall: 0.8623 - F1: 0.8604 - Accuracy: 0.9724 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0725 | 1.0 | 1360 | 0.1242 | 0.8090 | 0.8698 | 0.8383 | 0.9681 | | 0.0281 | 2.0 | 2720 | 0.1541 | 0.8497 | 0.8549 | 0.8523 | 0.9705 | | 0.0162 | 3.0 | 4080 | 0.1510 | 0.8390 | 0.8681 | 0.8533 | 0.9711 | | 0.0053 | 4.0 | 5440 | 0.1614 | 0.8585 | 0.8623 | 0.8604 | 0.9724 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
peterhsu/codeparrot-ds
peterhsu
2022-03-14T23:00:48Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-14T15:52:25Z
--- license: mit tags: - generated_from_trainer model-index: - name: codeparrot-ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codeparrot-ds This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9729 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4939 | 0.93 | 5000 | 1.9729 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
sanchit-gandhi/wav2vec2-2-bart-large-no-adapter
sanchit-gandhi
2022-03-14T21:45:57Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "dataset:librispeech_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-14T12:33:35Z
--- tags: - generated_from_trainer datasets: - librispeech_asr model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model was trained from scratch on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 5.6120 - Wer: 1.0267 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.7189 | 0.56 | 500 | 6.9796 | 0.9350 | | 6.5068 | 1.12 | 1000 | 6.4823 | 1.3923 | | 6.4601 | 1.68 | 1500 | 6.1801 | 1.1578 | | 6.1802 | 2.24 | 2000 | 6.0002 | 1.7750 | | 6.0888 | 2.8 | 2500 | 5.8453 | 1.7581 | | 6.0993 | 3.36 | 3000 | 5.7702 | 1.4096 | | 6.0851 | 3.92 | 3500 | 5.6634 | 1.0944 | | 5.9357 | 4.48 | 4000 | 5.6120 | 1.0267 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
pyf98/librispeech_conformer_hop_length160
pyf98
2022-03-14T18:24:04Z
9
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-14T18:16:15Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - librispeech license: cc-by-4.0 --- ## ESPnet2 ASR model ### `pyf98/librispeech_conformer_hop_length160` This model was trained by Yifan Peng using librispeech recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 33edd1fc077f6a35e8cb0a59f208cb4564aa4cfb pip install -e . cd egs2/librispeech/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model pyf98/librispeech_conformer_hop_length160 ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Mon Mar 14 12:26:10 EDT 2022` - python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]` - espnet version: `espnet 0.10.7a1` - pytorch version: `pytorch 1.10.1` - Git hash: `467660021998c416ac366aed0f75f3399e321a3a` - Commit date: `Sun Mar 13 17:08:56 2022 -0400` ## asr_train_asr_conformer10_hop_length160_raw_en_bpe5000_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |beam60_ctc0.3/dev_clean|2703|54402|98.1|1.7|0.2|0.2|2.1|27.7| |beam60_ctc0.3/dev_other|2864|50948|95.3|4.3|0.4|0.5|5.2|44.1| |beam60_ctc0.3/test_clean|2620|52576|97.9|1.9|0.2|0.3|2.4|27.9| |beam60_ctc0.3/test_other|2939|52343|95.4|4.1|0.4|0.6|5.2|44.8| |beam60_ctc0.3_lm0.6/dev_clean|2703|54402|98.4|1.4|0.2|0.2|1.8|23.3| |beam60_ctc0.3_lm0.6/dev_other|2864|50948|96.4|3.2|0.4|0.4|3.9|36.2| |beam60_ctc0.3_lm0.6/test_clean|2620|52576|98.3|1.5|0.2|0.2|2.0|23.7| |beam60_ctc0.3_lm0.6/test_other|2939|52343|96.2|3.3|0.4|0.5|4.2|39.6| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |beam60_ctc0.3/dev_clean|2703|288456|99.5|0.3|0.2|0.2|0.7|27.7| |beam60_ctc0.3/dev_other|2864|265951|98.4|1.0|0.6|0.6|2.2|44.1| |beam60_ctc0.3/test_clean|2620|281530|99.4|0.3|0.3|0.2|0.8|27.9| |beam60_ctc0.3/test_other|2939|272758|98.5|0.9|0.7|0.6|2.1|44.8| 
|beam60_ctc0.3_lm0.6/dev_clean|2703|288456|99.5|0.2|0.2|0.2|0.6|23.3| |beam60_ctc0.3_lm0.6/dev_other|2864|265951|98.5|0.8|0.6|0.5|1.9|36.2| |beam60_ctc0.3_lm0.6/test_clean|2620|281530|99.5|0.2|0.3|0.2|0.7|23.7| |beam60_ctc0.3_lm0.6/test_other|2939|272758|98.6|0.7|0.7|0.5|1.9|39.6| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |beam60_ctc0.3/dev_clean|2703|68010|97.6|1.7|0.6|0.4|2.7|27.7| |beam60_ctc0.3/dev_other|2864|63110|94.2|4.3|1.5|0.9|6.7|44.1| |beam60_ctc0.3/test_clean|2620|65818|97.4|1.8|0.8|0.4|3.0|27.9| |beam60_ctc0.3/test_other|2939|65101|94.4|3.9|1.7|0.8|6.4|44.8| |beam60_ctc0.3_lm0.6/dev_clean|2703|68010|98.0|1.4|0.6|0.3|2.3|23.3| |beam60_ctc0.3_lm0.6/dev_other|2864|63110|95.2|3.4|1.4|0.6|5.5|36.2| |beam60_ctc0.3_lm0.6/test_clean|2620|65818|97.8|1.4|0.8|0.3|2.5|23.7| |beam60_ctc0.3_lm0.6/test_other|2939|65101|95.1|3.2|1.7|0.6|5.5|39.6| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_conformer10_hop_length160.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer10_hop_length160_raw_en_bpe5000_sp ngpu: 1 seed: 0 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 51595 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 50 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 4 no_forward_run: false resume: true train_dtype: float32 use_amp: true log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false 
wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 35000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe5000_sp/train/speech_shape - exp/asr_stats_raw_en_bpe5000_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_en_bpe5000_sp/valid/speech_shape - exp/asr_stats_raw_en_bpe5000_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_960_sp/wav.scp - speech - sound - - dump/raw/train_960_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - sound - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0025 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 40000 token_list: - <blank> - <unk> - โ–THE - S - โ–AND - โ–OF - โ–TO - โ–A - โ–IN - โ–I - โ–HE - โ–THAT - โ–WAS - ED - โ–IT - '''' - โ–HIS - ING - โ–YOU - โ–WITH - โ–FOR - โ–HAD - T - โ–AS - โ–HER - โ–IS - โ–BE - โ–BUT - โ–NOT - โ–SHE - D - โ–AT - โ–ON - LY - โ–HIM - โ–THEY - โ–ALL - โ–HAVE - โ–BY - โ–SO - โ–THIS - โ–MY - โ–WHICH - โ–ME - โ–SAID - โ–FROM - โ–ONE - Y - E - โ–WERE - โ–WE - โ–NO - N - โ–THERE - โ–OR - ER - โ–AN - โ–WHEN - โ–ARE - โ–THEIR - โ–WOULD - โ–IF - โ–WHAT - โ–THEM - โ–WHO - โ–OUT - M - โ–DO - โ–WILL - โ–UP - โ–BEEN - P - R - โ–MAN - โ–THEN - โ–COULD - โ–MORE - C - โ–INTO - โ–NOW - โ–VERY - โ–YOUR - โ–SOME - โ–LITTLE - ES - โ–TIME - RE - โ–CAN - โ–LIKE - LL - โ–ABOUT - โ–HAS - 
โ–THAN - โ–DID - โ–UPON - โ–OVER - IN - โ–ANY - โ–WELL - โ–ONLY - B - โ–SEE - โ–GOOD - โ–OTHER - โ–TWO - L - โ–KNOW - โ–GO - โ–DOWN - โ–BEFORE - A - AL - โ–OUR - โ–OLD - โ–SHOULD - โ–MADE - โ–AFTER - โ–GREAT - โ–DAY - โ–MUST - โ–COME - โ–HOW - โ–SUCH - โ–CAME - LE - โ–WHERE - โ–US - โ–NEVER - โ–THESE - โ–MUCH - โ–DE - โ–MISTER - โ–WAY - G - โ–S - โ–MAY - ATION - โ–LONG - OR - โ–AM - โ–FIRST - โ–BACK - โ–OWN - โ–RE - โ–AGAIN - โ–SAY - โ–MEN - โ–WENT - โ–HIMSELF - โ–HERE - NESS - โ–THINK - V - IC - โ–EVEN - โ–THOUGHT - โ–HAND - โ–JUST - โ–O - โ–UN - VE - ION - โ–ITS - 'ON' - โ–MAKE - โ–MIGHT - โ–TOO - K - โ–AWAY - โ–LIFE - TH - โ–WITHOUT - ST - โ–THROUGH - โ–MOST - โ–TAKE - โ–DON - โ–EVERY - F - O - โ–SHALL - โ–THOSE - โ–EYES - AR - โ–STILL - โ–LAST - โ–HOUSE - โ–HEAD - ABLE - โ–NOTHING - โ–NIGHT - ITY - โ–LET - โ–MANY - โ–OFF - โ–BEING - โ–FOUND - โ–WHILE - EN - โ–SAW - โ–GET - โ–PEOPLE - โ–FACE - โ–YOUNG - CH - โ–UNDER - โ–ONCE - โ–TELL - AN - โ–THREE - โ–PLACE - โ–ROOM - โ–YET - โ–SAME - IL - US - U - โ–FATHER - โ–RIGHT - EL - โ–THOUGH - โ–ANOTHER - LI - RI - โ–HEART - IT - โ–PUT - โ–TOOK - โ–GIVE - โ–EVER - โ–E - โ–PART - โ–WORK - ERS - โ–LOOK - โ–NEW - โ–KING - โ–MISSUS - โ–SIR - โ–LOVE - โ–MIND - โ–LOOKED - W - RY - โ–ASKED - โ–LEFT - ET - โ–LIGHT - CK - โ–DOOR - โ–MOMENT - RO - โ–WORLD - โ–THINGS - โ–HOME - UL - โ–THING - LA - โ–WHY - โ–MOTHER - โ–ALWAYS - โ–FAR - FUL - โ–WATER - CE - IVE - UR - โ–HEARD - โ–SOMETHING - โ–SEEMED - I - LO - โ–BECAUSE - OL - โ–END - โ–TOLD - โ–CON - โ–YES - โ–GOING - โ–GOT - RA - IR - โ–WOMAN - โ–GOD - EST - TED - โ–FIND - โ–KNEW - โ–SOON - โ–EACH - โ–SIDE - H - TON - MENT - โ–OH - NE - Z - LING - โ–AGAINST - TER - โ–NAME - โ–MISS - โ–QUITE - โ–WANT - โ–YEARS - โ–FEW - โ–BETTER - ENT - โ–HALF - โ–DONE - โ–ALSO - โ–BEGAN - โ–HAVING - โ–ENOUGH - IS - โ–LADY - โ–WHOLE - 
LESS - โ–BOTH - โ–SEEN - โ–SET - โ–WHITE - โ–COURSE - IES - โ–VOICE - โ–CALLED - โ–D - โ–EX - ATE - โ–TURNED - โ–GAVE - โ–C - โ–POOR - MAN - UT - NA - โ–DEAR - ISH - โ–GIRL - โ–MORNING - โ–BETWEEN - LED - โ–NOR - IA - โ–AMONG - MA - โ– - โ–SMALL - โ–REST - โ–WHOM - โ–FELT - โ–HANDS - โ–MYSELF - โ–HIGH - โ–M - โ–HOWEVER - โ–HERSELF - โ–P - CO - โ–STOOD - ID - โ–KIND - โ–HUNDRED - AS - โ–ROUND - โ–ALMOST - TY - โ–SINCE - โ–G - AM - โ–LA - SE - โ–BOY - โ–MA - โ–PERHAPS - โ–WORDS - ATED - โ–HO - X - โ–MO - โ–SAT - โ–REPLIED - โ–FOUR - โ–ANYTHING - โ–TILL - โ–UNTIL - โ–BLACK - TION - โ–CRIED - RU - TE - โ–FACT - โ–HELP - โ–NEXT - โ–LOOKING - โ–DOES - โ–FRIEND - โ–LAY - ANCE - โ–POWER - โ–BROUGHT - VER - โ–FIRE - โ–KEEP - PO - FF - โ–COUNTRY - โ–SEA - โ–WORD - โ–CAR - โ–DAYS - โ–TOGETHER - โ–IMP - โ–REASON - KE - โ–INDEED - TING - โ–MATTER - โ–FULL - โ–TEN - TIC - โ–LAND - โ–RATHER - โ–AIR - โ–HOPE - โ–DA - โ–OPEN - โ–FEET - โ–EN - โ–FIVE - โ–POINT - โ–CO - OM - โ–LARGE - โ–B - โ–CL - ME - โ–GONE - โ–CHILD - INE - GG - โ–BEST - โ–DIS - UM - โ–HARD - โ–LORD - OUS - โ–WIFE - โ–SURE - โ–FORM - DE - โ–DEATH - ANT - โ–NATURE - โ–BA - โ–CARE - โ–BELIEVE - PP - โ–NEAR - โ–RO - โ–RED - โ–WAR - IE - โ–SPEAK - โ–FEAR - โ–CASE - โ–TAKEN - โ–ALONG - โ–CANNOT - โ–HEAR - โ–THEMSELVES - CI - โ–PRESENT - AD - โ–MASTER - โ–SON - โ–THUS - โ–LI - โ–LESS - โ–SUN - โ–TRUE - IM - IOUS - โ–THOUSAND - โ–MONEY - โ–W - โ–BEHIND - โ–CHILDREN - โ–DOCTOR - AC - โ–TWENTY - โ–WISH - โ–SOUND - โ–WHOSE - โ–LEAVE - โ–ANSWERED - โ–THOU - โ–DUR - โ–HA - โ–CERTAIN - โ–PO - โ–PASSED - GE - TO - โ–ARM - โ–LO - โ–STATE - โ–ALONE - TA - โ–SHOW - โ–NEED - โ–LIVE - ND - โ–DEAD - ENCE - โ–STRONG - โ–PRE - โ–TI - โ–GROUND - SH - TI - โ–SHORT - IAN - UN - โ–PRO - โ–HORSE - MI - โ–PRINCE - ARD - โ–FELL - โ–ORDER - โ–CALL - AT - โ–GIVEN - โ–DARK - 
โ–THEREFORE - โ–CLOSE - โ–BODY - โ–OTHERS - โ–SENT - โ–SECOND - โ–OFTEN - โ–CA - โ–MANNER - MO - NI - โ–BRING - โ–QUESTION - โ–HOUR - โ–BO - AGE - โ–ST - โ–TURN - โ–TABLE - โ–GENERAL - โ–EARTH - โ–BED - โ–REALLY - โ–SIX - 'NO' - IST - โ–BECOME - โ–USE - โ–READ - โ–SE - โ–VI - โ–COMING - โ–EVERYTHING - โ–EM - โ–ABOVE - โ–EVENING - โ–BEAUTIFUL - โ–FEEL - โ–RAN - โ–LEAST - โ–LAW - โ–ALREADY - โ–MEAN - โ–ROSE - WARD - โ–ITSELF - โ–SOUL - โ–SUDDENLY - โ–AROUND - RED - โ–ANSWER - ICAL - โ–RA - โ–WIND - โ–FINE - โ–WON - โ–WHETHER - โ–KNOWN - BER - NG - โ–TA - โ–CAPTAIN - โ–EYE - โ–PERSON - โ–WOMEN - โ–SORT - โ–ASK - โ–BROTHER - โ–USED - โ–HELD - โ–BIG - โ–RETURNED - โ–STRANGE - โ–BU - โ–PER - โ–FREE - โ–EITHER - โ–WITHIN - โ–DOUBT - โ–YEAR - โ–CLEAR - โ–SIGHT - โ–GRA - โ–LOST - โ–KEPT - โ–F - PE - โ–BAR - โ–TOWN - โ–SLEEP - ARY - โ–HAIR - โ–FRIENDS - โ–DREAM - โ–FELLOW - PER - โ–DEEP - QUE - โ–BECAME - โ–REAL - โ–PAST - โ–MAKING - RING - โ–COMP - โ–ACT - โ–BAD - HO - STER - โ–YE - โ–MEANS - โ–RUN - MEN - โ–DAUGHTER - โ–SENSE - โ–CITY - โ–SOMETIMES - โ–TOWARDS - โ–ROAD - โ–SP - โ–LU - โ–READY - โ–FOOT - โ–COLD - โ–SA - โ–LETTER - โ–ELSE - โ–MAR - โ–STA - BE - โ–TRUTH - โ–LE - BO - โ–BUSINESS - CHE - โ–JOHN - โ–SUBJECT - โ–COURT - โ–IDEA - ILY - โ–RIVER - ATING - โ–FAMILY - HE - โ–DIDN - โ–GLAD - โ–SEVERAL - IAL - โ–UNDERSTAND - โ–SC - โ–POSSIBLE - โ–DIFFERENT - โ–RETURN - โ–ARMS - โ–LOW - โ–HOLD - โ–TALK - โ–RU - โ–WINDOW - โ–INTEREST - โ–SISTER - SON - โ–SH - โ–BLOOD - โ–SAYS - โ–CAP - โ–DI - โ–HUMAN - โ–CAUSE - NCE - โ–THANK - โ–LATE - GO - โ–CUT - โ–ACROSS - โ–STORY - NT - โ–COUNT - โ–ABLE - DY - LEY - โ–NUMBER - โ–STAND - โ–CHURCH - โ–THY - โ–SUPPOSE - LES - BLE - OP - โ–EFFECT - BY - โ–K - โ–NA - โ–SPOKE - โ–MET - โ–GREEN - โ–HUSBAND - โ–RESPECT - โ–PA - โ–FOLLOWED - โ–REMEMBER - โ–LONGER - โ–AGE - 
โ–TAKING - โ–LINE - โ–SEEM - โ–HAPPY - LAND - EM - โ–STAY - โ–PLAY - โ–COMMON - โ–GA - โ–BOOK - โ–TIMES - โ–OBJECT - โ–SEVEN - QUI - DO - UND - โ–FL - โ–PRETTY - โ–FAIR - WAY - โ–WOOD - โ–REACHED - โ–APPEARED - โ–SWEET - โ–FALL - BA - โ–PASS - โ–SIGN - โ–TREE - IONS - โ–GARDEN - โ–ILL - โ–ART - โ–REMAIN - โ–OPENED - โ–BRIGHT - โ–STREET - โ–TROUBLE - โ–PAIN - โ–CONTINUED - โ–SCHOOL - OUR - โ–CARRIED - โ–SAYING - HA - โ–CHANGE - โ–FOLLOW - โ–GOLD - โ–SW - โ–FEELING - โ–COMMAND - โ–BEAR - โ–CERTAINLY - โ–BLUE - โ–NE - CA - โ–WILD - โ–ACCOUNT - โ–OUGHT - UD - โ–T - โ–BREATH - โ–WANTED - โ–RI - โ–HEAVEN - โ–PURPOSE - โ–CHARACTER - โ–RICH - โ–PE - โ–DRESS - OS - FA - โ–TH - โ–ENGLISH - โ–CHANCE - โ–SHIP - โ–VIEW - โ–TOWARD - AK - โ–JOY - โ–JA - โ–HAR - โ–NEITHER - โ–FORCE - โ–UNCLE - DER - โ–PLAN - โ–PRINCESS - DI - โ–CHIEF - โ–HAT - โ–LIVED - โ–AB - โ–VISIT - โ–MOR - TEN - โ–WALL - UC - โ–MINE - โ–PLEASURE - โ–SMILE - โ–FRONT - โ–HU - โ–DEAL - OW - โ–FURTHER - GED - โ–TRIED - DA - VA - โ–NONE - โ–ENTERED - โ–QUEEN - โ–PAY - โ–EL - โ–EXCEPT - โ–SHA - โ–FORWARD - โ–EIGHT - โ–ADDED - โ–PUBLIC - โ–EIGHTEEN - โ–STAR - โ–HAPPENED - โ–LED - โ–WALKED - โ–ALTHOUGH - โ–LATER - โ–SPIRIT - โ–WALK - โ–BIT - โ–MEET - LIN - โ–FI - LT - โ–MOUTH - โ–WAIT - โ–HOURS - โ–LIVING - โ–YOURSELF - โ–FAST - โ–CHA - โ–HALL - โ–BEYOND - โ–BOAT - โ–SECRET - ENS - โ–CHAIR - RN - โ–RECEIVED - โ–CAT - RESS - โ–DESIRE - โ–GENTLEMAN - UGH - โ–LAID - EVER - โ–OCCASION - โ–WONDER - โ–GU - โ–PARTY - DEN - โ–FISH - โ–SEND - โ–NEARLY - โ–TRY - CON - โ–SEEMS - RS - โ–BELL - โ–BRA - โ–SILENCE - IG - โ–GUARD - โ–DIE - โ–DOING - โ–TU - โ–COR - โ–EARLY - โ–BANK - โ–FIGURE - IF - โ–ENGLAND - โ–MARY - โ–AFRAID - LER - โ–FO - โ–WATCH - โ–FA - โ–VA - โ–GRE - โ–AUNT - PED - โ–SERVICE - โ–JE - โ–PEN - โ–MINUTES - โ–PAN - โ–TREES - NED - โ–GLASS - โ–TONE - 
โ–PLEASE - โ–FORTH - โ–CROSS - โ–EXCLAIMED - โ–DREW - โ–EAT - โ–AH - โ–GRAVE - โ–CUR - PA - URE - CENT - โ–MILES - โ–SOFT - โ–AGO - โ–POSITION - โ–WARM - โ–LENGTH - โ–NECESSARY - โ–THINKING - โ–PICTURE - โ–PI - SHIP - IBLE - โ–HEAVY - โ–ATTENTION - โ–DOG - ABLY - โ–STANDING - โ–NATURAL - โ–APPEAR - OV - โ–CAUGHT - VO - ISM - โ–SPRING - โ–EXPERIENCE - โ–PAT - OT - โ–STOPPED - โ–REGARD - โ–HARDLY - โ–SELF - โ–STRENGTH - โ–GREW - โ–KNIGHT - โ–OPINION - โ–WIDE - โ–INSTEAD - โ–SOUTH - โ–TRANS - โ–CORNER - โ–LEARN - โ–ISLAND - โ–MI - โ–THIRD - โ–STE - โ–STRAIGHT - โ–TEA - โ–BOUND - โ–SEEING - โ–JU - โ–DINNER - โ–BEAUTY - โ–PEACE - AH - โ–REP - โ–SILENT - โ–CRE - ALLY - RIC - โ–STEP - โ–VER - โ–JO - GER - โ–SITTING - โ–THIRTY - โ–SAVE - ENED - โ–GLANCE - โ–REACH - โ–ACTION - โ–SAL - โ–SAD - โ–STONE - ITIES - โ–FRENCH - โ–STRUCK - โ–PAPER - โ–WHATEVER - โ–SUB - โ–DISTANCE - โ–WRONG - โ–KNOWLEDGE - โ–SAFE - โ–SNOW - โ–MUSIC - โ–FIFTY - RON - โ–ATTEMPT - โ–GOVERNMENT - TU - โ–CROWD - โ–BESIDES - โ–LOVED - โ–BOX - โ–DIRECTION - โ–TRAIN - โ–NORTH - โ–THICK - โ–GETTING - AV - โ–FLOOR - โ–COMPANY - โ–BLOW - โ–PLAIN - TRO - โ–BESIDE - โ–ROCK - โ–IMMEDIATELY - FI - โ–SHADOW - โ–SIT - ORS - ILE - โ–DRINK - โ–SPOT - โ–DANGER - โ–AL - โ–SAINT - โ–SLOWLY - โ–PALACE - IER - โ–RESULT - โ–PETER - โ–FOREST - โ–BELONG - โ–SU - โ–PAR - RIS - โ–TEARS - โ–APPEARANCE - โ–GATE - BU - ITION - โ–QUICKLY - โ–QUIET - โ–LONDON - โ–START - โ–BROWN - TRA - KIN - โ–CONSIDER - โ–BATTLE - โ–ANNE - โ–PIECE - โ–DIED - โ–SUCCESS - โ–LIPS - โ–FILLED - โ–FORGET - โ–POST - IFIED - โ–MARGARET - โ–FOOD - HAM - โ–PLEASANT - โ–FE - โ–EXPRESSION - โ–POCKET - โ–FRESH - โ–WEAR - TRI - โ–BROKEN - โ–LAUGHED - GING - โ–FOLLOWING - WN - IP - โ–TOUCH - โ–YOUTH - ATIVE - โ–LEG - โ–WEEK - โ–REMAINED - โ–EASY - NER - RK - โ–ENTER - โ–FIGHT - โ–PLACED - โ–TRAVEL - โ–SIMPLE 
- โ–GIRLS - โ–WAITING - โ–STOP - โ–WAVE - AU - โ–WISE - โ–CAMP - TURE - UB - โ–VE - โ–OFFICE - โ–GRAND - โ–FIT - โ–JUDGE - UP - MENTS - โ–QUICK - HI - โ–FLO - RIES - VAL - โ–COMFORT - โ–PARTICULAR - โ–STARTED - โ–SUIT - โ–NI - โ–PALE - โ–IMPOSSIBLE - โ–HOT - โ–CONVERSATION - โ–SCENE - โ–BOYS - โ–WIN - โ–BRE - โ–SOCIETY - โ–OUTSIDE - โ–WRITE - โ–EFFORT - โ–TALKING - โ–FORTUNE - โ–NINE - โ–WA - โ–SINGLE - โ–RULE - โ–PORT - โ–WINTER - โ–CAST - โ–CRA - โ–HAPPEN - โ–CRO - โ–SHUT - NING - โ–GUN - โ–NOBLE - โ–BEGIN - โ–PATH - โ–SKY - โ–WONDERFUL - โ–SUDDEN - โ–ARMY - โ–CHE - โ–WORTH - โ–MOUNTAIN - โ–MIN - AG - โ–FLU - โ–GRACE - โ–CHAPTER - โ–BELOW - โ–RING - โ–TURNING - โ–IRON - โ–TOP - โ–AFTERNOON - ORY - โ–EVIL - โ–TRUST - โ–BOW - โ–TRI - โ–SAIL - โ–CONTENT - โ–HORSES - ITE - โ–SILVER - AP - โ–LAD - โ–RUNNING - โ–HILL - โ–BEGINNING - โ–MAD - โ–HABIT - GRA - โ–CLOTHES - โ–MORROW - โ–CRY - โ–FASHION - โ–PRESENCE - โ–Z - FE - โ–ARRIVED - โ–QUARTER - โ–PERFECT - โ–WO - โ–TRA - โ–USUAL - โ–NECK - โ–MARRIED - โ–SEAT - โ–WI - โ–GAR - โ–SAND - โ–SHORE - โ–GIVING - NY - โ–PROBABLY - โ–MINUTE - โ–EXPECT - โ–DU - โ–SHOT - โ–INSTANT - โ–DEGREE - โ–COLOR - โ–WEST - RT - โ–MARCH - โ–BIRD - โ–SHOWED - โ–GREATER - โ–SERIOUS - โ–CARRY - โ–COVERED - โ–FORMER - โ–LOUD - โ–MOVED - โ–MASS - โ–SEEK - โ–CHO - GEN - โ–ROMAN - IB - โ–MOON - โ–BOARD - โ–STREAM - โ–EASILY - โ–WISHED - โ–SEARCH - โ–COULDN - โ–MONTHS - โ–SICK - LIE - โ–DUTY - โ–TWELVE - โ–FAINT - โ–STRANGER - โ–SURPRISE - โ–KILL - โ–LEAVING - โ–JOURNEY - โ–SCARCELY - โ–RAISED - โ–SPEAKING - โ–TERRIBLE - โ–TOM - โ–FIELD - โ–GAME - โ–QUA - โ–PROMISE - โ–LIE - โ–CONDITION - โ–TRO - โ–PERSONAL - โ–TALL - โ–STICK - โ–THREW - โ–MARRY - โ–VAN - โ–BURN - โ–ACCORDING - โ–RISE - โ–ATTACK - โ–SWORD - โ–GUESS - โ–THOUGHTS - โ–THIN - โ–THROW - โ–CALM - SIDE - โ–VILLAGE - โ–DEN - 
โ–ANXIOUS - โ–MER - GI - โ–EXPECTED - โ–BALL - โ–ESPECIALLY - โ–CHARGE - โ–MEASURE - ISE - โ–NICE - โ–TRYING - โ–ALLOW - โ–SHARP - โ–BREAD - โ–HONOUR - โ–HONOR - โ–ENTIRELY - โ–BILL - โ–BRI - โ–WRITTEN - โ–AR - โ–BROKE - โ–KILLED - โ–MARK - โ–VEN - โ–LADIES - โ–LEARNED - โ–FLOWERS - PLE - โ–FORTY - โ–OFFER - โ–HAPPINESS - โ–PRAY - โ–CLASS - โ–FER - โ–PRINCIPLE - GU - โ–BOOKS - โ–SHAPE - โ–SUMMER - โ–JACK - โ–DRAW - โ–GOLDEN - โ–DECIDED - โ–LEAD - โ–UNLESS - โ–HARM - โ–LISTEN - HER - โ–SHOOK - โ–INFLUENCE - โ–PERFECTLY - โ–MARRIAGE - โ–BROAD - โ–ESCAPE - โ–STATES - โ–MIDDLE - โ–PLANT - โ–MIL - โ–MOVEMENT - โ–NOISE - โ–ENEMY - โ–HISTORY - โ–BREAK - ROUS - โ–UNDERSTOOD - โ–LATTER - FER - โ–COMES - โ–MERELY - โ–SIMPLY - WI - โ–IMAGINE - โ–LOWER - โ–CONDUCT - โ–BORN - WA - โ–YARD - โ–KA - โ–CLOSED - โ–NOTE - GA - โ–STRA - RAN - โ–EXIST - EV - โ–SPEECH - โ–BITTER - JO - โ–MAKES - โ–GRASS - โ–REPLY - โ–CHANGED - โ–MON - โ–LYING - โ–DANCE - โ–FINALLY - โ–AMERICAN - โ–ENJOY - โ–CONTAIN - โ–MEANT - USE - โ–OBSERVED - THER - โ–LAUGH - โ–AFTERWARDS - โ–BEAT - โ–RACE - โ–EQUAL - โ–RAIN - PS - โ–STEPS - โ–BENEATH - โ–TAIL - โ–TASTE - IO - EY - โ–CHAR - โ–GE - GN - TIN - โ–GROW - โ–TE - IANS - โ–MOVE - โ–REPEATED - โ–DRIVE - TUR - โ–SI - CLOCK - โ–BRAVE - โ–MADAME - โ–LOT - โ–CASTLE - โ–HI - AND - โ–FUTURE - โ–RELATION - โ–SORRY - โ–HEALTH - โ–DICK - โ–R - โ–BUILDING - โ–EDGE - โ–BLESS - โ–SPITE - WE - โ–MIS - โ–PRISONER - โ–ALLOWED - โ–PH - โ–CATCH - MER - ETH - โ–COAT - โ–COMPLETE - โ–WOULDN - โ–CREATURE - โ–YELLOW - โ–IMPORTANT - โ–ADD - โ–PASSING - โ–DARKNESS - โ–CARRIAGE - โ–MILL - โ–FIFTEEN - NCY - โ–HUNG - โ–OB - โ–PLEASED - โ–SPREAD - โ–CURIOUS - โ–WORSE - โ–CIRCUMSTANCES - โ–GI - LAR - โ–CAL - โ–HY - โ–MERE - โ–JANE - โ–EAST - BI - โ–CUP - โ–BLIND - โ–PASSION - โ–DISCOVERED - โ–NOTICE - โ–REPORT - โ–SPACE - 
โ–PRESENTLY - โ–SORROW - โ–PACK - โ–DIN - CY - โ–DRY - โ–ANCIENT - โ–DRESSED - โ–COVER - โ–VO - โ–EXISTENCE - โ–EXACTLY - โ–BEAST - โ–PROPER - โ–DROPPED - โ–CLEAN - โ–COLOUR - โ–HOST - โ–CHAMBER - โ–FAITH - LET - โ–DETERMINED - โ–PRIEST - โ–STORM - โ–SKIN - โ–DARE - โ–PERSONS - โ–PICK - โ–NARROW - โ–SUPPORT - โ–PRIVATE - โ–SMILED - โ–COUSIN - โ–DRAWING - โ–ATTEND - โ–COOK - โ–PREVENT - โ–VARIOUS - โ–BLA - โ–FIXED - โ–WEAK - THE - โ–HOLE - โ–BOTTOM - โ–NOBODY - ADE - โ–LEGS - ITCH - โ–INDIVIDUAL - โ–EARS - LIKE - โ–ADVANTAGE - โ–FRANCE - โ–BON - โ–WINE - โ–LIVES - OD - โ–WALLS - โ–TIRED - โ–SHOP - โ–ANIMAL - โ–CRU - โ–WROTE - โ–ROYAL - โ–CONSIDERED - โ–MORAL - โ–COMPANION - โ–LOSE - โ–ISN - โ–BAG - โ–LAKE - โ–INTER - โ–COM - โ–LETTERS - โ–LUCK - โ–EAR - โ–GERMAN - โ–PET - โ–SAKE - โ–DROP - โ–PAID - โ–BREAKFAST - โ–LABOR - โ–DESERT - โ–DECLARED - โ–HUM - โ–STUDY - โ–INSTANCE - ONE - โ–SOMEWHAT - โ–CLOTH - โ–SPECIAL - โ–COLONEL - โ–SONG - โ–MAIN - โ–VALUE - โ–PROUD - โ–EXPRESS - โ–NATION - โ–HANDSOME - โ–CONFESS - โ–PU - โ–PASSAGE - โ–PERIOD - โ–CUSTOM - โ–HURT - โ–SHOULDER - โ–CHRIST - ZA - โ–RECEIVE - โ–DIFFICULT - โ–DEPEND - โ–MEETING - โ–CHI - โ–GEN - LIGHT - โ–BELIEVED - โ–SOCIAL - โ–DIFFICULTY - โ–GREATEST - โ–DRAWN - โ–GRANT - โ–BIRDS - โ–ANGRY - โ–HEAT - UFF - โ–DUE - โ–PLACES - โ–SIN - โ–COURAGE - โ–EVIDENTLY - โ–GENTLE - โ–CRUEL - โ–GEORGE - โ–GRI - โ–SERVANT - โ–U - โ–PURE - OOK - โ–KNOWS - โ–KNOWING - LF - โ–WRITING - โ–REMEMBERED - โ–CU - โ–HOLDING - โ–TENDER - โ–QUI - โ–BURST - โ–SURELY - IGN - โ–VALLEY - โ–FU - โ–BUTTER - โ–SPOKEN - โ–STORE - โ–DISC - โ–CHRISTIAN - โ–PARIS - โ–HENRY - โ–FINISHED - โ–PROVE - โ–FOOL - โ–SOLDIERS - โ–LANGUAGE - โ–INSIDE - โ–BAN - โ–FALLEN - ROW - โ–MAL - โ–BABY - โ–SITUATION - โ–WATCHED - ANS - โ–RUIN - โ–GENTLEMEN - โ–FRO - โ–FANCY - โ–ACCEPT - โ–SEASON - 
โ–OURSELVES - โ–SAN - โ–SPEED - IZED - โ–COOL - โ–SERVE - โ–VESSEL - โ–WILLIAM - โ–OBLIGED - โ–GROUP - FORM - โ–GOES - UOUS - โ–LEAVES - โ–PECULIAR - โ–NEWS - โ–VAIN - โ–EVERYBODY - โ–PIN - UG - โ–FORGOTTEN - โ–FRA - GAN - โ–CAREFULLY - โ–FLASH - UCH - โ–FUR - โ–MURDER - โ–DELIGHT - โ–WAITED - โ–RENDER - โ–PROPERTY - โ–NOTICED - โ–ROLL - โ–KNOCK - โ–EARNEST - KI - โ–HONEST - โ–PROMISED - โ–BAL - AW - โ–WALKING - ANG - โ–SQUARE - โ–QUIETLY - โ–CLOUD - WOOD - โ–FORMED - โ–HIGHER - โ–BUILT - โ–FATE - โ–TEACH - MY - โ–FALSE - โ–YORK - โ–DUST - โ–CLIMB - โ–FOND - โ–GROWN - โ–DESCEND - โ–RAG - โ–FRUIT - โ–GENERALLY - โ–OFFERED - โ–ER - โ–NURSE - POSE - โ–SPENT - โ–JOIN - โ–STATION - โ–MEANING - โ–SMOKE - HOOD - โ–ROUGH - JU - โ–LIKELY - โ–SURFACE - โ–KE - โ–MONTH - โ–POSSESSION - โ–TONGUE - โ–DUKE - โ–NOSE - โ–LAUGHING - โ–WEATHER - โ–WHISPERED - โ–SYSTEM - โ–LAWS - DDLE - โ–TOUCHED - โ–TRADE - LD - โ–SURPRISED - RIN - โ–ARCH - โ–WEALTH - FOR - โ–TEMPER - โ–FRANK - โ–GAL - โ–BARE - โ–OPPORTUNITY - โ–CLAIM - โ–ANIMALS - โ–REV - โ–COST - โ–WASH - ZE - โ–CORN - โ–OPPOSITE - โ–POLICE - โ–IDEAS - LON - โ–KEY - โ–READING - โ–COLLECT - CHED - โ–H - โ–CROWN - โ–TAR - โ–SWIFT - โ–SHOULDERS - โ–ICE - โ–GRAY - โ–SHARE - โ–PREPARED - โ–GRO - โ–UND - โ–TER - โ–EMPTY - CING - โ–SMILING - โ–AVOID - โ–DIFFERENCE - โ–EXPLAIN - โ–POUR - โ–ATTRACT - โ–OPENING - โ–WHEEL - โ–MATERIAL - โ–BREAST - โ–SUFFERING - โ–DISTINCT - โ–BOOT - โ–ROW - โ–FINGERS - HAN - โ–ALTOGETHER - โ–FAT - โ–PAPA - โ–BRAIN - โ–ASLEEP - โ–GREY - โ–SUM - โ–GAS - โ–WINDOWS - โ–ALIVE - โ–PROCEED - โ–FLOWER - โ–LEAP - โ–PUR - โ–PIECES - โ–ALTER - โ–MEMORY - IENT - โ–FILL - โ–CLO - โ–THROWN - โ–KINGDOM - โ–RODE - IUS - โ–MAID - โ–DIM - โ–BAND - โ–VIRTUE - โ–DISH - โ–GUEST - โ–LOSS - โ–CAUSED - โ–MOTION - โ–POT - โ–MILLION - โ–FAULT - โ–LOVELY - โ–HERO - PPING - 
โ–UNITED - โ–SPI - SOME - BRA - โ–MOUNTAINS - โ–NU - โ–SATISFIED - โ–DOLLARS - โ–LOVER - โ–CONCEAL - โ–VAST - โ–PULL - โ–HATH - โ–RUSH - โ–J - โ–DESPAIR - EX - โ–HEIGHT - โ–CE - โ–BENT - โ–PITY - โ–RISING - ATH - โ–PRIDE - โ–HURRY - KA - โ–SETTLED - โ–JUSTICE - โ–LIFTED - PEN - โ–SOLDIER - โ–FINDING - โ–REMARK - โ–REGULAR - โ–STRUGGLE - โ–MACHINE - โ–SING - โ–HURRIED - โ–SUFFICIENT - โ–REPRESENT - โ–DOUBLE - โ–ALARM - โ–SUPPER - โ–DREADFUL - โ–FORE - ATOR - โ–STOCK - โ–TIN - โ–EXAMPLE - โ–ROOF - โ–FLOW - โ–SUPPOSED - โ–PRESERV - โ–L - โ–LISTENED - OC - โ–STO - โ–SECURE - โ–FRIGHTENED - โ–DISTURB - โ–EMOTION - โ–SERVANTS - โ–YO - โ–BUY - โ–FORCED - โ–KITCHEN - โ–TERROR - โ–STAIRS - โ–SIXTY - KER - โ–ORDINARY - โ–DIRECTLY - โ–HEADS - โ–METHOD - โ–FORGIVE - โ–AWFUL - โ–REFLECT - โ–GREATLY - โ–TALKED - โ–RIDE - STONE - โ–FAVOUR - โ–WELCOME - โ–SEIZED - OU - โ–CONTROL - โ–ORDERED - โ–ANGEL - โ–USUALLY - โ–POET - โ–BOLD - LINE - โ–ADVENTURE - โ–WATCHING - โ–FOLK - โ–MISTRESS - IZE - โ–GROWING - โ–CAVE - โ–EVIDENCE - โ–FINGER - โ–SEVENTEEN - โ–MOVING - EOUS - โ–DOESN - โ–COW - โ–TYPE - โ–BOIL - โ–TALE - โ–DELIVER - โ–FARM - โ–MONSIEUR - โ–GATHERED - โ–FEELINGS - โ–RATE - โ–REMARKED - โ–PUTTING - โ–MAT - โ–CONTRARY - โ–CRIME - โ–PLA - โ–COL - โ–NEARER - TES - โ–CIVIL - โ–SHAME - โ–LOOSE - โ–DISCOVER - โ–FLAT - โ–TWICE - โ–FAIL - VIS - โ–UNC - EA - โ–EUROPE - โ–PATIENT - โ–UNTO - โ–SUFFER - โ–PAIR - โ–TREASURE - OSE - โ–EAGER - โ–FLY - โ–N - โ–VAL - โ–DAN - โ–SALT - โ–BORE - BBE - โ–ARTHUR - โ–AFFAIRS - โ–SLOW - โ–CONSIST - โ–DEVIL - LAN - โ–AFFECTION - โ–ENGAGED - โ–KISS - โ–YA - โ–OFFICER - IFICATION - โ–LAMP - โ–PARTS - HEN - โ–MILK - โ–PROCESS - โ–GIFT - โ–PULLED - โ–HID - โ–RAY - โ–EXCELLENT - โ–IMPRESSION - โ–AUTHORITY - โ–PROVED - โ–TELLING - TTE - โ–TOWER - โ–CONSEQUENCE - โ–FAVOR - โ–FLEW - โ–CHARLES - ISTS - 
โ–ADDRESS - โ–FAMILIAR - โ–LIMIT - โ–CONFIDENCE - โ–RARE - โ–WEEKS - โ–WOODS - โ–INTENTION - โ–DIRECT - โ–PERFORM - โ–SOLEMN - โ–DISTANT - โ–IMAGE - โ–PRESIDENT - โ–FIRM - โ–INDIAN - โ–RANK - โ–LIKED - โ–AGREE - โ–HOUSES - โ–WIL - โ–MATTERS - โ–PRISON - โ–MODE - โ–MAJOR - โ–WORKING - โ–SLIP - โ–WEIGHT - โ–AWARE - โ–BUSY - โ–LOOKS - โ–WOUND - โ–THOR - โ–BATH - โ–EXERCISE - โ–SIMILAR - โ–WORE - โ–AMOUNT - โ–QUESTIONS - โ–VIOLENT - โ–EXCUSE - โ–ASIDE - โ–TUR - โ–DULL - OF - โ–EMPEROR - โ–NEVERTHELESS - โ–SHOUT - โ–EXPLAINED - โ–SIZE - โ–ACCOMPLISH - FORD - CAN - โ–MISTAKE - โ–INSTANTLY - โ–SMOOTH - โ–STRIKE - โ–BOB - ISED - โ–HORROR - โ–SCIENCE - โ–PROTEST - โ–MANAGE - โ–OBEY - โ–NECESSITY - โ–SPLENDID - โ–PRESS - โ–INTERESTING - โ–RELIGION - โ–UNKNOWN - โ–FIERCE - โ–DISAPPEARED - โ–HOLY - โ–HATE - โ–PLAYED - โ–LIN - โ–NATURALLY - โ–DROVE - โ–LOUIS - TIES - โ–BRAND - INESS - RIE - โ–SHOOT - โ–CONSENT - โ–SEATED - โ–LINES - GUE - โ–AGREED - โ–CIRCLE - โ–STIR - โ–STREETS - โ–TASK - โ–RID - โ–PRODUCED - โ–ACCIDENT - โ–WITNESS - โ–LIBERTY - โ–DETAIL - โ–MINISTER - โ–POWERFUL - โ–SAVAGE - โ–SIXTEEN - โ–PRETEND - โ–COAST - โ–SQU - โ–UTTER - โ–NAMED - โ–CLEVER - โ–ADMIT - โ–COUPLE - โ–WICKED - โ–MESSAGE - โ–TEMPLE - โ–STONES - โ–YESTERDAY - โ–HILLS - DAY - โ–SLIGHT - โ–DIAMOND - โ–POSSIBLY - โ–AFFAIR - โ–ORIGINAL - โ–HEARING - โ–WORTHY - โ–SELL - NEY - ICK - โ–COTTAGE - โ–SACRIFICE - โ–PROGRESS - โ–SHOCK - โ–DESIGN - โ–SOUGHT - โ–PIT - โ–SUNDAY - โ–OTHERWISE - โ–CABIN - โ–PRAYER - โ–DWELL - โ–GAIN - โ–BRIDGE - โ–PARTICULARLY - โ–YIELD - โ–TREAT - RIGHT - โ–OAK - โ–ROPE - WIN - โ–ORDERS - โ–SUSPECT - โ–EDWARD - AB - โ–ELEVEN - โ–TEETH - โ–OCCURRED - DDING - โ–AMERICA - โ–FALLING - โ–LION - โ–DEPART - โ–KEEPING - โ–DEMAND - โ–PAUSED - โ–CEASED - INA - โ–FUN - โ–CHEER - โ–PARDON - โ–NATIVE - LUS - LOW - โ–DOGS - โ–REQUIRED - 
ILITY - โ–ELECT - โ–ENTERTAIN - ITUDE - โ–HUGE - โ–CARRYING - โ–BLU - โ–INSIST - โ–SATISFACTION - โ–HUNT - โ–COUNTENANCE - โ–UPPER - โ–MAIDEN - โ–FAILED - โ–JAMES - โ–FOREIGN - โ–GATHER - โ–TEST - BOARD - โ–TERMS - โ–SILK - โ–BEG - โ–BROTHERS - โ–PAGE - โ–KNEES - โ–SHOWN - โ–PROFESSOR - โ–MIGHTY - โ–DEFI - โ–CHARM - โ–REQUIRE - โ–LOG - MORE - โ–PROOF - โ–POSSESSED - โ–SOFTLY - โ–UNFORTUNATE - โ–PRICE - โ–SEVERE - โ–SINGING - โ–STAGE - โ–FREEDOM - โ–SHOUTED - โ–FARTHER - โ–MAJESTY - โ–PREVIOUS - โ–GUIDE - โ–MATCH - โ–CHEST - โ–INTENDED - โ–BI - โ–EXCITEMENT - โ–OFFICERS - โ–SUR - โ–SHAKE - โ–SENTIMENT - โ–GENTLY - โ–SUCCEEDED - โ–MENTION - โ–LOCK - โ–ACQUAINTANCE - โ–IMAGINATION - โ–PHYSICAL - โ–LEADING - โ–SLAVE - โ–CART - โ–POINTED - โ–STEAM - โ–SHADE - โ–PIPE - โ–BASE - โ–INVENT - โ–ALAS - โ–WORKED - โ–REGRET - โ–BUR - โ–FAITHFUL - โ–MENTIONED - โ–RECORD - โ–COMPLAIN - โ–SUPERIOR - โ–BAY - โ–PAL - EMENT - UE - โ–SEVENTY - โ–HOTEL - โ–SHEEP - โ–MEAL - โ–ADVICE - โ–HIDDEN - โ–DEMANDED - โ–CONSCIOUS - โ–BROW - โ–POSSESS - โ–FOURTH - โ–EVENTS - โ–FRI - โ–PRAISE - โ–ADVANCED - โ–RESOLVED - โ–STUFF - โ–CHEERFUL - โ–BIRTH - โ–GRIEF - โ–AFFORD - โ–FAIRY - โ–WAKE - โ–SIDES - โ–SUBSTANCE - โ–ARTICLE - โ–LEVEL - โ–MIST - โ–JOINED - โ–PRACTICAL - โ–CLEARLY - โ–TRACE - โ–AWAKE - โ–OBSERVE - โ–BASKET - โ–LACK - VILLE - โ–SPIRITS - โ–EXCITED - โ–ABANDON - โ–SHINING - โ–FULLY - โ–CALLING - โ–CONSIDERABLE - โ–SPRANG - โ–MILE - โ–DOZEN - โ–PEA - โ–DANGEROUS - โ–WIT - โ–JEW - โ–POUNDS - โ–FOX - โ–INFORMATION - โ–LIES - โ–DECK - NNY - โ–PAUL - โ–STARS - โ–ANGER - โ–SETTLE - โ–WILLING - โ–ADAM - โ–FACES - โ–SMITH - โ–IMPORTANCE - โ–STRAIN - WAR - โ–SAM - โ–FEATHER - โ–SERVED - โ–AUTHOR - โ–PERCEIVED - โ–FLAME - โ–DIVINE - โ–TRAIL - โ–ANYBODY - โ–SIGH - โ–DELICATE - KY - โ–FOLD - โ–HAVEN - โ–DESIRED - โ–CURIOSITY - โ–PRACTICE - 
โ–CONSIDERATION - โ–ABSOLUTELY - โ–CITIZEN - โ–BOTTLE - โ–INTERESTED - โ–MEAT - โ–OCCUPIED - โ–CHOOSE - โ–THROAT - ETTE - โ–CANDLE - โ–DAWN - โ–PROTECT - โ–SENTENCE - IED - โ–ROCKS - โ–PORTION - โ–APPARENTLY - โ–PRESENTED - โ–TIGHT - โ–ACTUALLY - โ–DYING - โ–HAM - โ–DAILY - โ–SUFFERED - โ–POLITICAL - โ–BODIES - โ–MODERN - โ–COMPLETELY - โ–SOONER - TAN - โ–PROP - โ–ADVANCE - โ–REFUSED - โ–FARMER - โ–POLITE - โ–THUNDER - โ–BRIEF - โ–ELSIE - โ–SAILOR - โ–SUGGESTED - โ–PLATE - โ–AID - โ–FLESH - โ–WEEP - โ–BUCK - โ–ANTI - โ–OCEAN - โ–SPEND - WELL - โ–ODD - โ–GOVERNOR - โ–ENTRANCE - โ–SUSPICION - โ–STEPPED - โ–RAPIDLY - โ–CHECK - โ–HIDE - โ–FLIGHT - โ–CLUB - โ–ENTIRE - โ–INDIANS - ASH - โ–CAPITAL - โ–MAMMA - HAR - โ–CORRECT - โ–CRACK - โ–SENSATION - โ–WORST - โ–PACE - โ–MIDST - โ–AUGUST - โ–PROPORTION - โ–INNOCENT - LINESS - โ–REGARDED - โ–DRIVEN - ORD - โ–HASTE - โ–EDUCATION - โ–EMPLOY - โ–TRULY - โ–INSTRUMENT - โ–MAG - โ–FRAME - โ–FOOLISH - โ–TAUGHT - โ–HANG - โ–ARGUMENT - โ–NINETEEN - โ–ELDER - โ–NAY - โ–NEEDED - โ–NEIGHBOR - โ–INSTRUCT - โ–PAPERS - โ–REWARD - โ–EQUALLY - โ–FIELDS - โ–DIG - HIN - โ–CONDITIONS - JA - โ–SPAR - โ–REQUEST - โ–WORN - โ–REMARKABLE - โ–LOAD - โ–WORSHIP - โ–PARK - โ–KI - โ–INTERRUPTED - โ–SKILL - โ–TERM - LAC - โ–CRITIC - โ–DISTRESS - โ–BELIEF - โ–STERN - IGHT - โ–TRACK - โ–HUNTING - โ–JEWEL - โ–GRADUALLY - โ–GLOW - โ–RUSHED - โ–MENTAL - โ–VISITOR - โ–PICKED - โ–BEHOLD - โ–EXPRESSED - โ–RUB - โ–SKI - ARTAGNAN - โ–MOREOVER - โ–OPERATION - โ–CAREFUL - โ–KEEN - โ–ASSERT - โ–WANDER - โ–ENEMIES - โ–MYSTERIOUS - โ–DEPTH - โ–PREFER - โ–CROSSED - โ–CHARMING - โ–DREAD - โ–FLOUR - โ–ROBIN - โ–TRE - โ–RELIEF - โ–INQUIRED - โ–APPLE - โ–HENCE - โ–WINGS - โ–CHOICE - โ–JUD - OO - โ–SPECIES - โ–DELIGHTED - IUM - โ–RAPID - โ–APPEAL - โ–FAMOUS - โ–USEFUL - โ–HELEN - โ–NEWSPAPER - โ–PLENTY - โ–BEARING - 
โ–NERVOUS - โ–PARA - โ–URGE - โ–ROAR - โ–WOUNDED - โ–CHAIN - โ–PRODUCE - โ–REFLECTION - โ–MERCHANT - โ–QUARREL - โ–GLORY - โ–BEGUN - โ–BARON - CUS - โ–QUEER - โ–MIX - โ–GAZE - โ–WHISPER - โ–BURIED - โ–DIV - โ–CARD - โ–FREQUENTLY - โ–TIP - โ–KNEE - โ–REGION - โ–ROOT - โ–LEST - โ–JEALOUS - CTOR - โ–SAVED - โ–ASKING - โ–TRIP - QUA - โ–UNION - HY - โ–COMPANIONS - โ–SHIPS - โ–HALE - โ–APPROACHED - โ–HARRY - โ–DRUNK - โ–ARRIVAL - โ–SLEPT - โ–FURNISH - HEAD - โ–PIG - โ–ABSENCE - โ–PHIL - โ–HEAP - โ–SHOES - โ–CONSCIOUSNESS - โ–KINDLY - โ–EVIDENT - โ–SCAR - โ–DETERMIN - โ–GRASP - โ–STEAL - โ–OWE - โ–KNIFE - โ–PRECIOUS - โ–ELEMENT - โ–PROCEEDED - โ–FEVER - โ–LEADER - โ–RISK - โ–EASE - โ–GRIM - โ–MOUNT - โ–MEANWHILE - โ–CENTURY - OON - โ–JUDGMENT - โ–AROSE - โ–VISION - โ–SPARE - โ–EXTREME - โ–CONSTANT - โ–OBSERVATION - โ–THRUST - โ–DELAY - โ–CENT - โ–INCLUD - โ–LIFT - โ–ADMIRE - โ–ISSUE - โ–FRIENDSHIP - โ–LESSON - โ–PRINCIPAL - โ–MOURN - โ–ACCEPTED - โ–BURNING - โ–CAPABLE - โ–EXTRAORDINARY - โ–SANG - โ–REMOVED - โ–HOPED - โ–HORN - โ–ALICE - โ–MUD - โ–APARTMENT - โ–FIGHTING - โ–BLAME - โ–TREMBLING - โ–SOMEBODY - โ–ANYONE - โ–BRIDE - โ–READER - โ–ROB - โ–EVERYWHERE - โ–LABOUR - โ–RECALL - โ–BULL - โ–HIT - โ–COUNCIL - โ–POPULAR - โ–CHAP - โ–TRIAL - โ–DUN - โ–WISHES - โ–BRILLIANT - โ–ASSURED - โ–FORGOT - โ–CONTINUE - โ–ACKNOWLEDG - โ–RETREAT - โ–INCREASED - โ–CONTEMPT - โ–GRANDFATHER - โ–SYMPATHY - โ–GHOST - โ–STRETCHED - โ–CREATURES - โ–CAB - โ–HIND - โ–PLAYING - โ–MISERABLE - โ–MEMBERS - โ–KINDNESS - โ–HIGHEST - โ–PRIM - โ–KISSED - โ–DESERVE - โ–HUT - โ–BEGGED - โ–EIGHTY - โ–CLOSELY - โ–WONDERED - โ–MILITARY - โ–REMIND - โ–ACCORDINGLY - โ–LARGER - โ–MAINTAIN - โ–ENGINE - โ–MOTIVE - โ–DESTROY - โ–STRIP - โ–HANS - โ–AHEAD - โ–INFINITE - โ–PROMPT - โ–INFORMED - TTLE - โ–PEER - โ–PRESSED - โ–TRAP - โ–SOMEWHERE - โ–BOUGHT - 
โ–VISIBLE - โ–ASHAMED - โ–TEAR - โ–NEIGHBOUR - โ–CONSTITUTION - โ–INTELLIGENCE - โ–PROFESSION - โ–HUNGRY - RIDGE - โ–SMELL - โ–STORIES - โ–LISTENING - โ–APPROACH - โ–STRING - โ–EXPLANATION - โ–IMMENSE - โ–RELIGIOUS - โ–THROUGHOUT - โ–HOLLOW - โ–AWAIT - โ–FLYING - โ–SCREAM - โ–ACTIVE - โ–RUM - โ–PRODUCT - โ–UNHAPPY - โ–VAGUE - ARIES - โ–ELIZABETH - โ–STUPID - โ–DIGNITY - โ–ISABEL - GAR - โ–BRO - โ–PITCH - โ–COMRADE - โ–STIFF - โ–RECKON - โ–SOLD - โ–SPARK - โ–STRO - โ–CRYING - โ–MAGIC - โ–REPEAT - PORT - โ–MARKED - โ–COMFORTABLE - โ–PROJECT - โ–BECOMING - โ–PARENTS - โ–SHELTER - โ–STOLE - โ–HINT - โ–NEST - โ–TRICK - โ–THOROUGHLY - โ–HOSPITAL - โ–WEAPON - โ–ROME - โ–STYLE - โ–ADMITTED - โ–SAFETY - FIELD - โ–UNDERSTANDING - โ–TREMBLE - โ–PRINT - โ–SLAVES - โ–WEARY - โ–ARTIST - โ–CREDIT - BURG - โ–CONCLUSION - โ–SELDOM - โ–UNUSUAL - โ–CLOUDS - โ–UNABLE - โ–GAY - โ–HANGING - โ–SCR - โ–BOWED - โ–DAVID - โ–VOL - โ–PUSHED - โ–ESCAPED - MOND - โ–WARN - โ–BETRAY - โ–EGGS - โ–PLAINLY - โ–EXHIBIT - โ–DISPLAY - โ–MEMBER - โ–GRIN - โ–PROSPECT - โ–BRUSH - โ–BID - โ–SUCCESSFUL - โ–EXTENT - โ–PERSUADE - โ–MID - โ–MOOD - โ–ARRANGED - โ–UNIVERSAL - โ–JIM - โ–SIGNAL - โ–WHILST - โ–PHILIP - โ–WOLF - RATE - โ–EAGERLY - โ–BILLY - โ–RETURNING - โ–CONSCIENCE - โ–FORTUNATE - โ–FEMALE - โ–GLEAM - โ–HASTILY - โ–PROVIDED - โ–OBTAIN - โ–INSTINCT - โ–CONCERNED - โ–CONCERNING - โ–SOMEHOW - โ–PINK - โ–RAGE - โ–ACCUSTOMED - โ–UNCONSCIOUS - โ–ADVISE - โ–BRANCHES - โ–TINY - โ–REFUSE - โ–BISHOP - โ–SUPPLY - โ–PEASANT - โ–LAWYER - โ–WASTE - โ–CONNECTION - โ–DEVELOP - โ–CORRESPOND - โ–PLUM - โ–NODDED - โ–SLIPPED - โ–EU - โ–CONSTANTLY - CUM - MMED - โ–FAIRLY - HOUSE - โ–KIT - โ–RANG - โ–FEATURES - โ–PAUSE - โ–PAINFUL - โ–JOE - โ–WHENCE - โ–LAUGHTER - โ–COACH - โ–CHRISTMAS - โ–EATING - โ–WHOLLY - โ–APART - โ–SUPER - โ–REVOLUTION - โ–LONELY - โ–CHEEKS - 
โ–THRONE - โ–CREW - โ–ATTAIN - โ–ESTABLISHED - TIME - โ–DASH - โ–FRIENDLY - โ–OPERA - โ–EARL - โ–EXHAUST - โ–CLIFF - โ–REVEAL - โ–ADOPT - โ–CENTRE - โ–MERRY - โ–SYLVIA - โ–IDEAL - โ–MISFORTUNE - โ–FEAST - โ–ARAB - โ–NUT - โ–FETCH - โ–FOUGHT - โ–PILE - โ–SETTING - โ–SOURCE - โ–PERSIST - โ–MERCY - โ–BARK - โ–LUC - โ–DEEPLY - โ–COMPARE - โ–ATTITUDE - โ–ENDURE - โ–DELIGHTFUL - โ–BEARD - โ–PATIENCE - โ–LOCAL - โ–UTTERED - โ–VICTORY - โ–TREATED - โ–SEPARATE - โ–WAG - โ–DRAGG - โ–TITLE - โ–TROOPS - โ–TRIUMPH - โ–REAR - โ–GAINED - โ–SINK - โ–DEFEND - โ–TIED - โ–FLED - โ–DARED - โ–INCREASE - โ–POND - โ–CONQUER - โ–FOREHEAD - โ–FAN - โ–ANXIETY - โ–ENCOUNTER - โ–SEX - โ–HALT - โ–SANK - โ–CHEEK - โ–HUMBLE - โ–WRITER - โ–EMPLOYED - โ–DISTINGUISHED - โ–RAISE - โ–WHIP - โ–GIANT - โ–RANGE - โ–OBTAINED - โ–FLAG - โ–MAC - โ–JUMPED - โ–DISCOVERY - โ–NATIONAL - โ–COMMISSION - โ–POSITIVE - โ–LOVING - โ–EXACT - โ–MURMURED - โ–GAZED - โ–REFER - โ–COLLEGE - โ–ENCOURAGE - โ–NOVEL - โ–CLOCK - โ–MORTAL - โ–ROLLED - โ–RAT - IZING - โ–GUILTY - โ–VICTOR - WORTH - โ–PRA - โ–APPROACHING - โ–RELATIVE - โ–ESTATE - โ–UGLY - โ–METAL - โ–ROBERT - โ–TENT - โ–ADMIRATION - โ–FOURTEEN - โ–BARBAR - โ–WITCH - ELLA - โ–CAKE - โ–SHONE - โ–MANAGED - โ–VOLUME - โ–GREEK - โ–DANCING - โ–WRETCHED - โ–CONDEMN - โ–MAGNIFICENT - โ–CONSULT - J - โ–ORGAN - โ–FLEET - โ–ARRANGEMENT - โ–INCIDENT - โ–MISERY - โ–ARROW - โ–STROKE - โ–ASSIST - โ–BUILD - โ–SUCCEED - โ–DESPERATE - โ–WIDOW - UDE - โ–MARKET - โ–WISDOM - โ–PRECISE - โ–CURRENT - โ–SPOIL - โ–BADE - โ–WOODEN - โ–RESIST - โ–OBVIOUS - โ–SENSIBLE - FALL - โ–ADDRESSED - โ–GIL - โ–COUNSEL - โ–PURCHASE - โ–SELECT - โ–USELESS - โ–STARED - โ–ARREST - โ–POISON - โ–FIN - โ–SWALLOW - โ–BLOCK - โ–SLID - โ–NINETY - โ–SPORT - โ–PROVIDE - โ–ANNA - โ–LAMB - โ–INTERVAL - โ–JUMP - โ–DESCRIBED - โ–STRIKING - โ–PROVISION - 
โ–PROPOSED - โ–MELANCHOLY - โ–WARRIOR - โ–SUGGEST - โ–DEPARTURE - โ–BURDEN - โ–LIMB - โ–TROUBLED - โ–MEADOW - โ–SACRED - โ–SOLID - โ–TRU - โ–LUCY - โ–RECOVER - โ–ENERGY - โ–POWDER - โ–RESUMED - โ–INTENSE - โ–BRITISH - โ–STRAW - โ–AGREEABLE - โ–EVERYONE - โ–CONCERN - โ–VOYAGE - โ–SOUTHERN - โ–BOSOM - โ–UTTERLY - โ–FEED - โ–ESSENTIAL - โ–CONFINE - โ–HOUSEHOLD - โ–EXTREMELY - โ–WONDERING - โ–LIST - โ–PINE - PHA - โ–EXPERIMENT - โ–JOSEPH - โ–MYSTERY - โ–RESTORE - โ–BLUSH - FOLD - โ–CHOSEN - โ–INTELLECT - โ–CURTAIN - OLOGY - โ–MOUNTED - โ–LAP - โ–EPI - โ–PUNISH - โ–WEDDING - โ–RECOGNIZED - โ–DRIFT - โ–PREPARATION - โ–RESOLUTION - โ–OPPRESS - โ–FIX - โ–VICTIM - OGRAPH - โ–SUMMON - โ–JULIA - โ–FLOOD - โ–WAL - ULATION - โ–SLIGHTLY - โ–LODGE - โ–WIRE - โ–CONFUSION - โ–UNEXPECTED - โ–CONCEIVE - โ–PRIZE - โ–JESUS - โ–ADDITION - โ–RUDE - โ–FATAL - โ–CARELESS - โ–PATCH - โ–KO - โ–CATHERINE - โ–PARLIAMENT - โ–PROFOUND - โ–ALOUD - โ–RELIEVE - โ–PUSH - ABILITY - โ–ACCOMPANIED - โ–SOVEREIGN - โ–SINGULAR - โ–ECHO - โ–COMPOSED - โ–SHAKING - ATORY - โ–ASSISTANCE - โ–TEACHER - โ–HORRIBLE - โ–STRICT - โ–VERSE - โ–PUNISHMENT - โ–GOWN - โ–MISTAKEN - โ–VARI - โ–SWEPT - โ–GESTURE - โ–BUSH - โ–STEEL - โ–AFFECTED - โ–DIRECTED - โ–SURROUNDED - โ–ABSURD - โ–SUGAR - โ–SCRAP - โ–IMMEDIATE - โ–SADDLE - โ–TY - โ–ARISE - โ–SIGHED - โ–EXCHANGE - โ–IMPATIENT - โ–SNAP - โ–EMBRACE - โ–DISEASE - โ–PROFIT - โ–RIDING - โ–RECOVERED - โ–GOVERN - โ–STRETCH - โ–CONVINCED - โ–LEANING - โ–DOMESTIC - โ–COMPLEX - โ–MANIFEST - โ–INDULGE - โ–GENIUS - โ–AGENT - โ–VEIL - โ–DESCRIPTION - โ–INCLINED - โ–DECEIVE - โ–DARLING - โ–REIGN - HU - โ–ENORMOUS - โ–RESTRAIN - โ–DUTIES - BURY - TTERED - โ–POLE - โ–ENABLE - โ–EXCEPTION - โ–INTIMATE - โ–COUNTESS - โ–TRIBE - โ–HANDKERCHIEF - โ–MIDNIGHT - โ–PROBLEM - โ–TRAMP - โ–OIL - CAST - โ–CRUSH - โ–DISCUSS - โ–RAM - โ–TROT - โ–UNRE 
- โ–WHIRL - โ–LOCKED - โ–HORIZON - โ–OFFICIAL - โ–SCHEME - โ–DROWN - โ–PIERRE - โ–PERMITTED - โ–CONNECTED - โ–ASSURE - โ–COCK - โ–UTMOST - โ–DEVOTED - โ–RELI - โ–SUFFICIENTLY - โ–INTELLECTUAL - โ–CARPET - โ–OBJECTION - โ–AFTERWARD - โ–REALITY - โ–NEGRO - โ–RETAIN - โ–ASCEND - โ–CEASE - โ–KATE - โ–MARVEL - KO - โ–BOND - MOST - โ–COAL - GATE - โ–IGNORANT - โ–BREAKING - โ–TWIN - โ–ASTONISHMENT - โ–COFFEE - โ–JAR - โ–CITIES - โ–ORIGIN - โ–EXECUT - โ–FINAL - โ–INHABITANTS - โ–STABLE - โ–CHIN - โ–PARTIES - โ–PLUNGE - โ–GENEROUS - โ–DESCRIBE - โ–ANNOUNCED - โ–MERIT - โ–REVERE - โ–ERE - ACIOUS - ZI - โ–DISAPPOINT - โ–SUGGESTION - โ–DOUBTLESS - โ–TRUNK - โ–STAMP - โ–JOB - โ–APPOINTED - โ–DIVIDED - โ–ACQUAINTED - CHI - โ–ABSOLUTE - โ–FEARFUL - โ–PRIVILEGE - โ–CRAFT - โ–STEEP - โ–HUNTER - โ–FORBID - โ–MODEST - โ–ENDEAVOUR - โ–SWEEP - โ–BEHELD - โ–ABSORB - โ–CONSTRUCT - โ–EMPIRE - โ–EXPEDITION - โ–ERECT - โ–OFFEND - โ–INTEND - โ–PERMIT - โ–DESTROYED - โ–CONTRACT - โ–THIRST - โ–WAGON - โ–EVA - โ–GLOOM - โ–ATMOSPHERE - โ–RESERVE - โ–VOTE - โ–GER - โ–NONSENSE - โ–PREVAIL - โ–QUALITY - โ–CLASP - โ–CONCLUDED - โ–RAP - โ–KATY - โ–ETERNAL - โ–MUTTERED - โ–NEGLECT - โ–SQUIRE - โ–CREEP - LOCK - โ–ELECTRIC - โ–HAY - โ–EXPENSE - โ–SCORN - โ–RETIRED - โ–STOUT - โ–MURMUR - โ–SHARPLY - โ–DISTRICT - โ–LEAF - โ–FAILURE - WICK - โ–JEAN - โ–NUMEROUS - โ–INFANT - โ–REALIZED - โ–TRAVELLER - โ–HUNGER - โ–JUNE - โ–MUN - โ–RECOMMEND - โ–CREP - ZZLE - โ–RICHARD - WORK - โ–MONTE - โ–PREACH - โ–PALM - AVI - โ–ANYWHERE - โ–DISPOSITION - โ–MIRROR - โ–VENTURE - โ–POUND - โ–CIGAR - โ–INVITED - โ–BENCH - โ–PROTECTION - โ–BENEFIT - โ–THOMAS - โ–CLERK - โ–REPROACH - โ–UNIFORM - โ–GENERATION - โ–SEAL - โ–COMPASS - โ–WARNING - โ–EXTENDED - โ–DIFFICULTIES - โ–MAYBE - โ–GROAN - โ–AFFECT - โ–COMB - โ–EARN - โ–WESTERN - โ–IDLE - โ–SCORE - โ–TAP - โ–ASTONISHED - 
โ–INTRODUCED - โ–LEISURE - โ–LIEUTENANT - โ–VIOLENCE - โ–FIRMLY - โ–MONSTER - โ–UR - โ–PROPERLY - โ–TWIST - โ–PIRATE - โ–ROBBER - โ–BATTER - โ–WEPT - โ–LEANED - โ–FOG - โ–ORNAMENT - โ–ANDREW - โ–BUSHES - โ–REPUBLIC - โ–CONFIDENT - โ–LEAN - โ–DART - โ–STOOP - โ–CURL - โ–COUNTER - โ–NORTHERN - โ–PEARL - โ–NEAREST - โ–FRANCIS - โ–WANDERING - โ–FREQUENT - โ–STARTLED - โ–STATEMENT - โ–OCCUR - โ–BLOOM - โ–NERVE - โ–INSPECT - โ–INDUCE - โ–FLATTER - โ–DATE - โ–AMBITION - โ–SLOPE - โ–MALE - โ–MADAM - โ–MONK - โ–RENT - โ–CONFIRM - โ–INVESTIGAT - โ–RABBIT - โ–REGIMENT - โ–SUBMIT - โ–SPELL - โ–FURIOUS - โ–RAIL - โ–BESTOW - โ–RALPH - โ–SCATTERED - โ–COMPELLED - โ–THREAD - โ–CHILL - โ–DENY - โ–PRONOUNC - โ–MANKIND - โ–CATTLE - โ–EXECUTION - โ–REBEL - โ–SUPREME - โ–VALUABLE - โ–LIKEWISE - โ–CONVEY - โ–TIDE - โ–GLOOMY - โ–COIN - โ–ACTUAL - โ–TAX - โ–PROVINCE - โ–GRATEFUL - โ–SPIRITUAL - โ–VANISHED - โ–DIANA - โ–HAUNT - โ–DRAGON - โ–CRAWL - โ–CHINA - โ–GRATITUDE - โ–NEAT - โ–FINISH - โ–INTENT - โ–FRIGHT - โ–EMBARRASS - โ–THIRTEEN - โ–RUTH - โ–SLIGHTEST - โ–DEVELOPMENT - โ–INTERVIEW - โ–SPECTACLE - โ–BROOK - VIE - โ–WEAKNESS - โ–AUDIENCE - โ–CONSEQUENTLY - โ–ABROAD - โ–ASPECT - โ–PAINTED - โ–RELEASE - โ–INSULT - โ–SOOTH - โ–DISAPPOINTMENT - โ–EMERG - โ–BRIG - โ–ESTEEM - โ–INVITATION - โ–PASSENGER - โ–PUBLISH - โ–PIANO - โ–IRISH - โ–DESK - โ–BEATEN - โ–FIFTH - โ–IMPULSE - โ–SWEAR - โ–EATEN - โ–PURPLE - โ–COMMITTED - โ–COUNTRIES - โ–PERCEIVE - ISON - โ–CELEBRAT - โ–GRANDMOTHER - โ–SHUDDER - โ–SUNSHINE - โ–SPANISH - โ–HITHERTO - โ–MARILLA - โ–SNAKE - โ–MOCK - โ–INTERFERE - โ–WALTER - โ–AMID - โ–MARBLE - โ–MISSION - TERIOR - โ–DRIVING - โ–FURNITURE - โ–STEADY - โ–CIRCUMSTANCE - โ–INTERPRET - โ–ENCHANT - โ–ERROR - โ–CONVICTION - โ–HELPLESS - โ–MEDICINE - โ–QUALITIES - โ–ITALIAN - โ–HASTENED - โ–OCCASIONALLY - โ–PURSUED - โ–HESITATED - 
โ–INDEPENDENT - โ–OLIVER - โ–LINGER - UX - โ–EXAMINED - โ–REPENT - โ–PHYSICIAN - โ–CHASE - โ–BELOVED - โ–ATTACHED - โ–FLORENCE - โ–HONEY - โ–MOUSE - โ–CRIES - โ–BAKE - โ–POEM - โ–DESTRUCTION - โ–FULFIL - โ–MESSENGER - โ–TRISTRAM - โ–FANCIED - โ–EXCESS - โ–CURSE - โ–CHU - โ–QUANTITY - โ–THORNTON - โ–CREATED - โ–CONTINUALLY - โ–LIGHTNING - โ–BORNE - โ–TOTAL - โ–DISPOSED - โ–RIFLE - โ–POLLY - โ–GOAT - โ–BACKWARD - โ–VIRGINIA - โ–KICK - โ–PERIL - โ–QUO - โ–GLORIOUS - โ–MULTITUDE - โ–LEATHER - โ–ABSENT - โ–DEMON - โ–DEBT - โ–TORTURE - โ–ACCORD - โ–MATE - โ–CATHOLIC - โ–PILL - โ–LIBRARY - โ–PURSUIT - โ–SHIRT - โ–DEAREST - โ–COLLAR - โ–BEACH - โ–ROBE - โ–DECLARE - โ–BRANCH - โ–TEMPT - โ–STEADILY - โ–DISGUST - โ–SILLY - โ–ARRIVE - โ–DRANK - โ–LEVI - โ–COMMUNICAT - โ–RACHEL - โ–WASHINGTON - โ–RESIGN - โ–MEANTIME - โ–LACE - โ–ENGAGEMENT - โ–QUIVER - โ–SEPARATED - โ–DISCUSSION - โ–VENTURED - โ–SURROUNDING - โ–POLISH - โ–NAIL - โ–SWELL - โ–JOKE - โ–LINCOLN - โ–STUDENT - โ–GLITTER - โ–RUSSIAN - โ–READILY - โ–CHRIS - โ–POVERTY - โ–DISGRACE - โ–CHEESE - โ–HEAVILY - โ–SCALE - โ–STAFF - โ–ENTREAT - โ–FAREWELL - โ–LUNCH - โ–PEEP - โ–MULE - โ–SOMEONE - โ–DISAPPEAR - โ–DECISION - โ–PISTOL - โ–PUN - โ–SPUR - โ–ASSUMED - โ–EXTEND - โ–ENTHUSIASM - โ–DEFINITE - โ–UNDERTAKE - โ–COMMITTEE - โ–SIMON - โ–FENCE - โ–APPLIED - โ–RELATED - โ–VICE - โ–UNPLEASANT - โ–PROBABLE - โ–PROCURE - โ–FROWN - โ–CLOAK - โ–HUMANITY - โ–FAMILIES - โ–PHILOSOPHER - โ–DWARF - โ–OVERCOME - โ–DEFEAT - โ–FASTENED - โ–MARSH - โ–CLASSES - โ–TOMB - โ–GRACIOUS - โ–REMOTE - โ–CELL - โ–SHRIEK - โ–RESCUE - โ–POOL - โ–ORGANIZ - โ–CHOSE - โ–CUTTING - โ–COWARD - โ–BORDER - โ–DIRTY - โ–MONKEY - โ–HOOK - โ–CHUCK - โ–EMILY - โ–JEST - โ–PLAC - โ–WEIGH - โ–ASSOCIATE - โ–GLIMPSE - โ–STUCK - โ–BOLT - โ–MURDERER - โ–PONY - โ–DISTINGUISH - โ–INSTITUTION - โ–CUNNING - 
โ–COMPLIMENT - โ–APPETITE - โ–REPUTATION - โ–FEEBLE - โ–KIN - โ–SERIES - โ–GRACEFUL - โ–PLATFORM - โ–BREEZE - โ–PHRASE - โ–CLAY - MONT - โ–RATTL - โ–OPPOSITION - โ–LANE - โ–BOAST - โ–GROWTH - โ–INCLINATION - โ–BEHAVE - โ–SUSAN - โ–DISTINCTION - โ–DISLIKE - โ–NICHOLAS - โ–SATISFY - โ–DRAMA - โ–ELBOW - โ–GAZING - โ–CONSUM - โ–SPIN - โ–OATH - โ–CHANNEL - โ–CHARACTERISTIC - โ–SPEAR - โ–SLAIN - โ–SAUCE - โ–FROG - โ–CONCEPTION - โ–TIMID - โ–ZEAL - โ–APPARENT - SHIRE - โ–CENTER - โ–VARIETY - โ–DUSK - โ–APT - โ–COLUMN - โ–REVENGE - โ–RIVAL - โ–IMITAT - โ–PASSIONATE - โ–SELFISH - โ–NORMAN - โ–REPAIR - โ–THRILL - โ–TREATMENT - โ–ROSA - โ–MARTIN - โ–INDIFFERENT - โ–THITHER - โ–GALLANT - โ–PEPPER - โ–RECOLLECT - โ–VINE - โ–SCARCE - โ–SHIELD - โ–MINGLED - CLOSE - โ–HARSH - โ–BRICK - โ–HUMOR - โ–MISCHIEF - โ–TREMENDOUS - โ–FUNCTION - โ–SMART - โ–SULTAN - โ–DISMISS - โ–THREATENED - โ–CHEAP - โ–FLOCK - โ–ENDEAVOR - โ–WHISK - โ–ITALY - โ–WAIST - โ–FLUTTER - โ–SMOKING - โ–MONARCH - โ–AFRICA - โ–ACCUSE - โ–HERBERT - โ–REFRESH - โ–REJOICE - โ–PILLOW - โ–EXPECTATION - โ–POETRY - โ–HOPELESS - โ–PERISH - โ–PHILOSOPHY - โ–WHISTLE - โ–BERNARD - โ–LAMENT - โ–IMPROVE - โ–SUP - โ–PERPLEX - โ–FOUNTAIN - โ–LEAGUE - โ–DESPISE - โ–IGNORANCE - โ–REFERENCE - โ–DUCK - โ–GROVE - โ–PURSE - โ–PARTNER - โ–PROPHET - โ–SHIVER - โ–NEIGHBOURHOOD - โ–REPRESENTATIVE - SAIL - โ–WIP - โ–ACQUIRED - โ–CHIMNEY - โ–DOCTRINE - โ–MAXIM - โ–ANGLE - โ–MAJORITY - โ–AUTUMN - โ–CONFUSED - โ–CRISTO - โ–ACHIEVE - โ–DISGUISE - โ–REDUCED - โ–EARLIER - โ–THEATRE - โ–DECIDE - MINATED - OLOGICAL - โ–OCCUPATION - โ–VIGOROUS - โ–CONTINENT - โ–DECLINE - โ–COMMUNITY - โ–MOTIONLESS - โ–HATRED - โ–COMMUNICATION - โ–BOWL - โ–COMMENT - โ–APPROVE - โ–CEREMONY - โ–CRIMINAL - โ–SCIENTIFIC - โ–DUCHESS - โ–VIVID - โ–SHIFT - โ–AVAIL - โ–DAMP - โ–JOHNSON - โ–SLENDER - โ–CONTRAST - โ–AMUSEMENT - 
โ–PLOT - โ–LYN - โ–ASSOCIATION - โ–SNATCH - โ–UNCERTAIN - โ–PRESSURE - โ–PERCH - โ–APPLY - โ–PLANET - โ–NOTWITHSTANDING - โ–SWUNG - โ–STIRRED - โ–ATTENDANT - โ–ENJOYMENT - โ–WORRY - โ–ALBERT - โ–NAKED - โ–TALENT - โ–MARIAN - โ–REFORM - โ–DELIBERATE - โ–INTELLIGENT - โ–SENSITIVE - โ–YONDER - โ–PUPIL - โ–FRIGHTFUL - โ–DOUBTFUL - โ–STANDARD - โ–MAGISTRATE - โ–SHEPHERD - โ–STOMACH - โ–DEPOSIT - โ–RENEW - โ–HEDGE - โ–FRANCS - โ–POSSIBILITY - โ–RESEMBLE - โ–FATIGUE - โ–PORTRAIT - โ–FAVORITE - โ–CREAM - โ–BURG - โ–SECRETARY - โ–DIVERS - โ–ACTIVITY - โ–SPECULAT - โ–HUMOUR - โ–FITTED - โ–EXTERNAL - โ–CETERA - โ–WRAPPED - โ–WHIT - โ–FRED - โ–EXAMINATION - โ–LODGING - โ–OWING - โ–JAW - โ–CROW - โ–BALANCE - โ–PUFF - โ–TENDERNESS - โ–PORTHOS - โ–ANCHOR - โ–INTERRUPT - โ–NECESSARILY - โ–PERPETUAL - โ–AGONY - โ–POPE - โ–SCHOLAR - โ–SCOTLAND - โ–SUPPRESS - โ–WRATH - โ–WRECK - โ–EXCEED - โ–PERFECTION - โ–INDIA - โ–TRADITION - โ–SECTION - โ–EASTERN - โ–DOORWAY - โ–WIVES - โ–CONVENTION - โ–ANNOUNC - โ–EGYPT - โ–CONTRADICT - โ–SCRATCH - โ–CENTRAL - โ–GLOVE - โ–WAX - โ–PREPARE - โ–ACCOMPANY - โ–INCREASING - โ–LIBERAL - โ–RAISING - โ–ORANGE - โ–SHOE - โ–ATTRIBUTE - โ–LITERATURE - โ–PUZZLED - โ–WITHDRAW - โ–WHITHER - โ–HAWK - โ–MOONLIGHT - โ–EXAMINE - โ–HAPPILY - โ–PRECEDE - โ–DETECTIVE - โ–INCHES - โ–SOLITARY - โ–DUTCH - โ–NAPOLEON - โ–UNEASY - โ–CARDINAL - โ–BLEW - โ–FOWL - โ–DECORAT - โ–CHILDHOOD - โ–TORMENT - โ–LOSING - โ–PERMISSION - โ–BLANK - โ–UPSTAIRS - โ–CAPACITY - โ–TRIFLE - โ–FOLLY - โ–RECOGNIZE - โ–REMOVE - โ–VENGEANCE - โ–ENTERPRISE - โ–BEDROOM - โ–ANYHOW - โ–INQUIRY - โ–ASHES - โ–DRAG - โ–HUSH - โ–AWKWARD - โ–SATURDAY - โ–GENUINE - โ–SURVIV - โ–SKIRT - โ–AFFECTIONATE - โ–TANG - โ–MUTUAL - โ–DISPUTE - โ–EAGLE - โ–INCOME - โ–BIND - โ–FAME - โ–IMPROVEMENT - ROVING - โ–DIFFER - โ–AWOKE - โ–SLEEVE - โ–SOLITUDE - โ–FAVOURITE - 
JI - โ–DETECT - โ–COMPREHEND - โ–PREPARING - โ–SERPENT - โ–SUMMIT - โ–KNOT - โ–KNIT - โ–COPY - โ–STOPPING - โ–FADED - โ–HIDEOUS - โ–JULIE - STEAD - โ–SHINE - โ–CONFLICT - โ–PROPOSITION - โ–REFUGE - โ–GALLERY - โ–BUNDLE - โ–AXE - โ–SLAVERY - โ–MASK - โ–ALYOSHA - โ–LADDER - โ–DEPARTMENT - โ–DISCHARGE - โ–DEPRESS - โ–GALLOP - โ–SCARLET - โ–KITTY - โ–RECEIVING - โ–SURRENDER - โ–SUSTAIN - โ–TWILIGHT - โ–CONGRESS - โ–IRELAND - โ–FUNNY - โ–LEND - โ–CONSTITUTE - โ–FUNERAL - โ–CRYSTAL - โ–SPAIN - โ–EXCEEDINGLY - โ–DAMN - โ–COMMUN - โ–CIVILIZATION - โ–PREJUDICE - โ–PORCH - โ–ASSISTANT - โ–INDUSTRY - โ–TUMBLE - โ–DEFENCE - โ–HITHER - โ–SMOT - โ–COLONI - โ–AMAZEMENT - โ–MARGUERITE - โ–MIRACLE - โ–INHERIT - โ–BEGGAR - โ–ENVELOPE - โ–INDIGNATION - โ–NATASHA - โ–PROPOSAL - โ–FRAGMENT - โ–ROUSED - โ–ROAST - ENCIES - โ–COMMENCED - โ–RESOURCE - โ–POPULATION - โ–QUOTH - โ–PURSUE - โ–EDUCAT - โ–AFFLICT - โ–CONTACT - โ–CRIMSON - โ–DIVISION - โ–DISORDER - โ–COPPER - โ–SOLICIT - โ–MODERATE - โ–DRUM - โ–SWIM - โ–SALUTE - โ–ASSUME - โ–MUSCLE - โ–OVERWHELM - โ–SHAKESPEARE - โ–STRUGGLING - โ–TRANQUIL - โ–CHICKEN - โ–TREAD - โ–CLAW - โ–BIBLE - โ–RIDGE - โ–THREAT - โ–VELVET - โ–EXPOSED - โ–IDIOT - โ–BARREL - โ–PENNY - โ–TEMPTATION - โ–DANGLARS - โ–CENTURIES - โ–DISTRIBUT - โ–REJECT - โ–RETORTED - โ–CONCENTRAT - โ–CORDIAL - โ–MOTOR - โ–CANNON - KEEP - โ–WRETCH - โ–ASSURANCE - โ–THIEF - โ–SURVEY - โ–VITAL - โ–RAILWAY - โ–JACKSON - โ–CRASH - โ–GROWL - โ–COMBAT - โ–RECOLLECTION - โ–SECURITY - โ–JACOB - โ–CLUTCH - โ–BLANKET - โ–NANCY - โ–CELLAR - โ–CONVENIENT - โ–INDIGNANT - โ–COARSE - โ–WORM - โ–SCREEN - โ–TRANSPORT - โ–BULLET - โ–APPRECIATE - โ–DEVOTION - โ–INVISIBLE - โ–DRIED - โ–MIXTURE - โ–CANDID - โ–PERFORMANCE - โ–RIPE - โ–EXQUISITE - โ–BARGAIN - โ–TOBACCO - โ–LOYAL - โ–MOULD - โ–ATTENTIVE - โ–DOROTHY - โ–BRUTE - โ–ESTABLISHMENT - โ–ABILITY - 
โ–INHABIT - โ–OBSCURE - โ–BORROW - โ–ESSENCE - โ–DISMAY - โ–FLEE - โ–BLADE - โ–PLUCK - โ–COFFIN - โ–SUNSET - โ–STEPHEN - โ–ECONOMIC - โ–HOLIDAY - โ–MECHANICAL - โ–COTTON - โ–AWAKENED - โ–SEIZE - โ–RIDICULOUS - โ–SANCHO - โ–HESITATION - โ–CORPSE - โ–SAVING - HOLD - FOOT - โ–ELDEST - โ–DESPITE - โ–EDITH - โ–CHERISH - โ–RESISTANCE - โ–WILSON - โ–ARGUE - โ–INQUIRE - โ–APPREHENSION - โ–AVENUE - โ–DRAKE - โ–PROPOSE - HURST - โ–INFERIOR - โ–STAIRCASE - โ–WHEREFORE - โ–CARLYLE - โ–COUCH - โ–ROUTE - โ–POLITICS - โ–TOMORROW - โ–THRONG - โ–NAUGHT - โ–SUNLIGHT - โ–INDIFFERENCE - โ–OBEDIENCE - โ–RECEPTION - โ–VEGETABLE - โ–IMPERFECT - โ–RESIDENCE - โ–TURKEY - โ–VIOLET - โ–SARAH - โ–ALTAR - โ–GRIEVE - โ–JERK - โ–ENSU - โ–MAGICIAN - โ–BLOSSOM - โ–LANTERN - โ–RESOLUTE - โ–THOUGHTFULLY - โ–FORTNIGHT - โ–TRUMPET - โ–VALJEAN - โ–UNWILLING - โ–LECTURE - โ–WHEREUPON - โ–HOLLAND - โ–CHANGING - โ–CREEK - โ–SLICE - โ–NORMAL - โ–ANNIE - โ–ACCENT - โ–FREDERICK - โ–DISAGREEABLE - โ–RUBBED - โ–DUMB - โ–ESTABLISH - โ–IMPORT - โ–AFFIRM - โ–MATTHEW - โ–BRISK - โ–CONVERT - โ–BENDING - โ–IVAN - โ–MADEMOISELLE - โ–MICHAEL - โ–EASIER - โ–JONES - โ–FACING - โ–EXCELLENCY - โ–LITERARY - โ–GOSSIP - โ–DEVOUR - โ–STAGGER - โ–PENCIL - โ–AVERAGE - โ–HAMMER - โ–TRIUMPHANT - โ–PREFERRED - โ–APPLICATION - โ–OCCUPY - โ–AUTHORITIES - BURN - โ–ASCERTAIN - โ–CORRIDOR - โ–DELICIOUS - โ–PRACTISE - โ–UNIVERSE - โ–SHILLING - โ–CONTEST - โ–ASHORE - โ–COMMIT - โ–ADMINISTRATION - โ–STUDIED - โ–RIGID - โ–ADORN - โ–ELSEWHERE - โ–INNOCENCE - โ–JOURNAL - โ–LANDSCAPE - โ–TELEGRAPH - โ–ANGRILY - โ–CAMPAIGN - โ–UNJUST - โ–CHALLENGE - โ–TORRENT - โ–RELATE - โ–ASSEMBLED - โ–IMPRESSED - โ–CANOE - โ–CONCLUD - โ–QUIXOTE - โ–SATISFACTORY - โ–NIECE - โ–DEAF - โ–RAFT - โ–JIMMY - โ–GLID - โ–REGULAT - โ–CHATTER - โ–GLACIER - โ–ENVY - โ–STATUE - โ–BOSTON - โ–RICHMOND - โ–DENIED - โ–FANNY - 
โ–SOLOMON - โ–VULGAR - โ–STALK - โ–REPLACE - โ–SPOON - โ–BASIN - โ–FEATURE - โ–CONVICT - โ–ARCHITECT - โ–ADMIRAL - โ–RIBBON - โ–PERMANENT - โ–APRIL - โ–JOLLY - โ–NEIGHBORHOOD - โ–IMPART - BOROUGH - CAMP - โ–HORRID - โ–IMMORTAL - โ–PRUDENCE - โ–SPANIARD - โ–SUPPOSING - โ–TELEPHONE - โ–TEMPERATURE - โ–PENETRATE - โ–OYSTER - โ–APPOINTMENT - โ–EGYPTIAN - โ–DWELT - โ–NEPHEW - โ–RAILROAD - โ–SEPTEMBER - โ–DEVICE - โ–WHEAT - โ–GILBERT - โ–ELEGANT - โ–ADVERTISE - โ–RATIONAL - โ–TURTLE - โ–BROOD - โ–ASSEMBLY - โ–CULTIVATE - โ–EDITOR - โ–SPECIMEN - โ–UNDOUBTEDLY - โ–WHALE - โ–DROPPING - โ–BALLOON - โ–MEDICAL - COMB - โ–COMPOSITION - โ–FOOTSTEPS - โ–LAUNCELOT - โ–DISCOURSE - โ–ERRAND - โ–CONVERSE - โ–ADVANCING - โ–DOWNSTAIRS - โ–TUMULT - โ–CORRUPT - โ–SUFFICE - โ–ANGUISH - โ–SHAGGY - โ–RETIRE - โ–TIMBER - โ–BLAZE - โ–ABSTRACT - โ–EMBROIDER - โ–PHOTOGRAPH - โ–PROSPERITY - โ–TERRIBLY - โ–TERRITORY - โ–THRESHOLD - โ–PAVEMENT - โ–INJURED - โ–LIMP - โ–AGITATION - โ–RASCAL - โ–PRESUME - โ–OBSERVING - โ–OBSTACLE - โ–SIMPLICITY - โ–SLUMBER - โ–SUPPLIED - โ–COMBINATION - โ–DRAIN - โ–WILDERNESS - โ–BELIEVING - โ–VILLAIN - โ–RECKLESS - โ–INJURY - โ–CLAPP - โ–FRIDAY - โ–HERCULES - โ–KENNEDY - โ–SYMPTOM - โ–SLEDGE - โ–CEILING - โ–LEMON - โ–PLAGUE - โ–MONDAY - โ–CANVAS - โ–IMPATIENCE - โ–UNCOMFORTABLE - โ–ACCESS - โ–FROZEN - โ–SENATOR - โ–FRANZ - โ–SWIMMING - โ–BARRIER - โ–ADJUST - โ–COMPARISON - โ–PROCLAIM - โ–WRINKL - โ–OVERLOOK - โ–MITYA - โ–GUILT - โ–PERCEPTION - โ–PRECAUTION - โ–SPECTATOR - โ–SURPRISING - โ–DISTRACT - โ–DISDAIN - โ–BONNET - โ–MAGNET - โ–PROFESS - โ–CONFOUND - โ–NARRATIVE - โ–STRUCTURE - โ–SKETCH - โ–ULTIMATE - โ–GLOBE - โ–INSECT - FICIENCY - โ–ORCHARD - โ–AMIABLE - โ–DESCENT - โ–INDEPENDENCE - โ–MANUFACTURE - โ–SPRINKLE - โ–NIGHTINGALE - โ–CUSHION - โ–EMINENT - โ–SCOTT - โ–ARRAY - โ–COSETTE - โ–WAVING - โ–EXTRACT - 
โ–IRREGULAR - โ–PERSECUT - โ–DERIVED - โ–WITHDREW - โ–CAUTION - โ–SUSPICIOUS - โ–MEMORIES - โ–NOWHERE - โ–SUBTLE - โ–THOROUGH - Q - โ–APPROPRIATE - โ–SLAUGHTER - โ–YOURSELVES - โ–THUMB - โ–TWAS - โ–ABODE - โ–BIDDING - โ–CONSPICUOUS - โ–REBECCA - โ–SERGEANT - โ–APRON - โ–ANTICIPATE - โ–DISCIPLINE - โ–GLANCING - โ–PILGRIM - โ–SULLEN - โ–CONTRIBUTE - โ–PRAIRIE - โ–CARVED - โ–COMMERCE - โ–EXCLAMATION - โ–MUSCULAR - โ–NOVEMBER - โ–PHENOMENA - โ–SYMBOL - โ–UMBRELLA - โ–DIMINISH - โ–PARLOUR - โ–THREATENING - โ–STUMP - โ–EXTENSIVE - โ–PLEASING - โ–REMEMBRANCE - โ–COMBINED - โ–SHERIFF - โ–SHAFT - โ–LAURA - โ–INTERCOURSE - โ–STRICKEN - โ–SUPPLIES - โ–LANDLORD - โ–SHRINK - โ–PRICK - โ–CAESAR - โ–DRUG - โ–BEWILDERED - โ–NAUTILUS - โ–BRUTAL - โ–COMMERCIAL - โ–MAGGIE - โ–SPHERE - โ–VIRGIN - โ–BRETHREN - โ–DESTINY - โ–POLICY - โ–TERRIFIED - โ–HOUSEKEEPER - โ–CRAZY - โ–ARDENT - โ–DISCERN - โ–WRAP - โ–MARQUIS - โ–RUSSIA - MOUTH - โ–BRITAIN - โ–HARBOUR - โ–CONCERT - โ–DONKEY - โ–DAMAGE - โ–SLIM - ABOUT - โ–LUXURY - โ–MONSTROUS - โ–TENDENCY - โ–PARADISE - โ–CULTURE - โ–JULIUS - โ–RAOUL - โ–REMEDY - โ–DECAY - โ–SCOLD - โ–SPLIT - โ–ASSAULT - โ–DECEMBER - โ–MOSCOW - โ–EXPLORE - โ–TROUSERS - โ–WRIST - PIECE - โ–MUSKET - โ–VALENTINE - โ–TYRANT - โ–ABRAHAM - โ–MEDIUM - โ–ARTIFICIAL - โ–FACULTY - โ–OBLIGATION - โ–RESEMBLANCE - โ–INQUIRIES - โ–DETAIN - โ–SWARM - โ–PLEDGE - โ–ADMIRABLE - โ–DEFECT - โ–SUPERINTEND - โ–PATRIOT - โ–CLUNG - โ–DISMAL - โ–RECIT - โ–IGNOR - โ–AMELIA - โ–JUSTIFY - โ–ELEPHANT - โ–ESTIMATE - โ–KNELT - โ–SERVING - โ–WHIM - โ–SHRILL - โ–STUDIO - โ–TEXT - โ–ALEXANDER - โ–WROUGHT - โ–ABUNDANT - โ–SITUATED - โ–REGAIN - โ–FIERY - โ–SNEER - โ–SWEAT - โ–GLARE - โ–NIGH - โ–ESCORT - โ–INEVITABLE - โ–PSMITH - โ–RELUCTANT - โ–PRECEDING - โ–RESORT - โ–OUTRAGE - โ–AMBASSADOR - โ–CONSOLATION - โ–RECOGNITION - โ–REMORSE - โ–BEHALF - 
โ–FORMIDABLE - โ–GRAVITY - โ–DIVIDE - โ–CONFRONT - โ–GIGANTIC - โ–OCTOBER - โ–FLANK - โ–SLEW - โ–CLARA - โ–FILM - โ–BULK - โ–POMP - โ–ELEANOR - โ–EMPHASIS - โ–JAPANESE - โ–CAVALRY - โ–EXCLUSIVE - โ–PERFUME - โ–BRONZE - โ–FEDERAL - โ–LIQUID - โ–RUBBING - โ–OVEN - DOLPH - โ–CONVULS - โ–DEPRIVED - โ–RESPONSIBILITY - โ–SIGNIFICANT - โ–WAISTCOAT - โ–CLUSTER - โ–MARTHA - โ–REVERSE - โ–ATTORNEY - โ–DROOP - โ–SKILFUL - โ–HABITUAL - โ–PUMP - โ–INTERVEN - โ–OWL - โ–CONJECTURE - โ–FANTASTIC - โ–RESPONSIBLE - โ–DESTINED - โ–DOCUMENT - โ–THEREUPON - โ–GODDESS - โ–PACIFIC - โ–WARRANT - โ–COSTUME - โ–BRIDLE - โ–CALIFORNIA - โ–DEMOCRATIC - โ–EUSTACE - โ–SQUIRREL - โ–UNCOMMON - โ–MARVELLOUS - โ–PLOUGH - โ–TRAGEDY - โ–VAULT - โ–HESITATE - โ–REFRAIN - โ–ADMIRING - โ–CORPORAL - โ–ENTITLED - โ–SHREWD - โ–SQUEEZ - โ–ACCURATE - โ–TEMPEST - โ–MONUMENT - โ–SIEGE - โ–CHINESE - โ–RAVEN - โ–LOUNG - โ–ASSASSIN - โ–INFLICT - โ–AGITATED - โ–DESIRABLE - โ–EARLIEST - โ–LAUNCH - โ–PILOT - โ–PULSE - โ–MUTE - LEIGH - โ–LIQUOR - โ–SCARECROW - โ–SKULL - โ–DESOLATE - โ–SUBLIME - โ–SERENE - โ–RECESS - โ–WAKING - โ–CHARLOTTE - โ–CIRCULAR - โ–INJUSTICE - โ–PINOCCHIO - โ–PRISCILLA - โ–THYSELF - โ–OCCURRENCE - โ–CASUAL - โ–FRANTIC - โ–LEGEND - โ–FERTIL - โ–BACKGROUND - โ–DELICACY - โ–ESTRALLA - โ–MANUSCRIPT - โ–RESPONSE - โ–UNIVERSITY - โ–WOLVES - โ–SCANDAL - โ–STUMBLE - โ–HOARSE - โ–BODILY - โ–CONVENT - โ–EXAMINING - โ–INCAPABLE - โ–PERCEIVING - โ–PHILADELPHIA - โ–SUBSEQUENT - โ–THIEVES - โ–ACCUMULAT - โ–DAMSEL - โ–SCOTCH - โ–UNDERNEATH - โ–NOBILITY - โ–SMASH - โ–REVOLT - โ–ENGAGE - โ–CATHEDRAL - โ–CHAMPION - โ–DESPATCH - โ–ETERNITY - โ–JANUARY - โ–PLEADED - โ–PROBABILITY - โ–JIMMIE - โ–PARALLEL - โ–FISHERMAN - โ–JERRY - โ–SWORE - โ–DRAUGHT - โ–OPPONENT - โ–PRIMITIVE - โ–SIGNIFICANCE - โ–SUBSTANTIAL - โ–AMAZED - โ–DUNBAR - โ–COMMEND - โ–CONTEMPLATE - โ–TESTIMONY 
- โ–IMPERIAL - โ–ADAPT - โ–JUICE - โ–CALAMIT - CULAR - โ–CHATEAU - โ–PHOENIX - โ–PRUDENT - โ–SOLUTION - โ–VILLEFORT - โ–REACTION - โ–RELAX - โ–YU - โ–PROHIBIT - โ–DISTRUST - โ–PLUNDER - โ–WELFARE - โ–NAVIGAT - โ–PARLOR - โ–LAZY - โ–DETACH - OMETER - โ–PRIV - โ–DISCOURAGE - โ–OBSTINATE - โ–REJOICING - โ–SERMON - โ–VEHICLE - โ–FANCIES - โ–ENLIGHTEN - โ–ACUTE - โ–ILLUSION - โ–ANTHEA - โ–MARTIAN - โ–EXCITE - โ–GENEROSITY - OLOGIST - โ–AMAZING - โ–UNWORTHY - โ–INTERNAL - โ–INCENSE - โ–VIBRAT - โ–ADHERE - ROACH - โ–FEBRUARY - โ–MEXICAN - โ–POTATOES - โ–INCESSANT - โ–INTERPOSED - โ–PARCEL - โ–VEXED - โ–PROMOTE - MIDST - โ–ARISTOCRAT - โ–CYRIL - โ–EMBARK - โ–ABUNDANCE - โ–LITERALLY - โ–SURGEON - โ–TERRACE - โ–ATLANTIC - โ–MARTYR - โ–SPECK - โ–SENATE - โ–LOAF - โ–ADMINISTER - โ–APPREHEND - โ–SUBDUED - โ–TEMPORARY - โ–DOMINION - โ–ELABORATE - โ–DIGNIFIED - โ–ELIZA - โ–SPLASH - โ–CONSEIL - โ–DEXTER - โ–UNSEEN - โ–TRAGIC - VOCATION - โ–GRATIFY - โ–BACHELOR - โ–DEFENSE - โ–EXCURSION - โ–FACULTIES - โ–PROPRIETOR - โ–SYMPATHETIC - โ–UNNECESSARY - โ–RADIANT - โ–VACANT - โ–OUNCE - โ–SCREW - โ–PHENOMENON - โ–PROMINENT - โ–WORRIED - โ–STUDIES - โ–CLIMATE - โ–KEITH - โ–ARAMIS - โ–BLISS - โ–CONTINUAL - โ–SURPASS - โ–HEBREW - โ–IDENTITY - โ–PROVOKE - โ–TEMPERAMENT - โ–CHARIOT - โ–HARBOR - โ–NINTH - โ–PRIOR - โ–DESIROUS - โ–JERUSALEM - โ–UNDERTAKING - โ–EDISON - โ–MIRTH - โ–SCOUT - โ–APPARATUS - โ–ILLUSTRATION - โ–INTELLIGIBLE - โ–INVARIABLY - โ–PIERCED - โ–REVIEW - โ–FLICKER - โ–HAZARD - โ–REVELATION - โ–DIXON - โ–EXCITING - โ–GOSPEL - โ–CONSTANCE - โ–OVERTAKE - โ–GUINEA - โ–ALADDIN - โ–CHICAGO - โ–TULLIVER - โ–HAMILTON - โ–GARRISON - โ–DISCIPLE - โ–INTENSITY - โ–TRAITOR - โ–CHANCELLOR - โ–PROVERB - โ–DAGGER - โ–FORESEE - โ–CONFIDE - โ–GLIMMER - โ–CHAUVELIN - โ–ILLUSTRATE - โ–VOLUNTEER - โ–JUNGLE - โ–STREAK - โ–SUNRISE - โ–DISSOLV - โ–QUEST - 
โ–AWHILE - โ–FELICITY - โ–LEGISLATURE - โ–LEONORA - โ–MAGAZINE - โ–PITIFUL - โ–COLONY - โ–SHAWL - โ–ARRIVING - โ–FUNDAMENTAL - โ–CARPENTER - โ–OVERFLOW - โ–EXPAND - โ–HARVEST - โ–FEMININE - โ–INNUMERABLE - โ–SCRAMBLE - โ–TWENTIETH - โ–TRIFLING - โ–GHASTL - โ–CONQUEST - โ–DANIEL - โ–FACILIT - โ–FORSAKE - โ–BEHAVIOUR - โ–GORGEOUS - โ–PRODUCING - โ–HAPPIER - โ–PROMISING - โ–RAINBOW - โ–INSTINCTIVELY - โ–DECREE - โ–EYEBROWS - โ–IRRESISTIBLE - โ–PHARAOH - โ–SCROOGE - โ–UNNATURAL - โ–CRUMBS - โ–REFINED - โ–DREARY - โ–TRENCH - โ–CONVINCE - โ–FRINGE - โ–EXTREMITY - โ–INTIMACY - โ–SCOUNDREL - โ–SUFFRAGE - โ–UNEASINESS - โ–BARRICADE - โ–CIRCULAT - โ–SAMUEL - โ–BRUCE - โ–DARCY - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.3 lsm_weight: 0.1 length_normalized_loss: false use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram5000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: n_fft: 512 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 27 num_freq_mask: 2 apply_time_mask: true time_mask_width_ratio_range: - 0.0 - 0.05 num_time_mask: 10 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_en_bpe5000_sp/train/feats_stats.npz preencoder: null preencoder_conf: {} encoder: conformer encoder_conf: output_size: 512 attention_heads: 8 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true rel_pos_type: latest pos_enc_layer_type: rel_pos selfattention_layer_type: 
rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.7a1 distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
navteca/ms-marco-MiniLM-L-12-v2
navteca
2022-03-14T15:56:35Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "jax", "bert", "text-classification", "en", "license:mit", "region:us" ]
text-classification
2022-03-14T14:52:30Z
--- language: en license: mit pipeline_tag: text-classification tags: - sentence-transformers --- # Cross-Encoder for MS Marco The model can be used for Information Retrieval: given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order of score. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Training Data This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task. ## Usage Usage is easiest when you have [SentenceTransformers](https://www.sbert.net/) installed. Then you can use the pre-trained model like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name', max_length=512) scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2')]) ``` ## Performance In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) datasets.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000 |
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100 |
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500 |
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800 |
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960 |
| **Version 1 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000 |
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900 |
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680 |
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 |
| **Other models** | | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 |
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 |
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 |
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 |
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 |
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 |

Note: Runtime was computed on a V100 GPU.
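The scores returned by `model.predict` can be used directly to re-rank retrieved passages. A minimal sketch of that final sorting step; the scores below are illustrative placeholders, not real model output:

```python
# One score per (query, passage) pair, as CrossEncoder.predict would return.
# These values are illustrative placeholders, not real model output.
passages = [
    "Berlin is well known for its museums.",
    "Berlin has a population of around 3.7 million inhabitants.",
]
scores = [0.12, 0.87]

# Re-rank: sort the passages by decreasing cross-encoder score
ranked = [p for p, _ in sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)]
print(ranked[0])  # the passage the model scores as most relevant
```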
GPL/webis-touche2020-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:25:36Z
119
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:25:34Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model is easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. 
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
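The CLS embeddings produced by the snippets above are typically compared with cosine similarity for semantic search. A minimal, self-contained sketch of that comparison; the short vectors here are placeholders standing in for real 768-dimensional model output:

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = a.b / (|a| |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Placeholder embeddings standing in for cls_pooling output (real ones are 768-dim)
query_emb = [0.2, 0.9, 0.1]
doc_embs = [
    [0.1, 0.8, 0.2],   # close to the query direction
    [-0.9, 0.1, 0.4],  # pointing elsewhere
]

sims = [cosine_similarity(query_emb, d) for d in doc_embs]
best = max(range(len(sims)), key=lambda i: sims[i])
print(f"best document index: {best}")  # → 0
```

In practice, libraries such as sentence-transformers batch and vectorize this computation, but the ranking logic is the same.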
GPL/signal1m-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:25:02Z
127
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:25:00Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model is easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. 
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
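The CLS embeddings produced by the snippets above are typically compared with cosine similarity for semantic search. A minimal, self-contained sketch of that comparison; the short vectors here are placeholders standing in for real 768-dimensional model output:

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = a.b / (|a| |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Placeholder embeddings standing in for cls_pooling output (real ones are 768-dim)
query_emb = [0.2, 0.9, 0.1]
doc_embs = [
    [0.1, 0.8, 0.2],   # close to the query direction
    [-0.9, 0.1, 0.4],  # pointing elsewhere
]

sims = [cosine_similarity(query_emb, d) for d in doc_embs]
best = max(range(len(sims)), key=lambda i: sims[i])
print(f"best document index: {best}")  # → 0
```

In practice, libraries such as sentence-transformers batch and vectorize this computation, but the ranking logic is the same.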
GPL/nq-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:24:29Z
125
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:24:27Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model is easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. 
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 140000 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `gpl.toolkit.loss.MarginDistillationLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 140000, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
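The CLS embeddings produced by the snippets above are typically compared with cosine similarity for semantic search. A minimal, self-contained sketch of that comparison; the short vectors here are placeholders standing in for real 768-dimensional model output:

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = a.b / (|a| |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Placeholder embeddings standing in for cls_pooling output (real ones are 768-dim)
query_emb = [0.2, 0.9, 0.1]
doc_embs = [
    [0.1, 0.8, 0.2],   # close to the query direction
    [-0.9, 0.1, 0.4],  # pointing elsewhere
]

sims = [cosine_similarity(query_emb, d) for d in doc_embs]
best = max(range(len(sims)), key=lambda i: sims[i])
print(f"best document index: {best}")  # → 0
```

In practice, libraries such as sentence-transformers batch and vectorize this computation, but the ranking logic is the same.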
GPL/arguana-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:22:47Z
121
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:22:45Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# {MODEL_NAME}

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:

```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`gpl.toolkit.loss.MarginDistillationLoss`

Parameters of the fit()-Method:

```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 140000,
    "warmup_steps": 1000,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
GPL/bioasq-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:22:31Z
124
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:22:29Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# {MODEL_NAME}

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:

```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`gpl.toolkit.loss.MarginDistillationLoss`

Parameters of the fit()-Method:

```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 140000,
    "warmup_steps": 1000,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
GPL/trec-covid-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:22:13Z
125
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:22:10Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# {MODEL_NAME}

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:

```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`gpl.toolkit.loss.MarginDistillationLoss`

Parameters of the fit()-Method:

```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 140000,
    "warmup_steps": 1000,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
GPL/cqadupstack-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:18:20Z
114
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:18:17Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# {MODEL_NAME}

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:

```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`gpl.toolkit.loss.MarginDistillationLoss`

Parameters of the fit()-Method:

```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 140000,
    "warmup_steps": 1000,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
GPL/trec-covid-v2-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:18:03Z
127
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:18:01Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# {MODEL_NAME}

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:

```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`gpl.toolkit.loss.MarginDistillationLoss`

Parameters of the fit()-Method:

```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 140000,
    "warmup_steps": 1000,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
GPL/scifact-distilbert-tas-b-gpl-self_miner
GPL
2022-03-14T14:17:30Z
120
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T14:16:53Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# {MODEL_NAME}

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:

```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`gpl.toolkit.loss.MarginDistillationLoss`

Parameters of the fit()-Method:

```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 140000,
    "warmup_steps": 1000,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
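The `WarmupLinear` scheduler listed in the training parameters ramps the learning rate up linearly over the warmup steps and then decays it linearly toward zero over the remaining steps. A minimal plain-Python sketch of that schedule, using the card's values (`lr=2e-05`, `warmup_steps=1000`, `steps_per_epoch=140000` for one epoch); the exact boundary behavior in sentence-transformers may differ slightly:

```python
def warmup_linear_lr(step, base_lr=2e-05, warmup_steps=1000, total_steps=140000):
    """Learning rate under a warmup-then-linear-decay schedule."""
    if step < warmup_steps:
        # Linear ramp from 0 up to base_lr over the warmup phase
        return base_lr * step / max(1, warmup_steps)
    # Linear decay from base_lr down to 0 over the remaining steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(warmup_linear_lr(500))     # mid-warmup: half of base_lr
print(warmup_linear_lr(1000))    # peak learning rate
print(warmup_linear_lr(140000))  # end of training: decayed to 0
```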
cptanalatriste/request-for-help
cptanalatriste
2022-03-14T11:54:48Z
4
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-12T17:19:43Z
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: cptanalatriste/request-for-help
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# cptanalatriste/request-for-help

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1342
- Train Sparse Categorical Accuracy: 1.0
- Validation Loss: 0.1514
- Validation Sparse Categorical Accuracy: 0.9796
- Epoch: 19

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.8291 | 0.375 | 0.7483 | 0.3673 | 0 |
| 0.7470 | 0.375 | 0.6302 | 0.8163 | 1 |
| 0.6504 | 0.625 | 0.6079 | 0.9184 | 2 |
| 0.6128 | 0.7812 | 0.5882 | 0.8980 | 3 |
| 0.5939 | 0.8125 | 0.5639 | 0.9184 | 4 |
| 0.5300 | 0.9688 | 0.5378 | 0.9184 | 5 |
| 0.5306 | 0.9688 | 0.5098 | 0.9388 | 6 |
| 0.4963 | 1.0 | 0.4806 | 0.9388 | 7 |
| 0.4683 | 0.9688 | 0.4434 | 0.9592 | 8 |
| 0.3959 | 1.0 | 0.4070 | 0.9796 | 9 |
| 0.3807 | 1.0 | 0.3762 | 0.9796 | 10 |
| 0.3509 | 1.0 | 0.3439 | 0.9796 | 11 |
| 0.3013 | 1.0 | 0.3064 | 0.9796 | 12 |
| 0.2848 | 1.0 | 0.2931 | 0.9796 | 13 |
| 0.2587 | 1.0 | 0.2681 | 0.9796 | 14 |
| 0.2510 | 1.0 | 0.2295 | 0.9796 | 15 |
| 0.1867 | 1.0 | 0.2000 | 0.9796 | 16 |
| 0.1652 | 1.0 | 0.1793 | 0.9796 | 17 |
| 0.1297 | 1.0 | 0.1637 | 0.9796 | 18 |
| 0.1342 | 1.0 | 0.1514 | 0.9796 | 19 |

### Framework versions

- Transformers 4.17.0
- TensorFlow 2.6.2
- Datasets 1.18.4
- Tokenizers 0.11.6
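The "Sparse Categorical Accuracy" metric reported in the table above is simply the fraction of examples whose argmax over the model's per-class scores matches the integer class label. A minimal plain-Python sketch with hypothetical logits and labels (not this model's actual outputs):

```python
def sparse_categorical_accuracy(logits, labels):
    """Fraction of rows where the argmax class index equals the integer label."""
    correct = sum(
        1 for row, label in zip(logits, labels)
        if max(range(len(row)), key=row.__getitem__) == label
    )
    return correct / len(labels)

# Hypothetical per-class scores for 3 examples, 2 classes
logits = [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]
labels = [1, 0, 0]
print(sparse_categorical_accuracy(logits, labels))  # 2 of 3 predictions correct
```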
fenixobia/distilbert-base-uncased-finetuned-cola
fenixobia
2022-03-14T11:52:00Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-07T17:07:59Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: cola
    metrics:
    - name: Matthews Correlation
      type: matthews_correlation
      value: 0.5595884617444483
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7808
- Matthews Correlation: 0.5596

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.522 | 1.0 | 535 | 0.5361 | 0.4215 |
| 0.3472 | 2.0 | 1070 | 0.5309 | 0.5046 |
| 0.2342 | 3.0 | 1605 | 0.6451 | 0.5351 |
| 0.1673 | 4.0 | 2140 | 0.7808 | 0.5596 |
| 0.1249 | 5.0 | 2675 | 0.8750 | 0.5565 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.8.1
- Datasets 1.18.4
- Tokenizers 0.11.6
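The Matthews correlation coefficient used above is computed from the binary confusion counts (true/false positives and negatives) and ranges from -1 to 1. A minimal plain-Python sketch with hypothetical predictions (not this model's actual outputs); in practice `sklearn.metrics.matthews_corrcoef` does the same:

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Conventionally defined as 0 when any confusion-matrix margin is empty
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corrcoef([1, 1, 0, 0], [1, 0, 0, 0]))
```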
STSP/CT_Test
STSP
2022-03-14T11:31:46Z
15
0
tf-keras
[ "tf-keras", "keras", "image-classification", "region:us" ]
image-classification
2022-03-13T17:04:36Z
---
tags:
- keras
- image-classification
---
Kalaoke/embeddings_dense_model
Kalaoke
2022-03-14T09:54:04Z
119
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-03-14T09:53:55Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# Kalaoke/embeddings_dense_model

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 50 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('Kalaoke/embeddings_dense_model')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Kalaoke/embeddings_dense_model)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 1050 with parameters:

```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the fit()-Method:

```
{
    "epochs": 3,
    "evaluation_steps": 1000,
    "evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 315,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Asym(
    (topic-0): Dense({'in_features': 768, 'out_features': 50, 'bias': False, 'activation_function': 'torch.nn.modules.activation.Tanh'})
    (title-0): Dense({'in_features': 768, 'out_features': 50, 'bias': False, 'activation_function': 'torch.nn.modules.activation.Tanh'})
  )
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
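Each branch of the `Asym` head above projects the 768-dimensional pooled embedding down to 50 dimensions with a bias-free `Dense` layer followed by `tanh`; the projection is just a matrix product and an element-wise tanh. A minimal plain-Python sketch with a toy 3→2 weight matrix (not the trained weights):

```python
import math

def dense_tanh(vector, weights):
    """Bias-free dense projection followed by element-wise tanh."""
    return [math.tanh(sum(w * x for w, x in zip(row, vector))) for row in weights]

pooled = [0.5, -1.0, 0.25]   # toy pooled embedding (dim 3 standing in for 768)
W = [[1.0, 0.0, 0.0],        # toy weight rows (output dim 2 standing in for 50)
     [0.0, 1.0, 2.0]]
print(dense_tanh(pooled, W))
```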
robertou2/roberta-base-bne-finetuned-amazon_reviews_multi
robertou2
2022-03-14T09:17:59Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-14T08:34:50Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: amazon_reviews_multi
      type: amazon_reviews_multi
      args: es
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9325
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-bne-finetuned-amazon_reviews_multi

This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2368
- Accuracy: 0.9325

## Model description

Test model from session 4 of the "NLP de 0 a 100" course.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1919 | 1.0 | 1250 | 0.1690 | 0.933 |
| 0.0972 | 2.0 | 2500 | 0.2368 | 0.9325 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
lijingxin/distilbert-base-uncased-finetuned-clinc
lijingxin
2022-03-14T09:09:37Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-14T09:05:40Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: clinc_oos
      type: clinc_oos
      args: plus
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9161290322580645
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-clinc

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7755
- Accuracy: 0.9161

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2992 | 1.0 | 318 | 3.2969 | 0.7339 |
| 2.6329 | 2.0 | 636 | 1.8817 | 0.8235 |
| 1.5442 | 3.0 | 954 | 1.1561 | 0.8939 |
| 1.0132 | 4.0 | 1272 | 0.8595 | 0.9103 |
| 0.7953 | 5.0 | 1590 | 0.7755 | 0.9161 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.16.1
- Tokenizers 0.10.3
z-uo/led-base-qasper
z-uo
2022-03-14T09:04:40Z
4
0
transformers
[ "transformers", "tensorboard", "question_answering", "en", "dataset:qasper", "endpoints_compatible", "region:us" ]
null
2022-03-11T18:27:48Z
--- language: en tags: - question_answering datasets: - qasper --- # led-base for QA with qasper A 10-epoch training run of [Longformer Encoder Decoder Baselines for Qasper](https://github.com/allenai/qasper-led-baseline). ## How to use ``` git clone https://github.com/allenai/qasper-led-baseline.git cd qasper-led-baseline git clone https://huggingface.co/z-uo/led-base-qasper pip install -r requirements.txt # TODO test python scripts/sample_qasper_answers.py --model led-base-qasper --data qasper-dev-v0.2.json --samples 10 --out test_only.log ```
holtin/distilbert-base-uncased-holtin-finetuned-squad
holtin
2022-03-14T08:09:33Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-14T07:57:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: distilbert-base-uncased-holtin-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-holtin-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 3.8541 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 84 | 4.4978 | | No log | 2.0 | 168 | 3.9588 | | No log | 3.0 | 252 | 3.8541 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
ComCom/skt_kogpt2-base-v2
ComCom
2022-03-14T07:37:27Z
5
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "ko", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-14T06:28:29Z
--- language: ko tags: - gpt2 license: cc-by-nc-sa-4.0 --- - This model was forked from [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2). - You can use this model in [Teachable-NLP](https://ainize.ai/teachable-nlp). For more details, see: https://github.com/SKT-AI/KoGPT2
armytun/GoodFoodPicker
armytun
2022-03-14T07:20:09Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-14T07:20:09Z
--- license: apache-2.0 ---
BAHIJA/bert-base-uncased-finetuned-sst2
BAHIJA
2022-03-14T05:48:26Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-14T04:52:41Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-sst2 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.9346330275229358 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.2745 - Accuracy: 0.9346 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1778 | 1.0 | 4210 | 0.3553 | 0.9060 | | 0.1257 | 2.0 | 8420 | 0.2745 | 0.9346 | | 0.0779 | 3.0 | 12630 | 0.3272 | 0.9300 | | 0.0655 | 4.0 | 16840 | 0.3412 | 0.9323 | | 0.0338 | 5.0 | 21050 | 0.3994 | 0.9300 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
katanaml/layoutlmv2-finetuned-cord
katanaml
2022-03-13T22:01:58Z
1,073
3
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "token-classification", "generated_from_trainer", "dataset:katanaml/cord", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-06T20:44:36Z
--- license: cc-by-nc-sa-4.0 datasets: - katanaml/cord tags: - generated_from_trainer model-index: - name: layoutlmv2-finetuned-cord results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-finetuned-cord This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the CORD dataset. ## Model description Model implementation code: [Sparrow](https://github.com/katanaml/sparrow) ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
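The card above pairs a linear scheduler with `lr_scheduler_warmup_ratio: 0.1` over 3000 training steps. A minimal pure-Python sketch of how such a schedule is typically computed (mirroring the shape of `transformers.get_linear_schedule_with_warmup`; the function below is illustrative, not the Trainer's exact implementation):

```python
def linear_warmup_lr(step, base_lr=5e-05, total_steps=3000, warmup_ratio=0.1):
    """Linear warmup to base_lr, then linear decay back to 0.

    Illustrative sketch of the schedule described in the card above;
    not the exact Hugging Face Trainer implementation.
    """
    warmup_steps = int(total_steps * warmup_ratio)  # 300 warmup steps here
    if step < warmup_steps:
        # ramp up linearly from 0 to base_lr over the warmup phase
        return base_lr * step / max(1, warmup_steps)
    # decay linearly from base_lr at warmup_steps down to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_warmup_lr(0))     # 0.0
print(linear_warmup_lr(300))   # peak: 5e-05
print(linear_warmup_lr(3000))  # 0.0
```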
Taekyoon/komrc_train
Taekyoon
2022-03-13T15:11:14Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:korquad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-13T12:22:58Z
--- tags: - generated_from_trainer datasets: - korquad model-index: - name: komrc_train results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # komrc_train This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the korquad dataset. It achieves the following results on the evaluation set: - Loss: 0.6544 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1234 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.8187 | 0.31 | 2000 | 0.7377 | | 0.6947 | 0.63 | 4000 | 0.6934 | | 0.6352 | 0.94 | 6000 | 0.6544 | | 0.3869 | 1.25 | 8000 | 0.7633 | | 0.3812 | 1.56 | 10000 | 0.7047 | | 0.3579 | 1.88 | 12000 | 0.7097 | | 0.2053 | 2.19 | 14000 | 0.8511 | | 0.2173 | 2.5 | 16000 | 0.8457 | | 0.2094 | 2.82 | 18000 | 0.8433 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.10.3
Ramu/distilbert-base-uncased-finetuned-emotion
Ramu
2022-03-13T14:27:54Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-13T01:55:37Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.9262005126757141 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2167 - Accuracy: 0.926 - F1: 0.9262 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8112 | 1.0 | 250 | 0.3147 | 0.903 | 0.8992 | | 0.2454 | 2.0 | 500 | 0.2167 | 0.926 | 0.9262 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.8.1+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
avorozhko/ruDialoGpt3-medium-finetuned-context
avorozhko
2022-03-13T11:41:17Z
8
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
## Model description This chatbot is the graduation thesis project of Andrey Vorozhko, a student at UII (University of Artificial Intelligence). Training was completed in March 2022. The chatbot is built on top of the [Kirili4ik/ruDialoGpt3-medium-finetuned-telegram](https://huggingface.co/Kirili4ik/ruDialoGpt3-medium-finetuned-telegram) model. The model has since been fine-tuned on 27,000 jokes (14 epochs, at a training speed of 2-6 hours per epoch in Colab) and can understand conversational context. However, the context has to be limited to the last few messages, because the more context there is, the slower the model runs, and the context snowballs over the course of a conversation. Inference is hosted in [spaces](https://huggingface.co/spaces/avorozhko/funbot), where you can talk to the bot. Context is limited to the last 10 messages. The bot does produce jokes, but for now more by accident than by design. Still, it can keep up a conversation and even be somewhat entertaining. Since this is text generation, the bot will always give different answers to the same phrase. A custom metric was also used to assess the quality of this model: the angular distance between the embeddings of y_train and the predictions.
That is, we took the model's first embedding layer, ran the predictions and the labels through it, and obtained word vectors. The word vectors were then summed to produce overall (summed) vectors for the labels and for the predictions. The smaller the angle between them, the better. The calculations used the cosine of this angle, which is convenient because cos 0 = 1: the closer the value is to 1, the better. The distribution of these values across epochs on the VALIDATION set (1,406 jokes) was as follows: ``` {1: tensor(0.9357, device='cuda:0', grad_fn=<DivBackward0>), 2: tensor(0.9390, device='cuda:0', grad_fn=<DivBackward0>), 3: tensor(0.9417, device='cuda:0', grad_fn=<DivBackward0>), 4: tensor(0.9439, device='cuda:0', grad_fn=<DivBackward0>), 5: tensor(0.9470, device='cuda:0', grad_fn=<DivBackward0>), 6: tensor(0.9537, device='cuda:0', grad_fn=<DivBackward0>), 7: tensor(0.9568, device='cuda:0', grad_fn=<DivBackward0>), 8: tensor(0.9592, device='cuda:0', grad_fn=<DivBackward0>), 9: tensor(0.9610, device='cuda:0', grad_fn=<DivBackward0>), 10: tensor(0.9622, device='cuda:0', grad_fn=<DivBackward0>), 11: tensor(0.9628, device='cuda:0', grad_fn=<DivBackward0>), 12: tensor(0.9632, device='cuda:0', grad_fn=<DivBackward0>), 13: tensor(0.9630, device='cuda:0', grad_fn=<DivBackward0>), 14: tensor(0.9634, device='cuda:0', grad_fn=<DivBackward0>), 15: tensor(0.9634, device='cuda:0', grad_fn=<DivBackward0>)} ``` Epoch 14, with a score of 0.9634, was chosen for inference. Beyond that, the model apparently starts to overfit.
cammy/bart-large-cnn-100-lit-evalMA-NOpad2
cammy
2022-03-13T11:11:08Z
3
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-13T10:56:35Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-100-lit-evalMA-NOpad2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-100-lit-evalMA-NOpad2 This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2126 - Rouge1: 25.6196 - Rouge2: 7.2753 - Rougel: 18.0987 - Rougelsum: 20.8416 - Gen Len: 67.3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | No log | 1.0 | 100 | 1.0890 | 23.5493 | 8.9875 | 17.1471 | 20.1643 | 67.8 | | No log | 2.0 | 200 | 1.2126 | 25.6196 | 7.2753 | 18.0987 | 20.8416 | 67.3 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
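The ROUGE-1 scores reported in summarization cards like the one above measure unigram overlap between a generated summary and a reference. A simplified sketch of the computation (real ROUGE implementations add stemming and other normalization that this illustration omits):

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Simplified ROUGE-1 F1: unigram overlap between candidate and reference.

    Illustrative only -- production ROUGE scorers also apply stemming
    and tokenization rules not reproduced here.
    """
    cand_tokens = candidate.lower().split()
    ref_tokens = reference.lower().split()
    if not cand_tokens or not ref_tokens:
        return 0.0
    # clipped overlap: each unigram counts at most as often as in the reference
    overlap = sum((Counter(cand_tokens) & Counter(ref_tokens)).values())
    precision = overlap / len(cand_tokens)
    recall = overlap / len(ref_tokens)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat", "the cat sat"))  # 1.0
print(rouge1_f("a dog ran", "the cat sat"))    # 0.0
```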
cammy/bart-large-cnn-1000-lit-evalMA-NOpad
cammy
2022-03-13T10:50:26Z
5
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-13T10:08:09Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-1000-lit-evalMA-NOpad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-1000-lit-evalMA-NOpad This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9804 - Rouge1: 27.2698 - Rouge2: 11.8561 - Rougel: 20.5948 - Rougelsum: 23.5497 - Gen Len: 67.67 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.5372 | 1.0 | 1000 | 1.7499 | 27.7275 | 12.7894 | 21.1334 | 24.4929 | 66.31 | | 0.7344 | 2.0 | 2000 | 1.9804 | 27.2698 | 11.8561 | 20.5948 | 23.5497 | 67.67 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
anasaqsme/distilbert-base-uncased-finetuned-squad
anasaqsme
2022-03-13T08:15:26Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
cammy/bart-large-cnn-weaksup-1000-NOpad-early
cammy
2022-03-13T05:51:27Z
3
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-13T05:36:31Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-weaksup-1000-NOpad-early results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-weaksup-1000-NOpad-early This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9082 - Rouge1: 26.9663 - Rouge2: 11.3027 - Rougel: 20.7327 - Rougelsum: 23.5965 - Gen Len: 67.19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.4775 | 1.0 | 1000 | 1.6796 | 27.208 | 12.01 | 20.8401 | 24.1333 | 66.06 | | 0.6972 | 2.0 | 2000 | 1.9082 | 26.9663 | 11.3027 | 20.7327 | 23.5965 | 67.19 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
cammy/bart-large-cnn-weaksup-100-NOpad-early
cammy
2022-03-13T05:24:09Z
3
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-13T05:23:53Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-weaksup-100-NOpad-early results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-weaksup-100-NOpad-early This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0768 - Rouge1: 28.7908 - Rouge2: 10.6989 - Rougel: 20.534 - Rougelsum: 24.1294 - Gen Len: 68.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 100 | 1.8905 | 31.1534 | 13.7074 | 21.6489 | 27.0709 | 64.2 | | No log | 2.0 | 200 | 2.0768 | 28.7908 | 10.6989 | 20.534 | 24.1294 | 68.5 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
khavitidala/xlmroberta-large-fine-tuned-indo-hoax-classification
khavitidala
2022-03-13T02:01:19Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "exbert", "multilingual", "arxiv:1911.02116", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-12T12:40:20Z
--- tags: - exbert language: multilingual inference: true license: mit --- # Fine-tuned version of XLM-RoBERTa (large-sized model) fine-tuned by Ryan Abdurohman # XLM-RoBERTa (large-sized model) XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/xlmr). Disclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs.
## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlm-roberta) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2. ## Usage You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='xlm-roberta-large') >>> unmasker("Hello I'm a <mask> model.") [{'score': 0.10563907772302628, 'sequence': "Hello I'm a fashion model.", 'token': 54543, 'token_str': 'fashion'}, {'score': 0.08015287667512894, 'sequence': "Hello I'm a new model.", 'token': 3525, 'token_str': 'new'}, {'score': 0.033413201570510864, 'sequence': "Hello I'm a model model.", 'token': 3299, 'token_str': 'model'}, {'score': 0.030217764899134636, 'sequence': "Hello I'm a French model.", 'token': 92265, 'token_str': 'French'}, {'score': 0.026436051353812218, 'sequence': "Hello I'm a sexy model.", 'token': 17473, 'token_str': 'sexy'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large') model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-large") # prepare input text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt') # forward pass output = model(**encoded_input) ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1911-02116, author = {Alexis Conneau and Kartikay Khandelwal and Naman Goyal and Vishrav Chaudhary and Guillaume Wenzek and Francisco Guzm{\'{a}}n and Edouard Grave and Myle Ott and Luke Zettlemoyer and Veselin Stoyanov}, title = {Unsupervised Cross-lingual Representation Learning at Scale}, journal = {CoRR}, volume = {abs/1911.02116}, year = {2019}, url = {http://arxiv.org/abs/1911.02116}, eprinttype = {arXiv}, eprint = {1911.02116}, timestamp = {Mon, 11 Nov 2019 18:38:09 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=xlm-roberta-base"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
willcai/wav2vec2_common_voice_accents
willcai
2022-03-13T01:55:11Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-10T21:28:18Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2_common_voice_accents results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2_common_voice_accents This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9095 - Wer: 0.4269 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.0135 | 5.33 | 400 | 1.3259 | 0.8067 | | 0.5608 | 10.67 | 800 | 0.7832 | 0.5024 | | 0.1441 | 16.0 | 1200 | 0.9309 | 0.4698 | | 0.0724 | 21.33 | 1600 | 0.9750 | 0.4461 | | 0.0444 | 26.67 | 2000 | 0.9095 | 0.4269 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.4 - Tokenizers 0.11.6
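The WER of 0.4269 reported above is the word error rate: the minimum number of word-level substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the reference length. A minimal sketch of the standard dynamic-programming computation (libraries such as `jiwer` implement this in practice):

```python
def wer(reference, hypothesis):
    """Word error rate via Levenshtein distance over words (illustrative sketch)."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quick brown fox"))  # 0.0
print(wer("the quick brown fox", "the quik brown fox"))   # 0.25
```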
cammy/bart-large-cnn-weaksup-original-100k
cammy
2022-03-13T00:10:30Z
3
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-12T12:19:39Z
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-large-cnn-weaksup-original-100k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-weaksup-original-100k This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5931 - Rouge1: 30.4429 - Rouge2: 15.6691 - Rougel: 24.1975 - Rougelsum: 27.4761 - Gen Len: 68.4568 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.261 | 1.0 | 100000 | 1.5931 | 30.4429 | 15.6691 | 24.1975 | 27.4761 | 68.4568 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
richielo/small-e-czech-finetuned-ner-wikiann
richielo
2022-03-12T20:18:42Z
12,031
2
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "token-classification", "generated_from_trainer", "dataset:wikiann", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-12T17:57:32Z
--- license: cc-by-4.0 tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy model-index: - name: small-e-czech-finetuned-ner-wikiann results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann args: cs metrics: - name: Precision type: precision value: 0.8713322894683097 - name: Recall type: recall value: 0.8970423324922905 - name: F1 type: f1 value: 0.8840004144075699 - name: Accuracy type: accuracy value: 0.9557089381093997 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-e-czech-finetuned-ner-wikiann This model is a fine-tuned version of [Seznam/small-e-czech](https://huggingface.co/Seznam/small-e-czech) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.2547 - Precision: 0.8713 - Recall: 0.8970 - F1: 0.8840 - Accuracy: 0.9557 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2924 | 1.0 | 2500 | 0.2449 | 0.7686 | 0.8088 | 0.7882 | 0.9320 | | 0.2042 | 2.0 | 5000 | 0.2137 | 0.8050 | 0.8398 | 0.8220 | 0.9400 | | 0.1699 | 3.0 | 7500 | 0.1912 | 0.8236 | 0.8593 | 0.8411 | 0.9466 | | 0.1419 | 4.0 | 10000 | 0.1931 | 0.8349 | 0.8671 | 0.8507 | 0.9488 | | 0.1316 | 5.0 | 12500 | 0.1892 | 0.8470 | 0.8776 | 
0.8620 | 0.9519 | | 0.1042 | 6.0 | 15000 | 0.2058 | 0.8433 | 0.8811 | 0.8618 | 0.9508 | | 0.0884 | 7.0 | 17500 | 0.2020 | 0.8602 | 0.8849 | 0.8724 | 0.9531 | | 0.0902 | 8.0 | 20000 | 0.2118 | 0.8551 | 0.8837 | 0.8692 | 0.9528 | | 0.0669 | 9.0 | 22500 | 0.2171 | 0.8634 | 0.8906 | 0.8768 | 0.9550 | | 0.0529 | 10.0 | 25000 | 0.2228 | 0.8638 | 0.8912 | 0.8773 | 0.9545 | | 0.0613 | 11.0 | 27500 | 0.2293 | 0.8626 | 0.8898 | 0.8760 | 0.9544 | | 0.0549 | 12.0 | 30000 | 0.2276 | 0.8694 | 0.8958 | 0.8824 | 0.9554 | | 0.0516 | 13.0 | 32500 | 0.2384 | 0.8717 | 0.8940 | 0.8827 | 0.9552 | | 0.0412 | 14.0 | 35000 | 0.2443 | 0.8701 | 0.8931 | 0.8815 | 0.9554 | | 0.0345 | 15.0 | 37500 | 0.2464 | 0.8723 | 0.8958 | 0.8839 | 0.9557 | | 0.0412 | 16.0 | 40000 | 0.2477 | 0.8705 | 0.8948 | 0.8825 | 0.9552 | | 0.0363 | 17.0 | 42500 | 0.2525 | 0.8742 | 0.8973 | 0.8856 | 0.9559 | | 0.0341 | 18.0 | 45000 | 0.2529 | 0.8727 | 0.8962 | 0.8843 | 0.9561 | | 0.0194 | 19.0 | 47500 | 0.2533 | 0.8699 | 0.8966 | 0.8830 | 0.9557 | | 0.0247 | 20.0 | 50000 | 0.2547 | 0.8713 | 0.8970 | 0.8840 | 0.9557 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
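The precision/recall/F1 columns above are entity-level scores of the kind `seqeval` computes for token classification. A minimal sketch of entity-level scoring over BIO tag sequences (illustrative, not the card's actual evaluation code; it also simplifies `seqeval`'s handling of malformed stray `I-` prefixes by ignoring them):

```python
def extract_spans(tags):
    """Collect (label, start, end) entity spans from one BIO tag sequence."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # "O" sentinel flushes the last span
        if tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and tag[2:] != label):
            if label is not None:
                spans.append((label, start, i))
            # a new span only starts on B-; stray I- tags are ignored here
            start, label = (i, tag[2:]) if tag.startswith("B-") else (None, None)
    return spans

def entity_f1(true_tags, pred_tags):
    """Entity-level P/R/F1: a prediction counts only if label AND boundaries match."""
    true_spans = set(extract_spans(true_tags))
    pred_spans = set(extract_spans(pred_tags))
    tp = len(true_spans & pred_spans)
    precision = tp / len(pred_spans) if pred_spans else 0.0
    recall = tp / len(true_spans) if true_spans else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Note this is stricter than token accuracy: a span with the right label but an off-by-one boundary counts as both a false positive and a false negative.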
ABIINNOVATIONS/Filmstack
ABIINNOVATIONS
2022-03-12T18:53:50Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-12T18:53:50Z
--- license: apache-2.0 ---
rocca/informative-drawings-line-art-onnx
rocca
2022-03-12T17:59:37Z
0
0
null
[ "onnx", "region:us" ]
null
2022-03-12T17:52:02Z
All credit to this repo: https://huggingface.co/spaces/carolineec/informativedrawings JavaScript/browser demo here: https://github.com/josephrocca/image-to-line-art-js
Babygirl/Daddy
Babygirl
2022-03-12T17:48:58Z
0
1
null
[ "license:artistic-2.0", "region:us" ]
null
2022-03-12T17:48:58Z
--- license: artistic-2.0 ---
Sakil/Humanoid_robot
Sakil
2022-03-12T17:47:41Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-12T17:47:41Z
--- license: apache-2.0 ---
Taekyoon/neg_komrc_train
Taekyoon
2022-03-12T16:36:37Z
13
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: neg_komrc_train results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # neg_komrc_train This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4016 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1234 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.277 | 0.51 | 10000 | 0.4016 | | 0.1671 | 1.03 | 20000 | 0.4116 | | 0.1725 | 1.54 | 30000 | 0.4390 | | 0.0868 | 2.06 | 40000 | 0.5147 | | 0.0868 | 2.57 | 50000 | 0.5064 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.10.3

StivenLancheros/Biobert-base-cased-v1.2-finetuned-ner-CRAFT_es_en
StivenLancheros
2022-03-12T11:40:00Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-11T20:09:49Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: Biobert-base-cased-v1.2-finetuned-ner-CRAFT_es_en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Biobert-base-cased-v1.2-finetuned-ner-CRAFT_es_en This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the CRAFT dataset. It achieves the following results on the evaluation set: - Loss: 0.1811 - Precision: 0.8555 - Recall: 0.8539 - F1: 0.8547 - Accuracy: 0.9706 ## Model description This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the [CRAFT](https://github.com/UCDenver-ccp/CRAFT/releases) (Colorado Richly Annotated Full Text) Corpus in Spanish and English. Entity tags have been normalized: the original three-letter codes were replaced with full names, e.g. B-Protein, I-Chemical. 
## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.052 | 1.0 | 1360 | 0.1413 | 0.8300 | 0.8442 | 0.8370 | 0.9677 | | 0.0199 | 2.0 | 2720 | 0.1673 | 0.8461 | 0.8458 | 0.8459 | 0.9689 | | 0.011 | 3.0 | 4080 | 0.1647 | 0.8588 | 0.8528 | 0.8558 | 0.9704 | | 0.0031 | 4.0 | 5440 | 0.1811 | 0.8555 | 0.8539 | 0.8547 | 0.9706 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
Splend1dchan/deberta-large-slue-goldtrascription-e50
Splend1dchan
2022-03-12T10:30:29Z
2
0
transformers
[ "transformers", "pytorch", "deberta", "endpoints_compatible", "region:us" ]
null
2022-03-12T03:52:10Z
DeBERTa-large trained on SLUE gold transcriptions for 50 epochs, lr = 5e-6.
sanchit-gandhi/wav2vec2-2-rnd-2-layer-bart
sanchit-gandhi
2022-03-12T03:02:56Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "dataset:librispeech_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-10T20:56:10Z
--- tags: - generated_from_trainer datasets: - librispeech_asr model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model was trained from scratch on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 4.6263 - Wer: 0.8568 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 5.9849 | 1.68 | 1500 | 5.9623 | 1.1028 | | 5.1696 | 3.36 | 3000 | 5.5504 | 1.6345 | | 4.1412 | 5.04 | 4500 | 5.3853 | 1.3565 | | 2.7226 | 6.73 | 6000 | 5.3072 | 0.9908 | | 3.2607 | 8.41 | 7500 | 5.4121 | 1.2854 | | 2.4017 | 10.09 | 9000 | 5.1094 | 1.0303 | | 1.7361 | 11.77 | 10500 | 4.8928 | 0.9506 | | 2.0638 | 13.45 | 12000 | 4.8352 | 0.9127 | | 1.2832 | 15.13 | 13500 | 4.7271 | 0.9103 | | 1.0439 | 16.82 | 15000 | 4.5980 | 0.8720 | | 0.4112 | 18.5 | 16500 | 4.6263 | 0.8568 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
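The card lists `lr_scheduler_type: linear` with `lr_scheduler_warmup_steps: 1000`: the learning rate ramps up linearly over the warmup steps, then decays linearly to zero. A sketch of that schedule (it mirrors, but is not copied from, `transformers`' `get_linear_schedule_with_warmup`; the total step count below is a made-up example, not the run's actual value):

```python
def linear_lr_with_warmup(step: int, warmup_steps: int, total_steps: int, base_lr: float) -> float:
    """Linear warmup from 0 to base_lr, then linear decay back to 0."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = (total_steps - step) / max(1, total_steps - warmup_steps)
    return base_lr * max(0.0, remaining)
```

With the card's `learning_rate: 3e-05` and 1000 warmup steps, the peak of 3e-5 is reached exactly at step 1000 and the rate falls off linearly afterwards.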
huggingtweets/thed3linquent_
huggingtweets
2022-03-11T22:57:28Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-11T22:57:19Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1502166273064517632/RdLwNuR6_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">rogue⛓🐕|| BIRFDAY BOY</div> <div style="text-align: center; font-size: 14px;">@thed3linquent_</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from rogue⛓🐕|| BIRFDAY BOY. 
| Data | rogue⛓🐕|| BIRFDAY BOY | | --- | --- | | Tweets downloaded | 3246 | | Retweets | 334 | | Short tweets | 710 | | Tweets kept | 2202 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1tal3g38/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thed3linquent_'s tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1aw76tml) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1aw76tml/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/thed3linquent_') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Ayham/albert_ernie_50beam_summarization_cnn_dailymail
Ayham
2022-03-11T21:58:56Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-11T14:33:54Z
--- tags: - generated_from_trainer datasets: - cnn_dailymail model-index: - name: albert_ernie_summarization_cnn_dailymail results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert_ernie_summarization_cnn_dailymail This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.10.3
GroNLP/wav2vec2-dutch-base
GroNLP
2022-03-11T16:04:18Z
58
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "speech", "nl", "endpoints_compatible", "region:us" ]
null
2022-03-11T15:43:01Z
--- language: nl tags: - speech --- # Wav2Vec2-Dutch-Base A Dutch Wav2Vec2 model. This model is created by further pre-training the original English [`facebook/wav2vec2-base`](https://huggingface.co/facebook/wav2vec2-base) model on Dutch speech from [Het Corpus Gesproken Nederlands](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/). This model is one of two Dutch Wav2Vec2 models: - [`GroNLP/wav2vec2-dutch-base`](https://huggingface.co/GroNLP/wav2vec2-dutch-base) (this model) - [`GroNLP/wav2vec2-dutch-large`](https://huggingface.co/GroNLP/wav2vec2-dutch-large)
GroNLP/wav2vec2-dutch-large
GroNLP
2022-03-11T16:04:07Z
14
2
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "speech", "nl", "endpoints_compatible", "region:us" ]
null
2022-03-11T15:41:51Z
--- language: nl tags: - speech --- # Wav2Vec2-Dutch-Large A Dutch Wav2Vec2 model. This model is created by further pre-training the original English [`facebook/wav2vec2-large`](https://huggingface.co/facebook/wav2vec2-large) model on Dutch speech from [Het Corpus Gesproken Nederlands](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/). This model is one of two Dutch Wav2Vec2 models: - [`GroNLP/wav2vec2-dutch-base`](https://huggingface.co/GroNLP/wav2vec2-dutch-base) - [`GroNLP/wav2vec2-dutch-large`](https://huggingface.co/GroNLP/wav2vec2-dutch-large) (this model)
anton-l/xtreme_s_xlsr_minds14_fr
anton-l
2022-03-11T13:39:16Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "automatic-speech-recognition", "google/xtreme_s", "generated_from_trainer", "dataset:xtreme_s", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-08T20:17:30Z
--- license: apache-2.0 tags: - automatic-speech-recognition - google/xtreme_s - generated_from_trainer datasets: - xtreme_s metrics: - accuracy model-index: - name: xtreme_s_xlsr_minds14_fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xtreme_s_xlsr_minds14_fr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14.FR-FR dataset. It achieves the following results on the evaluation set: - Loss: 0.3922 - Accuracy: 0.9135 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 64 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9751 | 10.0 | 50 | 2.0203 | 0.3462 | | 0.4275 | 20.0 | 100 | 0.7434 | 0.7981 | | 0.2484 | 30.0 | 150 | 0.7686 | 0.8462 | | 0.0263 | 40.0 | 200 | 0.3922 | 0.9135 | | 0.0118 | 50.0 | 250 | 0.4859 | 0.9038 | ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4.dev0 - Tokenizers 0.11.6
leftthomas/resnet50
leftthomas
2022-03-11T12:53:14Z
83
0
transformers
[ "transformers", "pytorch", "resnet", "image-classification", "custom_code", "dataset:imagenet", "arxiv:1512.03385", "license:afl-3.0", "autotrain_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification - resnet license: afl-3.0 datasets: - imagenet widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # ResNet-50 Pretrained model on [ImageNet](http://www.image-net.org/). The ResNet architecture was introduced in [this paper](https://arxiv.org/abs/1512.03385). ## Intended uses You can use the raw model to classify images into one of the 1,000 ImageNet classes, but you can also change its head to fine-tune it on a downstream task (another classification task with different labels, image segmentation or object detection, to name a few). ## Evaluation results This model has a top-1 accuracy of 76.13% and a top-5 accuracy of 92.86% on the evaluation set of ImageNet.
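The top-1/top-5 figures above can be computed from per-image class scores as follows. This is an illustrative sketch, not the card's actual evaluation code:

```python
def topk_accuracy(logits, labels, k=5):
    """Fraction of examples whose true label is among the k highest-scoring classes."""
    hits = 0
    for scores, label in zip(logits, labels):
        # indices of the k largest scores, highest first
        topk = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)
```

Top-5 accuracy (92.86% here) is necessarily at least as high as top-1 (76.13%), since the top-1 prediction is always among the top 5.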
ratishsp/SeqPlan-RotoWire
ratishsp
2022-03-11T12:26:18Z
0
0
null
[ "arxiv:2202.13756", "region:us" ]
null
2022-03-11T12:20:13Z
This repo contains the model for [Data-to-text Generation with Variational Sequential Planning](https://arxiv.org/abs/2202.13756) (Ratish Puduppully and Yao Fu and Mirella Lapata; In Transactions of the Association for Computational Linguistics (TACL)). This model is trained on the [RotoWire dataset](https://github.com/harvardnlp/boxscore-data). The code is available in the GitHub [repo](https://github.com/ratishsp/data2text-seq-plan-py). ## Citation ``` @article{puduppully-2021-seq-plan, author = {Ratish Puduppully and Yao Fu and Mirella Lapata}, title = {Data-to-text Generation with Variational Sequential Planning}, journal = {Transactions of the Association for Computational Linguistics (to appear)}, url = {https://arxiv.org/abs/2202.13756}, year = {2022} } ``` ## License The model is available under the MIT License.
ratishsp/SeqPlan-MLB
ratishsp
2022-03-11T12:08:06Z
0
0
null
[ "arxiv:2202.13756", "region:us" ]
null
2022-03-11T11:54:01Z
This repo contains the model for [Data-to-text Generation with Variational Sequential Planning](https://arxiv.org/abs/2202.13756) (Ratish Puduppully and Yao Fu and Mirella Lapata; In Transactions of the Association for Computational Linguistics (TACL)). This model is trained on the [MLB dataset](https://huggingface.co/datasets/GEM/mlb_data_to_text). The code is available in the GitHub [repo](https://github.com/ratishsp/data2text-seq-plan-py). ## Citation ``` @article{puduppully-2021-seq-plan, author = {Ratish Puduppully and Yao Fu and Mirella Lapata}, title = {Data-to-text Generation with Variational Sequential Planning}, journal = {Transactions of the Association for Computational Linguistics (to appear)}, url = {https://arxiv.org/abs/2202.13756}, year = {2022} } ``` ## License The model is available under the MIT License.
cammy/bart-large-cnn-100k-lit-evalMA
cammy
2022-03-11T10:34:13Z
5
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-10T04:44:55Z
--- license: mit tags: - generated_from_trainer model-index: - name: bart-large-cnn-100k-lit-evalMA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-100k-lit-evalMA This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.7715 - eval_rouge1: 29.7037 - eval_rouge2: 15.0234 - eval_rougeL: 23.5169 - eval_rougeLsum: 26.8682 - eval_gen_len: 68.1209 - eval_runtime: 28898.0987 - eval_samples_per_second: 0.346 - eval_steps_per_second: 0.346 - epoch: 1.0 - step: 100000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
waboucay/french-camembert-postag-model-finetuned-perceo
waboucay
2022-03-11T09:37:32Z
4
2
transformers
[ "transformers", "pytorch", "camembert", "token-classification", "pos-tagging", "fr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - fr tags: - pos-tagging --- ## Eval results We obtain the following results on `validation` and `test` sets: | Set | F1<sub>micro</sub> | F1<sub>macro</sub> | |------------|--------------------|--------------------| | validation | 98.2 | 93.2 | | test | 97.7 | 87.4 |
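The gap between the micro and macro columns above comes from how the two averages pool per-class counts: micro-F1 pools true/false positives across all tags (so frequent tags dominate), while macro-F1 averages each tag's F1 with equal weight (so rare tags pull it down). A minimal sketch for single-label tagging — illustrative, not the evaluation code used for this model:

```python
from collections import Counter

def micro_macro_f1(true_labels, pred_labels):
    """Return (micro_f1, macro_f1) for a flat list of gold and predicted tags."""
    classes = set(true_labels) | set(pred_labels)
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(true_labels, pred_labels):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # wrong prediction is a false positive for the predicted tag
            fn[t] += 1  # ...and a false negative for the gold tag
    micro_tp, micro_fp, micro_fn = sum(tp.values()), sum(fp.values()), sum(fn.values())
    micro = 2 * micro_tp / (2 * micro_tp + micro_fp + micro_fn)
    def per_class_f1(c):
        denom = 2 * tp[c] + fp[c] + fn[c]
        return 2 * tp[c] / denom if denom else 0.0
    macro = sum(per_class_f1(c) for c in classes) / len(classes)
    return micro, macro
```

Note that for single-label classification like POS tagging, micro-F1 equals plain accuracy, which is why the micro column sits well above the macro one when some tags are rare.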
everdoubling/byt5-Korean-large
everdoubling
2022-03-11T09:16:25Z
10
2
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "dataset:mc4", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-04T09:03:25Z
---
datasets:
- mc4
license: apache-2.0
---

# ByT5-Korean - large

ByT5-Korean is a Korean-specific extension of Google's [ByT5](https://github.com/google-research/byt5).

A Korean syllable has three components (called Jamo): a beginning consonant, a middle vowel, and an optional final consonant; they are like individual characters of an alphabet. While ByT5's utf-8 encoding allows generic encoding for multiple languages, it is unnatural for Korean because it splits the bit representation of each Jamo across byte boundaries. ByT5-Korean extends ByT5's utf-8 encoding with special care for Korean syllables; each Jamo is represented with an extra token. ByT5-Korean was pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) with 70% Korean and 30% English.

## Encoding Scheme

```text
id: token
0: <pad>
1: <eos>
2: <unk>
3~258: utf-8 encoding
259~277: beginning consonants(초성), 19개(ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ)
278~298: middle vowels(중성), 21개(ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ)
299~326: final consonants(종성), 무종성+27개(ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ)
327~384: from <extra_id_0> to <extra_id_57>
```

## Example Inference

```python
import torch
from tokenizer import ByT5KoreanTokenizer  # https://huggingface.co/everdoubling/byt5-Korean-large/blob/main/tokenizer.py
from transformers import T5ForConditionalGeneration

tokenizer_jamo = ByT5KoreanTokenizer()
model = T5ForConditionalGeneration.from_pretrained('everdoubling/byt5-Korean-large')

input_sentence = '한국어 위키백과(영어: Korean Wikipedia)는 한국어로 운영되는 위키백과의 다언어판 가운데 하나로서, 2002년 10월 11일에 <extra_id_0>. 또한 현재 한국어 위키백과에는 넘겨주기, 토론, 그림 등 페이지로 불리는 모든 문서를 포함하면 총 2,629,860개가 <extra_id_1>되어 있으며, 넘겨주기를 포함한 일반 문서 수는 1,278,560개,[1] 그중 넘겨주기, 막다른 문서를 제외한 일반 문서 수는 573,149개이다.'

input_ids_jamo = tokenizer_jamo(input_sentence).input_ids
outputs_jamo = model.generate(torch.tensor([input_ids_jamo]))
print(tokenizer_jamo.decode(outputs_jamo[0]))
```

Additional information coming soon...
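The encoding scheme described above assigns each Jamo of a syllable its own token id (259+initial, 278+medial, 299+final, with final index 0 meaning "no final consonant"). A minimal sketch of that mapping using the standard Unicode Hangul decomposition — illustrative only; the repo's actual `tokenizer.py` may differ, e.g. in how non-syllable characters fall back to plain utf-8 byte tokens:

```python
def jamo_token_ids(syllable: str):
    """Map one precomposed Hangul syllable to three token ids under the card's scheme."""
    code = ord(syllable) - 0xAC00  # Hangul syllables start at U+AC00
    assert 0 <= code < 11172, "not a precomposed Hangul syllable"
    # Unicode composition: code = (initial * 21 + medial) * 28 + final
    initial, rest = divmod(code, 21 * 28)
    medial, final = divmod(rest, 28)
    return [259 + initial, 278 + medial, 299 + final]
```

For example, 한 decomposes into initial ㅎ (index 18), medial ㅏ (index 0), and final ㄴ (index 4), giving token ids [277, 278, 303].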