pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1–900k) | metadata (stringlengths 2–438k) | id (stringlengths 5–122) | last_modified (null) | tags (listlengths 1–1.84k) | sha (null) | created_at (stringlengths 25) | arxiv (listlengths 0–201) | languages (listlengths 0–1.83k) | tags_str (stringlengths 17–9.34k) | text_str (stringlengths 0–389k) | text_lists (listlengths 0–722) | processed_texts (listlengths 1–723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-classification
|
transformers
|
# About this model: Topical Change Detection in Documents
This network has been fine-tuned for the task described in the paper *Topical Change Detection in Documents via Embeddings of Long Sequences* and is our best-performing base-transformer model. You can find more detailed information on our GitHub page for the paper [here](https://github.com/dennlinger/TopicalChange), or read the [paper itself](https://arxiv.org/abs/2012.03619). The weights are based on RoBERTa-base.
# Load the model
The preferred way is through pipelines
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="dennlinger/roberta-cls-consec")
pipe("{First paragraph} [SEP] {Second paragraph}")
```
# Input Format
The model expects two segments that are separated with the `[SEP]` token. In our training setup, we had entire paragraphs as samples (or up to 512 tokens across two paragraphs), specifically trained on a Terms of Service data set. Note that this might lead to poor performance on "general" topics, such as news articles or Wikipedia.
# Training objective
The training task is to determine whether two text segments (paragraphs) belong to the same topical section or not. This can be utilized to create a topical segmentation of a document by consecutively predicting the "coherence" of two segments.
If you are experimenting via the Huggingface Model API, the following are interpretations of the `LABEL`s:
* `LABEL_0`: Two input segments separated by `[SEP]` do *not* belong to the same topic.
* `LABEL_1`: Two input segments separated by `[SEP]` do belong to the same topic.
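As a small illustration (our own sketch, not part of the original card; the placeholder paragraphs are yours to replace), the pipeline output can be mapped back to these interpretations:
```python
from transformers import pipeline

pipe = pipeline("text-classification", model="dennlinger/roberta-cls-consec")

# Map the raw labels to the interpretation given above.
label_meaning = {
    "LABEL_0": "different topic",  # segments do *not* belong to the same topic
    "LABEL_1": "same topic",       # segments belong to the same topic
}

result = pipe("{First paragraph} [SEP] {Second paragraph}")[0]
print(label_meaning[result["label"]], result["score"])
```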
# Performance
The results of this model can be found in the paper. We average over models from five different random seeds, which is why the specific results for this model might be different from the exact values in the paper.
Note that this model is *not* trained to work on classifying single texts, but only works with two (separated) inputs.
|
{}
|
dennlinger/roberta-cls-consec
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"text-classification",
"arxiv:2012.03619",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2012.03619"
] |
[] |
TAGS
#transformers #pytorch #jax #safetensors #roberta #text-classification #arxiv-2012.03619 #autotrain_compatible #endpoints_compatible #region-us
|
# About this model: Topical Change Detection in Documents
This network has been fine-tuned for the task described in the paper *Topical Change Detection in Documents via Embeddings of Long Sequences* and is our best-performing base-transformer model. You can find more detailed information in our GitHub page for the paper here, or read the paper itself. The weights are based on RoBERTa-base.
# Load the model
The preferred way is through pipelines
# Input Format
The model expects two segments that are separated with the '[SEP]' token. In our training setup, we had entire paragraphs as samples (or up to 512 tokens across two paragraphs), specifically trained on a Terms of Service data set. Note that this might lead to poor performance on "general" topics, such as news articles or Wikipedia.
# Training objective
The training task is to determine whether two text segments (paragraphs) belong to the same topical section or not. This can be utilized to create a topical segmentation of a document by consecutively predicting the "coherence" of two segments.
If you are experimenting via the Huggingface Model API, the following are interpretations of the 'LABEL's:
* 'LABEL_0': Two input segments separated by '[SEP]' do *not* belong to the same topic.
* 'LABEL_1': Two input segments separated by '[SEP]' do belong to the same topic.
# Performance
The results of this model can be found in the paper. We average over models from five different random seeds, which is why the specific results for this model might be different from the exact values in the paper.
Note that this model is *not* trained to work on classifying single texts, but only works with two (separated) inputs.
|
[
"# About this model: Topical Change Detection in Documents\nThis network has been fine-tuned for the task described in the paper *Topical Change Detection in Documents via Embeddings of Long Sequences* and is our best-performing base-transformer model. You can find more detailed information in our GitHub page for the paper here, or read the paper itself. The weights are based on RoBERTa-base.",
"# Load the model\nThe preferred way is through pipelines",
"# Input Format\nThe model expects two segments that are separated with the '[SEP]' token. In our training setup, we had entire paragraphs as samples (or up to 512 tokens across two paragraphs), specifically trained on a Terms of Service data set. Note that this might lead to poor performance on \"general\" topics, such as news articles or Wikipedia.",
"# Training objective\nThe training task is to determine whether two text segments (paragraphs) belong to the same topical section or not. This can be utilized to create a topical segmentation of a document by consecutively predicting the \"coherence\" of two segments. \nIf you are experimenting via the Huggingface Model API, the following are interpretations of the 'LABEL's:\n* 'LABEL_0': Two input segments separated by '[SEP]' do *not* belong to the same topic.\n* 'LABEL_1': Two input segments separated by '[SEP]' do belong to the same topic.",
"# Performance\nThe results of this model can be found in the paper. We average over models from five different random seeds, which is why the specific results for this model might be different from the exact values in the paper.\n\nNote that this model is *not* trained to work on classifying single texts, but only works with two (separated) inputs."
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #roberta #text-classification #arxiv-2012.03619 #autotrain_compatible #endpoints_compatible #region-us \n",
"# About this model: Topical Change Detection in Documents\nThis network has been fine-tuned for the task described in the paper *Topical Change Detection in Documents via Embeddings of Long Sequences* and is our best-performing base-transformer model. You can find more detailed information in our GitHub page for the paper here, or read the paper itself. The weights are based on RoBERTa-base.",
"# Load the model\nThe preferred way is through pipelines",
"# Input Format\nThe model expects two segments that are separated with the '[SEP]' token. In our training setup, we had entire paragraphs as samples (or up to 512 tokens across two paragraphs), specifically trained on a Terms of Service data set. Note that this might lead to poor performance on \"general\" topics, such as news articles or Wikipedia.",
"# Training objective\nThe training task is to determine whether two text segments (paragraphs) belong to the same topical section or not. This can be utilized to create a topical segmentation of a document by consecutively predicting the \"coherence\" of two segments. \nIf you are experimenting via the Huggingface Model API, the following are interpretations of the 'LABEL's:\n* 'LABEL_0': Two input segments separated by '[SEP]' do *not* belong to the same topic.\n* 'LABEL_1': Two input segments separated by '[SEP]' do belong to the same topic.",
"# Performance\nThe results of this model can be found in the paper. We average over models from five different random seeds, which is why the specific results for this model might be different from the exact values in the paper.\n\nNote that this model is *not* trained to work on classifying single texts, but only works with two (separated) inputs."
] |
question-answering
|
transformers
|
# Bilingual English + German SQuAD2.0
We created German SQuAD 2.0 (**deQuAD 2.0**) and merged it with [**SQuAD2.0**](https://rajpurkar.github.io/SQuAD-explorer/) into combined English and German training data for question answering. The [**bert-base-multilingual-cased**](https://github.com/google-research/bert/blob/master/multilingual.md) model is used to fine-tune the bilingual QA downstream task.
## Details of deQuAD 2.0
[**SQuAD2.0**](https://rajpurkar.github.io/SQuAD-explorer/) was auto-translated into German. We hired professional editors to proofread the translated transcripts, correct mistakes and double check the answers to further polish the text and enhance annotation quality. The final German deQuAD dataset contains **130k** training and **11k** test samples.
## Overview
- **Language model:** bert-base-multilingual-cased
- **Language:** German, English
- **Training data:** deQuAD2.0 + SQuAD2.0 training set
- **Evaluation data:** SQuAD2.0 test set; deQuAD2.0 test set
- **Infrastructure:** 8xV100 GPU
- **Published**: July 9th, 2021
## Evaluation on English SQuAD2.0
```
HasAns_exact = 85.79622132253711
HasAns_f1 = 90.92004586077663
HasAns_total = 5928
NoAns_exact = 94.76871320437343
NoAns_f1 = 94.76871320437343
NoAns_total = 5945
exact = 90.28889076054915
f1 = 92.84713483219753
total = 11873
```
## Evaluation on German deQuAD2.0
```
HasAns_exact = 63.80526406330638
HasAns_f1 = 72.47269140789888
HasAns_total = 5813
NoAns_exact = 82.0291893792861
NoAns_f1 = 82.0291893792861
NoAns_total = 5687
exact = 72.81739130434782
f1 = 77.19858740470603
total = 11500
```
## Use Model in Pipeline
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="deutsche-telekom/bert-multi-english-german-squad2",
tokenizer="deutsche-telekom/bert-multi-english-german-squad2"
)
contexts = ["Die Allianz Arena ist ein Fußballstadion im Norden von München und bietet bei Bundesligaspielen 75.021 Plätze, zusammengesetzt aus 57.343 Sitzplätzen, 13.794 Stehplätzen, 1.374 Logenplätzen, 2.152 Business Seats und 966 Sponsorenplätzen. In der Allianz Arena bestreitet der FC Bayern München seit der Saison 2005/06 seine Heimspiele. Bis zum Saisonende 2017 war die Allianz Arena auch Spielstätte des TSV 1860 München.",
"Harvard is a large, highly residential research university. It operates several arts, cultural, and scientific museums, alongside the Harvard Library, which is the world's largest academic and private library system, comprising 79 individual libraries with over 18 million volumes. "]
questions = ["Wo befindet sich die Allianz Arena?",
"What is the worlds largest academic and private library system?"]
qa_pipeline(context=contexts, question=questions)
```
## Output
```json
[{'score': 0.7290093898773193,
'start': 44,
'end': 62,
'answer': 'Norden von München'},
{'score': 0.7979822754859924,
'start': 134,
'end': 149,
'answer': 'Harvard Library'}]
```
## License - The MIT License
Copyright (c) 2021 Fang Xu, Deutsche Telekom AG
|
{"language": ["de", "en", "multilingual"], "license": "mit", "tags": ["english", "german"]}
|
deutsche-telekom/bert-multi-english-german-squad2
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"english",
"german",
"de",
"en",
"multilingual",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de",
"en",
"multilingual"
] |
TAGS
#transformers #pytorch #safetensors #bert #question-answering #english #german #de #en #multilingual #license-mit #endpoints_compatible #has_space #region-us
|
# Bilingual English + German SQuAD2.0
We created German Squad 2.0 (deQuAD 2.0) and merged with SQuAD2.0 into an English and German training data for question answering. The bert-base-multilingual-cased is used to fine-tune bilingual QA downstream task.
## Details of deQuAD 2.0
SQuAD2.0 was auto-translated into German. We hired professional editors to proofread the translated transcripts, correct mistakes and double check the answers to further polish the text and enhance annotation quality. The final German deQuAD dataset contains 130k training and 11k test samples.
## Overview
- Language model: bert-base-multilingual-cased
- Language: German, English
- Training data: deQuAD2.0 + SQuAD2.0 training set
- Evaluation data: SQuAD2.0 test set; deQuAD2.0 test set
- Infrastructure: 8xV100 GPU
- Published: July 9th, 2021
## Evaluation on English SQuAD2.0
## Evaluation on German deQuAD2.0
## Use Model in Pipeline
# Output:
## License - The MIT License
Copyright (c) 2021 Fang Xu, Deutsche Telekom AG
|
[
"# Bilingual English + German SQuAD2.0\n\nWe created German Squad 2.0 (deQuAD 2.0) and merged with SQuAD2.0 into an English and German training data for question answering. The bert-base-multilingual-cased is used to fine-tune bilingual QA downstream task.",
"## Details of deQuAD 2.0\nSQuAD2.0 was auto-translated into German. We hired professional editors to proofread the translated transcripts, correct mistakes and double check the answers to further polish the text and enhance annotation quality. The final German deQuAD dataset contains 130k training and 11k test samples.",
"## Overview\n- Language model: bert-base-multilingual-cased \n- Language: German, English \n- Training data: deQuAD2.0 + SQuAD2.0 training set \n- Evaluation data: SQuAD2.0 test set; deQuAD2.0 test set\n- Infrastructure: 8xV100 GPU \n- Published: July 9th, 2021",
"## Evaluation on English SQuAD2.0",
"## Evaluation on German deQuAD2.0",
"## Use Model in Pipeline",
"# Output:",
"## License - The MIT License\nCopyright (c) 2021 Fang Xu, Deutsche Telekom AG"
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #question-answering #english #german #de #en #multilingual #license-mit #endpoints_compatible #has_space #region-us \n",
"# Bilingual English + German SQuAD2.0\n\nWe created German Squad 2.0 (deQuAD 2.0) and merged with SQuAD2.0 into an English and German training data for question answering. The bert-base-multilingual-cased is used to fine-tune bilingual QA downstream task.",
"## Details of deQuAD 2.0\nSQuAD2.0 was auto-translated into German. We hired professional editors to proofread the translated transcripts, correct mistakes and double check the answers to further polish the text and enhance annotation quality. The final German deQuAD dataset contains 130k training and 11k test samples.",
"## Overview\n- Language model: bert-base-multilingual-cased \n- Language: German, English \n- Training data: deQuAD2.0 + SQuAD2.0 training set \n- Evaluation data: SQuAD2.0 test set; deQuAD2.0 test set\n- Infrastructure: 8xV100 GPU \n- Published: July 9th, 2021",
"## Evaluation on English SQuAD2.0",
"## Evaluation on German deQuAD2.0",
"## Use Model in Pipeline",
"# Output:",
"## License - The MIT License\nCopyright (c) 2021 Fang Xu, Deutsche Telekom AG"
] |
question-answering
|
transformers
|
We released a German Question Answering model fine-tuned on our own German Question Answering dataset (**deQuAD**), containing **130k** training and **11k** test QA pairs.
## Overview
- **Language model:** [electra-base-german-uncased](https://huggingface.co/german-nlp-group/electra-base-german-uncased)
- **Language:** German
- **Training data:** deQuAD2.0 training set (~42MB)
- **Evaluation data:** deQuAD2.0 test set (~4MB)
- **Infrastructure:** 8xV100 GPU
## Evaluation
We benchmarked the question answering performance on our deQuAD test data with some German language models. The fine-tuned electra-base-german-uncased model gives the best performance (Exact Match/F1).
| Model | All | HasAns | NoAns |
|-------|--------|--------|--------|
| electra-base-german-uncased | 70.97/76.18 | 67.73/78.02 | 74.29/74.29 |
| bert-base-german-cased |58.98/64.77| 49.19/60.63| 69.03/69.03|
|bert-base-german-dbmdz-uncased|63.70/68.00| 57.03/65.52| 70.51/70.51 |
|dbmdz/bert-base-german-europeana-uncased| 58.79/63.38| 52.14/61.22| 65.59/65.59|
## Use Model in Pipeline
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="deutsche-telekom/electra-base-de-squad2",
tokenizer="deutsche-telekom/electra-base-de-squad2"
)
contexts = ['''Die Robert Bosch GmbH ist ein im Jahr 1886 von Robert Bosch gegründetes multinationales deutsches Unternehmen.
Es ist tätig als Automobilzulieferer, Hersteller von Gebrauchsgütern und Industrie- und Gebäudetechnik und darüber hinaus
in der automatisierten Verpackungstechnik, wo Bosch den führenden Platz einnimmt. Die Robert Bosch GmbH und ihre rund 460
Tochter- und Regionalgesellschaften in mehr als 60 Ländern bilden die Bosch-Gruppe. Der Sitz der Geschäftsführung befindet
sich auf der Schillerhöhe in Gerlingen, der Firmensitz in Stuttgart. Seit dem 1. Juli 2012 ist Volkmar Denner Vorsitzender
der Geschäftsführung. Im Jahr 2015 konnte Bosch die Spitzenposition zurückgewinnen. Die Automobilsparte war im Jahr 2018
für 61 % des Konzernumsatzes von Bosch verantwortlich. Das Unternehmen hatte im Jahr 2018 in Deutschland an 85 Standorten
139.400 Mitarbeiter.''']*2
questions = ["Wer leitet die Robert Bosch GmbH?",
"Wer begründete die Robert Bosch GmbH?"]
qa_pipeline(context=contexts, question=questions)
```
## Output
```json
[{'score': 0.9537325501441956,
'start': 577,
'end': 591,
'answer': 'Volkmar Denner'},
{'score': 0.8804352879524231,
'start': 47,
'end': 59,
'answer': 'Robert Bosch'}]
```
## License - The MIT License
Copyright (c) 2021 Fang Xu, Deutsche Telekom AG
|
{"language": "de", "license": "mit", "tags": ["german"]}
|
deutsche-telekom/electra-base-de-squad2
| null |
[
"transformers",
"pytorch",
"safetensors",
"electra",
"question-answering",
"german",
"de",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #safetensors #electra #question-answering #german #de #license-mit #endpoints_compatible #region-us
|
We released the German Question Answering model fine-tuned with our own German Question Answering dataset (deQuAD) containing 130k training and 11k test QA pairs.
Overview
--------
* Language model: electra-base-german-uncased
* Language: German
* Training data: deQuAD2.0 training set (~42MB)
* Evaluation data: deQuAD2.0 test set (~4MB)
* Infrastructure: 8xV100 GPU
Evaluation
----------
We benchmarked the question answering performance on our deQuAD test data with some German language models. The fine-tuned electra-base-german-uncased model gives the best performance (Exact Match/F1).
Use Model in Pipeline
---------------------
Output
------
License - The MIT License
-------------------------
Copyright (c) 2021 Fang Xu, Deutsche Telekom AG
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #electra #question-answering #german #de #license-mit #endpoints_compatible #region-us \n"
] |
summarization
|
transformers
|
# mT5-small-sum-de-en-v1
This is a bilingual summarization model for English and German. It is based on the multilingual T5 model [google/mt5-small](https://huggingface.co/google/mt5-small).
[](https://www.welove.ai/)
This model is provided by the [One Conversation](https://www.welove.ai/)
team of [Deutsche Telekom AG](https://www.telekom.com/).
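The card itself does not show inference code; below is a hedged usage sketch (our own, assuming the training prefix `"summarize: "` is also expected at inference time):
```python
from transformers import pipeline

# Hedged sketch: the "summarize: " prefix mirrors the source_prefix used in training.
summarizer = pipeline("summarization", model="deutsche-telekom/mt5-small-sum-de-en-v1")

text = "Your German or English article text goes here ..."
print(summarizer("summarize: " + text, max_length=96)[0]["summary_text"])
```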
## Training
The training was conducted with the following hyperparameters:
- base model: [google/mt5-small](https://huggingface.co/google/mt5-small)
- source_prefix: `"summarize: "`
- batch size: 3
- max_source_length: 800
- max_target_length: 96
- warmup_ratio: 0.3
- number of train epochs: 10
- gradient accumulation steps: 2
- learning rate: 5e-5
## Datasets and Preprocessing
The datasets were preprocessed as follows:
The summary was tokenized with the [google/mt5-small](https://huggingface.co/google/mt5-small) tokenizer. Then only the records with no more than 94 summary tokens were selected.
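A minimal sketch of that filtering step (our code, not the authors'; the record structure is an assumption):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")

records = [
    {"text": "...", "summary": "Kurze Zusammenfassung des Artikels."},
    # further records ...
]

# Keep only records whose summary is at most 94 mT5 tokens long.
filtered = [r for r in records if len(tokenizer(r["summary"]).input_ids) <= 94]
```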
The MLSUM dataset has a special characteristic: the summary is often contained verbatim in the text as one or more sentences. These sentences have been removed from the texts, because we do not want to train a model that ultimately only extracts sentences as a summary.
This model is trained on the following datasets:
| Name | Language | Size | License
|------|----------|------|--------
| [CNN Daily - Train](https://github.com/abisee/cnn-dailymail) | en | 218,223 | The license is unclear. The data comes from CNN and Daily Mail. We assume that it may only be used for research purposes and not commercially.
| [Extreme Summarization (XSum) - Train](https://github.com/EdinburghNLP/XSum) | en | 204,005 | The license is unclear. The data comes from BBC. We assume that it may only be used for research purposes and not commercially.
| [wiki_lingua English](https://github.com/esdurmus/Wikilingua) | en | 130,331 | [Creative Commons CC BY-NC-SA 3.0 License](https://www.wikihow.com/wikiHow:Terms-of-Use)
| [wiki_lingua German](https://github.com/esdurmus/Wikilingua) | de | 48,390 | [Creative Commons CC BY-NC-SA 3.0 License](https://www.wikihow.com/wikiHow:Terms-of-Use)
| [MLSUM German - Train](https://github.com/ThomasScialom/MLSUM) | de | 218,043 | Usage of dataset is restricted to non-commercial research purposes only. Copyright belongs to the original copyright holders (see [here](https://github.com/ThomasScialom/MLSUM#mlsum)).
| [SwissText 2019 - Train](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html) | de | 84,564 | The license is unclear. The data was published in the [German Text Summarization Challenge](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html). We assume that they may be used for research purposes and not commercially.
| Language | Size
|------|------
| German | 350,997
| English | 552,559
| Total | 903,556
## Evaluation on MLSUM German Test Set (no beams)
| Model | rouge1 | rouge2 | rougeL | rougeLsum
|-------|--------|--------|--------|----------
| [ml6team/mt5-small-german-finetune-mlsum](https://huggingface.co/ml6team/mt5-small-german-finetune-mlsum) | 18.3607 | 5.3604 | 14.5456 | 16.1946
| **deutsche-telekom/mT5-small-sum-de-en-01 (this)** | **21.7336** | **7.2614** | **17.1323** | **19.3977**
## Evaluation on CNN Daily English Test Set (no beams)
| Model | rouge1 | rouge2 | rougeL | rougeLsum
|-------|--------|--------|--------|----------
| [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) | 26.7664 | 8.8243 | 18.3703 | 23.2614
| [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) | 28.5374 | 9.8565 | 19.4829 | 24.7364
| [mrm8488/t5-base-finetuned-summarize-news](https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news) | 37.576 | 14.7389 | 24.0254 | 34.4634
| **deutsche-telekom/mT5-small-sum-de-en-01 (this)** | **37.6339** | **16.5317** | **27.1418** | **34.9951**
## Evaluation on Extreme Summarization (XSum) English Test Set (no beams)
| Model | rouge1 | rouge2 | rougeL | rougeLsum
|-------|--------|--------|--------|----------
| [mrm8488/t5-base-finetuned-summarize-news](https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news) | 18.6204 | 3.535 | 12.3997 | 15.2111
| [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) | 28.5374 | 9.8565 | 19.4829 | 24.7364
| deutsche-telekom/mT5-small-sum-de-en-01 (this) | 32.3416 | 10.6191 | 25.3799 | 25.3908
| [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) | 44.2553 ♣ | 21.4289 ♣ | 36.2639 ♣ | 36.2696 ♣
♣: These values seem to be unusually high. It could be that the test set was used in the training data.
## License
Copyright (c) 2021 Philip May, Deutsche Telekom AG
This work is licensed under the [Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0)](https://creativecommons.org/licenses/by-nc-sa/3.0/) license.
|
{"language": ["de", "en", "multilingual"], "license": "cc-by-nc-sa-4.0", "tags": ["summarization"], "datasets": ["cnn_dailymail", "xsum", "wiki_lingua", "mlsum", "swiss_text_2019"]}
|
deutsche-telekom/mt5-small-sum-de-en-v1
| null |
[
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"de",
"en",
"multilingual",
"dataset:cnn_dailymail",
"dataset:xsum",
"dataset:wiki_lingua",
"dataset:mlsum",
"dataset:swiss_text_2019",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de",
"en",
"multilingual"
] |
TAGS
#transformers #pytorch #safetensors #mt5 #text2text-generation #summarization #de #en #multilingual #dataset-cnn_dailymail #dataset-xsum #dataset-wiki_lingua #dataset-mlsum #dataset-swiss_text_2019 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mT5-small-sum-de-en-v1
======================
This is a bilingual summarization model for English and German. It is based on the multilingual T5 model google/mt5-small.

----------------------------------------------
Evaluation on CNN Daily English Test Set (no beams)
---------------------------------------------------
Evaluation on Extreme Summarization (XSum) English Test Set (no beams)
----------------------------------------------------------------------
♣: These values seem to be unusually high. It could be that the test set was used in the training data.
License
-------
Copyright (c) 2021 Philip May, Deutsche Telekom AG
This work is licensed under the Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0) license.
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #mt5 #text2text-generation #summarization #de #en #multilingual #dataset-cnn_dailymail #dataset-xsum #dataset-wiki_lingua #dataset-mlsum #dataset-swiss_text_2019 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
summarization
|
transformers
|
# mT5-small-sum-de-mit-v1
This is a German summarization model. It is based on the multilingual T5 model [google/mt5-small](https://huggingface.co/google/mt5-small). The special characteristic of this model is that, unlike many other models, it is licensed under a permissive open source license (MIT). Among other things, this license allows commercial use.
[](https://www.welove.ai/)
This model is provided by the [One Conversation](https://www.welove.ai/)
team of [Deutsche Telekom AG](https://www.telekom.com/).
## Training
The training was conducted with the following hyperparameters:
- base model: [google/mt5-small](https://huggingface.co/google/mt5-small)
- source_prefix: `"summarize: "`
- batch size: 3 (6)
- max_source_length: 800
- max_target_length: 96
- warmup_ratio: 0.3
- number of train epochs: 10
- gradient accumulation steps: 2
- learning rate: 5e-5
## Datasets and Preprocessing
The datasets were preprocessed as follows:
The summary was tokenized with the [google/mt5-small](https://huggingface.co/google/mt5-small) tokenizer. Then only the records with no more than 94 summary tokens were selected.
This model is trained on the following dataset:
| Name | Language | Size | License
|------|----------|------|--------
| [SwissText 2019 - Train](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html) | de | 84,564 | The license is unclear. The data was published in the [German Text Summarization Challenge](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html).
We have permission to use the Swisstext dataset and release the resulting summarization model under MIT license (see [permission-declaration-swisstext.pdf](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-mit-v1/resolve/main/permission-declaration-swisstext.pdf)).
## Evaluation on MLSUM German Test Set (no beams)
| Model | rouge1 | rouge2 | rougeL | rougeLsum
|-------|--------|--------|--------|----------
| deutsche-telekom/mt5-small-sum-de-mit-v1 (this) | 16.8023 | 3.5531 | 12.6884 | 14.7624
| [ml6team/mt5-small-german-finetune-mlsum](https://huggingface.co/ml6team/mt5-small-german-finetune-mlsum) | 18.3607 | 5.3604 | 14.5456 | 16.1946
| **[deutsche-telekom/mt5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1)** | **21.7336** | **7.2614** | **17.1323** | **19.3977**
## License
Copyright (c) 2021 Philip May, Deutsche Telekom AG
Licensed under the MIT License (the "License"); you may not use this work except in compliance with the License. You may obtain a copy of the License by reviewing the file [LICENSE](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-mit-v1/blob/main/LICENSE) in the repository.
|
{"language": ["de"], "license": "mit", "tags": ["summarization"], "datasets": ["swiss_text_2019"]}
|
deutsche-telekom/mt5-small-sum-de-mit-v1
| null |
[
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"de",
"dataset:swiss_text_2019",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #safetensors #mt5 #text2text-generation #summarization #de #dataset-swiss_text_2019 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mT5-small-sum-de-mit-v1
=======================
This is a German summarization model. It is based on the multilingual T5 model google/mt5-small. The special characteristic of this model is that, unlike many other models, it is licensed under a permissive open source license (MIT). Among other things, this license allows commercial use.

* max\_source\_length: 800
* max\_target\_length: 96
* warmup\_ratio: 0.3
* number of train epochs: 10
* gradient accumulation steps: 2
* learning rate: 5e-5
Datasets and Preprocessing
--------------------------
The datasets were preprocessed as follows:
The summary was tokenized with the google/mt5-small tokenizer. Then only the records with no more than 94 summary tokens were selected.
This model is trained on the following dataset:
We have permission to use the Swisstext dataset and release the resulting summarization model under MIT license (see URL).
Evaluation on MLSUM German Test Set (no beams)
----------------------------------------------
License
-------
Copyright (c) 2021 Philip May, Deutsche Telekom AG
Licensed under the MIT License (the "License"); you may not use this work except in compliance with the License. You may obtain a copy of the License by reviewing the file LICENSE in the repository.
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #mt5 #text2text-generation #summarization #de #dataset-swiss_text_2019 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-NER-finetuned-ner
This model is a fine-tuned version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) on the x_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4380
- Precision: 0.2274
- Recall: 0.1119
- F1: 0.1499
- Accuracy: 0.8485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0822 | 1.0 | 878 | 1.1648 | 0.2068 | 0.1101 | 0.1437 | 0.8471 |
| 0.0102 | 2.0 | 1756 | 1.2697 | 0.2073 | 0.1110 | 0.1445 | 0.8447 |
| 0.0049 | 3.0 | 2634 | 1.3945 | 0.2006 | 0.1073 | 0.1399 | 0.8368 |
| 0.0025 | 4.0 | 3512 | 1.3994 | 0.2243 | 0.1126 | 0.1499 | 0.8501 |
| 0.0011 | 5.0 | 4390 | 1.4380 | 0.2274 | 0.1119 | 0.1499 | 0.8485 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["x_glue"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-NER-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "x_glue", "type": "x_glue", "args": "ner"}, "metrics": [{"type": "precision", "value": 0.2273838630806846, "name": "Precision"}, {"type": "recall", "value": 0.11185727172496743, "name": "Recall"}, {"type": "f1", "value": 0.14994961370507223, "name": "F1"}, {"type": "accuracy", "value": 0.8485324947589099, "name": "Accuracy"}]}]}]}
|
deval/bert-base-NER-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:x_glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-x_glue #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-NER-finetuned-ner
===========================
This model is a fine-tuned version of dslim/bert-base-NER on the x\_glue dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4380
* Precision: 0.2274
* Recall: 0.1119
* F1: 0.1499
* Accuracy: 0.8485
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-x_glue #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the x_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7979
- Precision: 0.0919
- Recall: 0.1249
- F1: 0.1059
- Accuracy: 0.4927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1773 | 1.0 | 878 | 1.7953 | 0.1025 | 0.1352 | 0.1166 | 0.5058 |
| 0.0397 | 2.0 | 1756 | 2.0827 | 0.0906 | 0.1230 | 0.1043 | 0.4888 |
| 0.022 | 3.0 | 2634 | 2.8677 | 0.0864 | 0.1260 | 0.1025 | 0.4098 |
| 0.0126 | 4.0 | 3512 | 2.8584 | 0.0848 | 0.1201 | 0.0994 | 0.4424 |
| 0.0085 | 5.0 | 4390 | 2.7979 | 0.0919 | 0.1249 | 0.1059 | 0.4927 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["x_glue"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "x_glue", "type": "x_glue", "args": "ner"}, "metrics": [{"type": "precision", "value": 0.09187560910782316, "name": "Precision"}, {"type": "recall", "value": 0.1248795761078998, "name": "Recall"}, {"type": "f1", "value": 0.10586493798172632, "name": "F1"}, {"type": "accuracy", "value": 0.492660102891609, "name": "Accuracy"}]}]}]}
|
deval/bert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:x_glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-x_glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased-finetuned-ner
===============================
This model is a fine-tuned version of bert-base-uncased on the x\_glue dataset.
It achieves the following results on the evaluation set:
* Loss: 2.7979
* Precision: 0.0919
* Recall: 0.1249
* F1: 0.1059
* Accuracy: 0.4927
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-x_glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0606
- Precision: 0.9277
- Recall: 0.9385
- F1: 0.9330
- Accuracy: 0.9844
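As a hedged usage sketch (not part of the auto-generated card; the example sentence is ours), the checkpoint can be run as a token-classification pipeline:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="deval/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # group word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```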
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2454 | 1.0 | 878 | 0.0692 | 0.9106 | 0.9212 | 0.9159 | 0.9809 |
| 0.0517 | 2.0 | 1756 | 0.0616 | 0.9203 | 0.9352 | 0.9277 | 0.9834 |
| 0.0314 | 3.0 | 2634 | 0.0606 | 0.9277 | 0.9385 | 0.9330 | 0.9844 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9276788676324229, "name": "Precision"}, {"type": "recall", "value": 0.9384718648618414, "name": "Recall"}, {"type": "f1", "value": 0.9330441552663775, "name": "F1"}, {"type": "accuracy", "value": 0.9843836878643939, "name": "Accuracy"}]}]}]}
|
deval/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-ner
=====================================
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0606
* Precision: 0.9277
* Recall: 0.9385
* F1: 0.9330
* Accuracy: 0.9844
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
# Fine-tuned Wav2Vec2 on TIMIT - 4001 checkpoint
|
{}
|
devin132/w2v-timit-ft-4001
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us
|
# Fintuned Wav2Vec of Timit - 4001 checkpoint
|
[
"# Fintuned Wav2Vec of Timit - 4001 checkpoint"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n",
"# Fintuned Wav2Vec of Timit - 4001 checkpoint"
] |
fill-mask
|
transformers
|
# Dummy Model
This be a dummmmmy
|
{}
|
devtrent/dummy-model
| null |
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #camembert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
# Dummy Model
This be a dummmmmy
|
[
"# Dummy Model\n\nThis be a dummmmmy"
] |
[
"TAGS\n#transformers #pytorch #camembert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"# Dummy Model\n\nThis be a dummmmmy"
] |
text-classification
|
transformers
|
DistilBERT model trained on the OSCAR Nepali corpus from Hugging Face Datasets.
We trained the DistilBERT language model on the OSCAR Nepali corpus and then fine-tuned it for a downstream sentiment analysis task. The dataset used for sentiment analysis was first extracted from Twitter by filtering for Devanagari text, then labelled as positive, negative and neutral. However, since the neutral labels outnumbered the positive and negative tweets, we decided to use only positive and negative tweets for ease of training.
LABEL_1 = negative
LABEL_0 = positive
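A hedged usage sketch (our code, not the authors') applying the label mapping above:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="dexhrestha/Nepali-DistilBERT")

label_meaning = {"LABEL_0": "positive", "LABEL_1": "negative"}

result = classifier("<Devanagari tweet text here>")[0]
print(label_meaning[result["label"]], result["score"])
```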
|
{}
|
dexhrestha/Nepali-DistilBERT
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
DistilBERT model trained on OSCAR nepali corpus from huggingface datasets.
We trained the DitilBERT language model on OSCAR nepali corpus and then for downstream sentiment analysis task. The dataset we used for sentiment analysis was first extracted from twitter filtering for devenagari text then labelled it as postive,negative and neutral. However, since neutral labels exceeded the positive and negative tweets we decided to use only positive and negative tweets for ease of training.
LABEL_1 = negative
LABEL_0 = positive
|
[] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# Aerith GPT model
|
{"tags": ["conversational"]}
|
df4rfrrf/DialoGPT-medium-Aerith
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Aerith GPT model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-classification
|
transformers
|
This is the repo for the final project.
|
{}
|
dhairya2303/bert-base-uncased-emotion-AD
| null |
[
"transformers",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
This the repo for the final project
|
[] |
[
"TAGS\n#transformers #tf #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
{'sadness':0,'joy':1,'love':2,'anger':3,'fear':4,'surprise':5}
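A small sketch (ours, not from the card) that inverts this mapping, assuming the model reports the generic `LABEL_<id>` names:
```python
emotion2id = {"sadness": 0, "joy": 1, "love": 2, "anger": 3, "fear": 4, "surprise": 5}
id2emotion = {f"LABEL_{i}": name for name, i in emotion2id.items()}

print(id2emotion["LABEL_2"])  # -> love
```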
|
{}
|
dhairya2303/bert-base-uncased-emotion_holler
| null |
[
"transformers",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
{'sadness':0,'joy':1,'love':2,'anger':3,'fear':4,'surprise':5}
|
[] |
[
"TAGS\n#transformers #tf #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-funsd-test
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1
- Datasets 1.18.0
- Tokenizers 0.11.0
|
{"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "layoutlmv2-finetuned-funsd-test", "results": []}]}
|
dhanesh123in/layoutlmv2-finetuned-funsd-test
| null |
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #layoutlmv2 #token-classification #generated_from_trainer #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
# layoutlmv2-finetuned-funsd-test
This model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1
- Datasets 1.18.0
- Tokenizers 0.11.0
|
[
"# layoutlmv2-finetuned-funsd-test\n\nThis model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 1000",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1\n- Datasets 1.18.0\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #layoutlmv2 #token-classification #generated_from_trainer #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# layoutlmv2-finetuned-funsd-test\n\nThis model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 1000",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1\n- Datasets 1.18.0\n- Tokenizers 0.11.0"
] |
text-generation
|
transformers
|
# AMy San
|
{"tags": ["conversational"]}
|
dhanushlnaik/amySan
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# AMy San
|
[
"# AMy San"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# AMy San"
] |
text-classification
|
transformers
|
"hello"
|
{}
|
dhikri/question_answering_glue
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
"hello"
|
[] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
# DistilBert Dummy Sentiment Model
## Purpose
This is a dummy model that can be used for testing the transformers `pipeline` with the task `sentiment-analysis`. It should always give random results (i.e. `{"label": "negative", "score": 0.5}`).
## How to use
```python
classifier = pipeline("sentiment-analysis", "dhpollack/distilbert-dummy-sentiment")
results = classifier(["this is a test", "another test"])
```
## Notes
This was created as follows:
1. Create a vocab.txt file (in /tmp/vocab.txt in this example).
```
[UNK]
[SEP]
[PAD]
[CLS]
[MASK]
```
2. Open a python shell:
```python
import transformers
config = transformers.DistilBertConfig(vocab_size=5, n_layers=1, n_heads=1, dim=1, hidden_dim=4 * 1, num_labels=2, id2label={0: "negative", 1: "positive"}, label2id={"negative": 0, "positive": 1})
model = transformers.DistilBertForSequenceClassification(config)
tokenizer = transformers.DistilBertTokenizer("/tmp/vocab.txt", model_max_length=512)
config.save_pretrained(".")
model.save_pretrained(".")
tokenizer.save_pretrained(".")
```
|
{"language": ["multilingual", "en"], "tags": ["sentiment-analysis", "testing", "unit tests"]}
|
dhpollack/distilbert-dummy-sentiment
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"sentiment-analysis",
"testing",
"unit tests",
"multilingual",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"multilingual",
"en"
] |
TAGS
#transformers #pytorch #distilbert #text-classification #sentiment-analysis #testing #unit tests #multilingual #en #autotrain_compatible #endpoints_compatible #region-us
|
# DistilBert Dummy Sentiment Model
## Purpose
This is a dummy model that can be used for testing the transformers 'pipeline' with the task 'sentiment-analysis'. It should always give random results (i.e. '{"label": "negative", "score": 0.5}').
## How to use
## Notes
This was created as follows:
1. Create a URL file (in /tmp/URL in this example).
2. Open a python shell:
|
[
"# DistilBert Dummy Sentiment Model",
"## Purpose\n\nThis is a dummy model that can be used for testing the transformers 'pipeline' with the task 'sentiment-analysis'. It should always give random results (i.e. '{\"label\": \"negative\", \"score\": 0.5}').",
"## How to use",
"## Notes\n\nThis was created as follows:\n\n1. Create a URL file (in /tmp/URL in this example).\n\n\n\n2. Open a python shell:"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #sentiment-analysis #testing #unit tests #multilingual #en #autotrain_compatible #endpoints_compatible #region-us \n",
"# DistilBert Dummy Sentiment Model",
"## Purpose\n\nThis is a dummy model that can be used for testing the transformers 'pipeline' with the task 'sentiment-analysis'. It should always give random results (i.e. '{\"label\": \"negative\", \"score\": 0.5}').",
"## How to use",
"## Notes\n\nThis was created as follows:\n\n1. Create a URL file (in /tmp/URL in this example).\n\n\n\n2. Open a python shell:"
] |
text-classification
|
transformers
|
### TUNiB-Electra Stereotype Detector
Finetuned TUNiB-Electra base with K-StereoSet.
Original Code: https://github.com/newfull5/Stereotype-Detector
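A minimal usage sketch (assuming the checkpoint works with the standard `text-classification` pipeline; the Korean example sentence is purely illustrative):
```python
from transformers import pipeline

# Load the fine-tuned stereotype classifier via the standard pipeline API.
classifier = pipeline(
    "text-classification",
    model="dhtocks/tunib-electra-stereotype-classifier",
)

# Illustrative Korean input, since the model was fine-tuned on K-StereoSet.
print(classifier("그 사람들은 원래 다 게으르다."))
```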
|
{}
|
dhtocks/tunib-electra-stereotype-classifier
| null |
[
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #electra #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
### TUNiB-Electra Stereotype Detector
Finetuned TUNiB-Electra base with K-StereoSet.
Original Code: URL
|
[
"### TUNiB-Electra Stereotype Detector\n\nFinetuned TUNiB-Electra base with K-StereoSet.\n\nOriginal Code: URL"
] |
[
"TAGS\n#transformers #pytorch #electra #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"### TUNiB-Electra Stereotype Detector\n\nFinetuned TUNiB-Electra base with K-StereoSet.\n\nOriginal Code: URL"
] |
feature-extraction
|
transformers
|
Language Model 2
For Language agnostic Dense Passage Retrieval
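A minimal sketch for extracting passage embeddings, assuming the checkpoint is compatible with the standard DPR context-encoder classes in transformers (the card does not state this explicitly):
```python
import torch
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

# Assumed: the repository ships a DPR-style context encoder and matching tokenizer.
tokenizer = DPRContextEncoderTokenizer.from_pretrained("diarsabri/LaDPR-context-encoder")
model = DPRContextEncoder.from_pretrained("diarsabri/LaDPR-context-encoder")

inputs = tokenizer("Example passage to index for retrieval.", return_tensors="pt")
with torch.no_grad():
    passage_embedding = model(**inputs).pooler_output  # dense passage vector
print(passage_embedding.shape)
```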
|
{}
|
diarsabri/LaDPR-context-encoder
| null |
[
"transformers",
"pytorch",
"dpr",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #dpr #feature-extraction #endpoints_compatible #region-us
|
Language Model 2
For Language agnostic Dense Passage Retrieval
|
[] |
[
"TAGS\n#transformers #pytorch #dpr #feature-extraction #endpoints_compatible #region-us \n"
] |
feature-extraction
|
transformers
|
Language Model 1
For Language agnostic Dense Passage Retrieval
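A matching sketch for encoding queries, again assuming compatibility with the standard DPR question-encoder classes (not stated in the card); the resulting vectors are meant to be compared against those from the companion context encoder:
```python
import torch
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

# Assumed: the repository ships a DPR-style question encoder and matching tokenizer.
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("diarsabri/LaDPR-query-encoder")
model = DPRQuestionEncoder.from_pretrained("diarsabri/LaDPR-query-encoder")

inputs = tokenizer("Who wrote the theory of relativity?", return_tensors="pt")
with torch.no_grad():
    query_embedding = model(**inputs).pooler_output  # dense query vector
print(query_embedding.shape)
```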
|
{}
|
diarsabri/LaDPR-query-encoder
| null |
[
"transformers",
"pytorch",
"dpr",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #dpr #feature-extraction #endpoints_compatible #region-us
|
Language Model 1
For Language agnostic Dense Passage Retrieval
|
[] |
[
"TAGS\n#transformers #pytorch #dpr #feature-extraction #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53
---
language: gl
datasets:
- OpenSLR 77
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Galician Wav2Vec2-Large-XLSR-53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR
type: openslr
args: gl
metrics:
- name: Test WER
type: wer
value: 16.79
---
Wav2Vec2-Large-XLSR-53-galician
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Galician using the [OpenSLR](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "gl", split="test[:2%]") # This is not available yet, load OpenSLR or your dataset instead
processor = Wav2Vec2Processor.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")
model = Wav2Vec2ForCTC.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Galician test data of Common Voice (when it is released).
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "gl", split="test") # This is not available yet, load OpenSLR or your dataset instead
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")
model = Wav2Vec2ForCTC.from_pretrained("diego-fustes/wav2vec2-large-xlsr-gl")
model.to("cuda")
chars_to_ignore_regex = '[^a-záéíóúñ ]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 16.79 % on OpenSLR split
## Training
The OpenSLR [SLR77](https://openslr.org/77/) dataset was used for training and validation. The dataset was split as 70% for training, 15% for validation and 15% for testing
The script used for training can be found [here](https://github.com/diego-fustes/xlsr-fine-tuning-gl)
|
{}
|
diego-fustes/wav2vec2-large-xlsr-gl
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #safetensors #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53
---
language: gl
datasets:
- OpenSLR 77
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Galician Wav2Vec2-Large-XLSR-53
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR
type: openslr
args: gl
metrics:
- name: Test WER
type: wer
value: 16.79
---
Wav2Vec2-Large-XLSR-53-galician
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Galician using the OpenSLR dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Galician test data of Common Voice (when it is released).
Test Result: 16.79 % on OpenSLR split
## Training
The OpenSLR SLR77 dataset was used for training and validation. The dataset was split as 70% for training, 15% for validation and 15% for testing
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-53\n\n---\nlanguage: gl\ndatasets:\n- OpenSLR 77\nmetrics:\n- wer\ntags:\n- audio\n- automatic-speech-recognition\n- speech\n- xlsr-fine-tuning-week\nlicense: apache-2.0\nmodel-index:\n- name: Galician Wav2Vec2-Large-XLSR-53\n results:\n - task: \n name: Speech Recognition\n type: automatic-speech-recognition\n dataset:\n name: OpenSLR\n type: openslr\n args: gl\n metrics:\n - name: Test WER\n type: wer\n value: 16.79\n---\n\nWav2Vec2-Large-XLSR-53-galician\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on galician using the OpenSLR dataset\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Galician test data of Common Voice (when it is released).\n\n\n\nTest Result: 16.79 % on OpenSLR split",
"## Training\n\nThe OpenSLR SLR77 dataset was used for training and validation. The dataset was split as 70% for training, 15% for validation and 15% for testing \n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53\n\n---\nlanguage: gl\ndatasets:\n- OpenSLR 77\nmetrics:\n- wer\ntags:\n- audio\n- automatic-speech-recognition\n- speech\n- xlsr-fine-tuning-week\nlicense: apache-2.0\nmodel-index:\n- name: Galician Wav2Vec2-Large-XLSR-53\n results:\n - task: \n name: Speech Recognition\n type: automatic-speech-recognition\n dataset:\n name: OpenSLR\n type: openslr\n args: gl\n metrics:\n - name: Test WER\n type: wer\n value: 16.79\n---\n\nWav2Vec2-Large-XLSR-53-galician\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on galician using the OpenSLR dataset\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Galician test data of Common Voice (when it is released).\n\n\n\nTest Result: 16.79 % on OpenSLR split",
"## Training\n\nThe OpenSLR SLR77 dataset was used for training and validation. The dataset was split as 70% for training, 15% for validation and 15% for testing \n\nThe script used for training can be found here"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1
This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
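The listed hyperparameters correspond roughly to the `Seq2SeqTrainingArguments` below; this is a hypothetical reconstruction, not the original training script (the Adam betas and epsilon above are the optimizer defaults):
```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-tiny-random-finetuned-en-to-ro",  # illustrative output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    weight_decay=0.01,  # taken from the model name, not listed explicitly above
    fp16=True,          # mixed_precision_training: Native AMP
)
```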
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["wmt16_en_ro_pre_processed"], "model-index": [{"name": "t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1", "results": []}]}
|
diegor2/t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetu-truncated-d22eed
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16_en_ro_pre_processed #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1
This model is a fine-tuned version of patrickvonplaten/t5-tiny-random on the wmt16_en_ro_pre_processed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
[
"# t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1\n\nThis model is a fine-tuned version of patrickvonplaten/t5-tiny-random on the wmt16_en_ro_pre_processed dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16_en_ro_pre_processed #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1\n\nThis model is a fine-tuned version of patrickvonplaten/t5-tiny-random on the wmt16_en_ro_pre_processed dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetuned-en-to-ro-TRAIN_EPOCHS-1
This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4897
- Bleu: 0.0002
- Gen Len: 9.0
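A minimal inference sketch for this checkpoint; the T5-style task prefix is an assumption, and given the tiny random base model and the near-zero BLEU above, the output will be of very low quality:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "diegor2/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetu-truncated-41f800"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The "translate English to Romanian:" prefix follows the usual T5 convention (assumed here).
inputs = tokenizer("translate English to Romanian: The house is blue.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```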
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 6.2585 | 1.0 | 76290 | 6.4897 | 0.0002 | 9.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["wmt16_en_ro_pre_processed"], "metrics": ["bleu"], "model-index": [{"name": "t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetuned-en-to-ro-TRAIN_EPOCHS-1", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16_en_ro_pre_processed", "type": "wmt16_en_ro_pre_processed", "args": "enro"}, "metrics": [{"type": "bleu", "value": 0.0002, "name": "Bleu"}]}]}]}
|
diegor2/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetu-truncated-41f800
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16_en_ro_pre_processed #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-tiny-random-length-96-learning\_rate-2e-05-weight\_decay-0.005-finetuned-en-to-ro-TRAIN\_EPOCHS-1
====================================================================================================
This model is a fine-tuned version of patrickvonplaten/t5-tiny-random on the wmt16\_en\_ro\_pre\_processed dataset.
It achieves the following results on the evaluation set:
* Loss: 6.4897
* Bleu: 0.0002
* Gen Len: 9.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16_en_ro_pre_processed #model-index #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1
This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["wmt16_en_ro_pre_processed"], "model-index": [{"name": "t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1", "results": []}]}
|
diegor2/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16_en_ro_pre_processed #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1
This model is a fine-tuned version of patrickvonplaten/t5-tiny-random on the wmt16_en_ro_pre_processed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
[
"# t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1\n\nThis model is a fine-tuned version of patrickvonplaten/t5-tiny-random on the wmt16_en_ro_pre_processed dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16_en_ro_pre_processed #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro-TRAIN_EPOCHS-1\n\nThis model is a fine-tuned version of patrickvonplaten/t5-tiny-random on the wmt16_en_ro_pre_processed dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
sentence-similarity
|
transformers
|
# Twitter4SSE
This model maps texts to 768 dimensional dense embeddings that encode semantic similarity.
It was trained with Multiple Negatives Ranking Loss (MNRL) on a Twitter dataset.
It was initialized from [BERTweet](https://huggingface.co/vinai/bertweet-base) and trained with [Sentence-transformers](https://www.sbert.net/).
## Usage
The model is easier to use with the sentence-transformers library.
```
pip install -U sentence-transformers
```
```
from sentence_transformers import SentenceTransformer
sentences = ["This is the first tweet", "This is the second tweet"]
model = SentenceTransformer('digio/Twitter4SSE')
embeddings = model.encode(sentences)
print(embeddings)
```
Without the sentence-transformers library, please refer to [this repository](https://huggingface.co/sentence-transformers) for detailed instructions on how to use Sentence Transformers on Huggingface.
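If you prefer plain `transformers`, a rough sketch with mean pooling is shown below; note that the actual pooling operation is defined by the checkpoint's sentence-transformers configuration, so mean pooling is an assumption here:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("digio/Twitter4SSE")
model = AutoModel.from_pretrained("digio/Twitter4SSE")

sentences = ["This is the first tweet", "This is the second tweet"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state

# Mean-pool over valid (non-padding) tokens to get one vector per sentence.
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # expected: (2, 768)
```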
## Citing & Authors
The official paper [Exploiting Twitter as Source of Large Corpora of Weakly Similar Pairs for Semantic Sentence Embeddings](https://arxiv.org/abs/2110.02030) will be presented at EMNLP 2021. Further details will be available soon.
```
@inproceedings{di-giovanni-brambilla-2021-exploiting,
title = "Exploiting {T}witter as Source of Large Corpora of Weakly Similar Pairs for Semantic Sentence Embeddings",
author = "Di Giovanni, Marco and
Brambilla, Marco",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.780",
pages = "9902--9910",
}
```
The official code is available on [GitHub](https://github.com/marco-digio/Twitter4SSE)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["Pytorch", "Sentence Transformers", "Transformers"], "pipeline_tag": "sentence-similarity"}
|
digio/Twitter4SSE
| null |
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"Pytorch",
"Sentence Transformers",
"Transformers",
"sentence-similarity",
"en",
"arxiv:2110.02030",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2110.02030"
] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #feature-extraction #Pytorch #Sentence Transformers #Transformers #sentence-similarity #en #arxiv-2110.02030 #license-apache-2.0 #endpoints_compatible #region-us
|
# Twitter4SSE
This model maps texts to 768 dimensional dense embeddings that encode semantic similarity.
It was trained with Multiple Negatives Ranking Loss (MNRL) on a Twitter dataset.
It was initialized from BERTweet and trained with Sentence-transformers.
## Usage
The model is easier to use with the sentence-transformers library.
Without the sentence-transformers library, please refer to this repository for detailed instructions on how to use Sentence Transformers on Huggingface.
## Citing & Authors
The official paper Exploiting Twitter as Source of Large Corpora of Weakly Similar Pairs for Semantic Sentence Embeddings will be presented at EMNLP 2021. Further details will be available soon.
The official code is available on GitHub
|
[
"# Twitter4SSE\n\nThis model maps texts to 768 dimensional dense embeddings that encode semantic similarity. \nIt was trained with Multiple Negatives Ranking Loss (MNRL) on a Twitter dataset. \nIt was initialized from BERTweet and trained with Sentence-transformers.",
"## Usage\n\nThe model is easier to use with sentence-trainsformers library\n\n\n\n\n\n\nWithout sentence-transfomer library, please refer to this repository for detailed instructions on how to use Sentence Transformers on Huggingface.",
"## Citing & Authors\n\nThe official paper Exploiting Twitter as Source of Large Corpora of Weakly Similar Pairs for Semantic Sentence Embeddings will be presented at EMNLP 2021. Further details will be available soon. \n\n\n\nThe official code is available on GitHub"
] |
[
"TAGS\n#transformers #pytorch #roberta #feature-extraction #Pytorch #Sentence Transformers #Transformers #sentence-similarity #en #arxiv-2110.02030 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Twitter4SSE\n\nThis model maps texts to 768 dimensional dense embeddings that encode semantic similarity. \nIt was trained with Multiple Negatives Ranking Loss (MNRL) on a Twitter dataset. \nIt was initialized from BERTweet and trained with Sentence-transformers.",
"## Usage\n\nThe model is easier to use with sentence-trainsformers library\n\n\n\n\n\n\nWithout sentence-transfomer library, please refer to this repository for detailed instructions on how to use Sentence Transformers on Huggingface.",
"## Citing & Authors\n\nThe official paper Exploiting Twitter as Source of Large Corpora of Weakly Similar Pairs for Semantic Sentence Embeddings will be presented at EMNLP 2021. Further details will be available soon. \n\n\n\nThe official code is available on GitHub"
] |
zero-shot-classification
|
transformers
|
# COVID-Twitter-BERT v2 MNLI
## Model description
This model provides a zero-shot classifier to be used in cases where it is not possible to finetune CT-BERT on a specific task, due to lack of labelled data.
The technique is based on [Yin et al.](https://arxiv.org/abs/1909.00161).
The article describes a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers.
The model is already finetuned on 400'000 generic logical tasks.
We can then use it as a zero-shot classifier by reformulating the classification task as a question.
Let's say we want to classify COVID-tweets as vaccine-related and not vaccine-related.
The typical way would be to collect a few hundred pre-annotated tweets and organise them into two classes.
Then you would finetune the model on this.
With the zero-shot mnli-classifier, you can instead reformulate your question as "This text is about vaccines", and use it directly at inference time - without any training.
Find more info about the model on our [GitHub page](https://github.com/digitalepidemiologylab/covid-twitter-bert).
## Usage
Please note that how you formulate the question can give slightly different results.
Collecting a training set and finetuning on this will most likely give you better accuracy.
The easiest way to try this out is by using the Hugging Face pipeline.
This uses the default English template where it puts the text "This example is " in front of the text.
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="digitalepidemiologylab/covid-twitter-bert-v2-mnli")
```
You can then use this pipeline to classify sequences into any of the class names you specify.
```python
sequence_to_classify = 'To stop the pandemic it is important that everyone turns up for their shots.'
candidate_labels = ['health', 'sport', 'vaccine','guns']
hypothesis_template = 'This example is {}.'
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template, multi_class=True)
```
## Training procedure
The model is finetuned on the 400k large [MNLI-task](https://cims.nyu.edu/~sbowman/multinli/).
## References
```bibtex
@article{muller2020covid,
title={COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter},
author={M{\"u}ller, Martin and Salath{\'e}, Marcel and Kummervold, Per E},
journal={arXiv preprint arXiv:2005.07503},
year={2020}
}
```
or
```
Martin Müller, Marcel Salathé, and Per E. Kummervold.
COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter.
arXiv preprint arXiv:2005.07503 (2020).
```
|
{"language": ["en"], "license": "mit", "tags": ["Twitter", "COVID-19", "text-classification", "pytorch", "tensorflow", "bert"], "datasets": ["mnli"], "thumbnail": "https://raw.githubusercontent.com/digitalepidemiologylab/covid-twitter-bert/master/images/COVID-Twitter-BERT_small.png", "pipeline_tag": "zero-shot-classification", "widget": [{"text": "To stop the pandemic it is important that everyone turns up for their shots.", "candidate_labels": "health, sport, vaccine, guns"}]}
|
digitalepidemiologylab/covid-twitter-bert-v2-mnli
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"Twitter",
"COVID-19",
"tensorflow",
"zero-shot-classification",
"en",
"dataset:mnli",
"arxiv:1909.00161",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1909.00161"
] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #bert #text-classification #Twitter #COVID-19 #tensorflow #zero-shot-classification #en #dataset-mnli #arxiv-1909.00161 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# COVID-Twitter-BERT v2 MNLI
## Model description
This model provides a zero-shot classifier to be used in cases where it is not possible to finetune CT-BERT on a specific task, due to lack of labelled data.
The technique is based on Yin et al..
The article describes a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers.
The model is already finetuned on 400'000 generic logical tasks.
We can then use it as a zero-shot classifier by reformulating the classification task as a question.
Let's say we want to classify COVID-tweets as vaccine-related and not vaccine-related.
The typical way would be to collect a few hundred pre-annotated tweets and organise them into two classes.
Then you would finetune the model on this.
With the zero-shot mnli-classifier, you can instead reformulate your question as "This text is about vaccines", and use it directly at inference time - without any training.
Find more info about the model on our GitHub page.
## Usage
Please note that how you formulate the question can give slightly different results.
Collecting a training set and finetuning on this will most likely give you better accuracy.
The easiest way to try this out is by using the Hugging Face pipeline.
This uses the default English template where it puts the text "This example is " in front of the text.
You can then use this pipeline to classify sequences into any of the class names you specify.
## Training procedure
The model is finetuned on the 400k large MNLI-task.
## References
or
|
[
"# COVID-Twitter-BERT v2 MNLI",
"## Model description\nThis model provides a zero-shot classifier to be used in cases where it is not possible to finetune CT-BERT on a specific task, due to lack of labelled data.\n\nThe technique is based on Yin et al..\nThe article describes a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers.\nThe model is already finetuned on 400'000 generaic logical tasks.\nWe can then use it as a zero-shot classifier by reformulating the classification task as a question.\n\nLet's say we want to classify COVID-tweets as vaccine-related and not vaccine-related.\nThe typical way would be to collect a few hunder pre-annotated tweets and organise them in two classes.\nThen you would finetune the model on this.\n\nWith the zero-shot mnli-classifier, you can instead reformulate your question as \"This text is about vaccines\", and use this directly on inference - without any training.\n\nFind more info about the model on our GitHub page.",
"## Usage\nPlease note that how you formulate the question can give slightly different results.\nCollecting a training set and finetuning on this, will most likely give you better accuracy.\n\nThe easiest way to try this out is by using the Hugging Face pipeline.\nThis uses the default Enlish template where it puts the text \"This example is \" in front of the text.\n\n\nYou can then use this pipeline to classify sequences into any of the class names you specify.",
"## Training procedure\nThe model is finetuned on the 400k large MNLI-task.",
"## References\n\nor"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #text-classification #Twitter #COVID-19 #tensorflow #zero-shot-classification #en #dataset-mnli #arxiv-1909.00161 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# COVID-Twitter-BERT v2 MNLI",
"## Model description\nThis model provides a zero-shot classifier to be used in cases where it is not possible to finetune CT-BERT on a specific task, due to lack of labelled data.\n\nThe technique is based on Yin et al..\nThe article describes a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers.\nThe model is already finetuned on 400'000 generaic logical tasks.\nWe can then use it as a zero-shot classifier by reformulating the classification task as a question.\n\nLet's say we want to classify COVID-tweets as vaccine-related and not vaccine-related.\nThe typical way would be to collect a few hunder pre-annotated tweets and organise them in two classes.\nThen you would finetune the model on this.\n\nWith the zero-shot mnli-classifier, you can instead reformulate your question as \"This text is about vaccines\", and use this directly on inference - without any training.\n\nFind more info about the model on our GitHub page.",
"## Usage\nPlease note that how you formulate the question can give slightly different results.\nCollecting a training set and finetuning on this, will most likely give you better accuracy.\n\nThe easiest way to try this out is by using the Hugging Face pipeline.\nThis uses the default Enlish template where it puts the text \"This example is \" in front of the text.\n\n\nYou can then use this pipeline to classify sequences into any of the class names you specify.",
"## Training procedure\nThe model is finetuned on the 400k large MNLI-task.",
"## References\n\nor"
] |
null |
transformers
|
# COVID-Twitter-BERT v2
## Model description
BERT-large-uncased model, pretrained on a corpus of messages from Twitter about COVID-19. This model is identical to [covid-twitter-bert](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert) - but trained on more data, resulting in higher downstream performance.
Find more info on our [GitHub page](https://github.com/digitalepidemiologylab/covid-twitter-bert).
## Intended uses & limitations
The model can e.g. be used in the `fill-mask` task (see below). You can also use the model without the MLM/NSP heads and train a classifier with it.
#### How to use
```python
from transformers import pipeline
import json
pipe = pipeline(task='fill-mask', model='digitalepidemiologylab/covid-twitter-bert-v2')
out = pipe(f"In places with a lot of people, it's a good idea to wear a {pipe.tokenizer.mask_token}")
print(json.dumps(out, indent=4))
[
{
"sequence": "[CLS] in places with a lot of people, it's a good idea to wear a mask [SEP]",
"score": 0.9998226761817932,
"token": 7308,
"token_str": "mask"
},
...
]
```
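For the classifier use case mentioned above, a minimal sketch follows; the number of labels is hypothetical, and the classification head is freshly initialized, so it still has to be fine-tuned on your labelled data:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("digitalepidemiologylab/covid-twitter-bert-v2")
model = AutoModelForSequenceClassification.from_pretrained(
    "digitalepidemiologylab/covid-twitter-bert-v2",
    num_labels=2,  # hypothetical: set this to match your task
)
# The sequence-classification head is randomly initialized here and must be trained.
```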
## Training procedure
This model was trained on 97M unique tweets (1.2B training examples) collected between January 12 and July 5, 2020 containing at least one of the keywords "wuhan", "ncov", "coronavirus", "covid", or "sars-cov-2". These tweets were filtered and preprocessed to reach a final sample of 22.5M tweets (containing 40.7M sentences and 633M tokens) which were used for training.
## Eval results
The model was evaluated based on downstream Twitter text classification tasks from previous SemEval challenges.
### BibTeX entry and citation info
```bibtex
@article{muller2020covid,
title={COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter},
author={M{\"u}ller, Martin and Salath{\'e}, Marcel and Kummervold, Per E},
journal={arXiv preprint arXiv:2005.07503},
year={2020}
}
```
or
```
Martin Müller, Marcel Salathé, and Per E. Kummervold.
COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter.
arXiv preprint arXiv:2005.07503 (2020).
```
|
{"language": "en", "license": "mit", "tags": ["Twitter", "COVID-19"], "thumbnail": "https://raw.githubusercontent.com/digitalepidemiologylab/covid-twitter-bert/master/images/COVID-Twitter-BERT_small.png"}
|
digitalepidemiologylab/covid-twitter-bert-v2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"Twitter",
"COVID-19",
"en",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #bert #Twitter #COVID-19 #en #license-mit #endpoints_compatible #has_space #region-us
|
# COVID-Twitter-BERT v2
## Model description
BERT-large-uncased model, pretrained on a corpus of messages from Twitter about COVID-19. This model is identical to covid-twitter-bert - but trained on more data, resulting in higher downstream performance.
Find more info on our GitHub page.
## Intended uses & limitations
The model can e.g. be used in the 'fill-mask' task (see below). You can also use the model without the MLM/NSP heads and train a classifier with it.
#### How to use
## Training procedure
This model was trained on 97M unique tweets (1.2B training examples) collected between January 12 and July 5, 2020 containing at least one of the keywords "wuhan", "ncov", "coronavirus", "covid", or "sars-cov-2". These tweets were filtered and preprocessed to reach a final sample of 22.5M tweets (containing 40.7M sentences and 633M tokens) which were used for training.
## Eval results
The model was evaluated based on downstream Twitter text classification tasks from previous SemEval challenges.
### BibTeX entry and citation info
or
|
[
"# COVID-Twitter-BERT v2",
"## Model description\n\nBERT-large-uncased model, pretrained on a corpus of messages from Twitter about COVID-19. This model is identical to covid-twitter-bert - but trained on more data, resulting in higher downstream performance.\n\nFind more info on our GitHub page.",
"## Intended uses & limitations\n\nThe model can e.g. be used in the 'fill-mask' task (see below). You can also use the model without the MLM/NSP heads and train a classifier with it.",
"#### How to use",
"## Training procedure\nThis model was trained on 97M unique tweets (1.2B training examples) collected between January 12 and July 5, 2020 containing at least one of the keywords \"wuhan\", \"ncov\", \"coronavirus\", \"covid\", or \"sars-cov-2\". These tweets were filtered and preprocessed to reach a final sample of 22.5M tweets (containing 40.7M sentences and 633M tokens) which were used for training.",
"## Eval results\nThe model was evaluated based on downstream Twitter text classification tasks from previous SemEval challenges.",
"### BibTeX entry and citation info\n\n\n\nor"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #Twitter #COVID-19 #en #license-mit #endpoints_compatible #has_space #region-us \n",
"# COVID-Twitter-BERT v2",
"## Model description\n\nBERT-large-uncased model, pretrained on a corpus of messages from Twitter about COVID-19. This model is identical to covid-twitter-bert - but trained on more data, resulting in higher downstream performance.\n\nFind more info on our GitHub page.",
"## Intended uses & limitations\n\nThe model can e.g. be used in the 'fill-mask' task (see below). You can also use the model without the MLM/NSP heads and train a classifier with it.",
"#### How to use",
"## Training procedure\nThis model was trained on 97M unique tweets (1.2B training examples) collected between January 12 and July 5, 2020 containing at least one of the keywords \"wuhan\", \"ncov\", \"coronavirus\", \"covid\", or \"sars-cov-2\". These tweets were filtered and preprocessed to reach a final sample of 22.5M tweets (containing 40.7M sentences and 633M tokens) which were used for training.",
"## Eval results\nThe model was evaluated based on downstream Twitter text classification tasks from previous SemEval challenges.",
"### BibTeX entry and citation info\n\n\n\nor"
] |
null |
transformers
|
# COVID-Twitter-BERT (CT-BERT) v1
:warning: _You may want to use the [v2 model](https://huggingface.co/digitalepidemiologylab/covid-twitter-bert-v2) which was trained on more recent data and yields better performance_ :warning:
BERT-large-uncased model, pretrained on a corpus of messages from Twitter about COVID-19. Find more info on our [GitHub page](https://github.com/digitalepidemiologylab/covid-twitter-bert).
## Overview
This model was trained on 160M tweets collected between January 12 and April 16, 2020 containing at least one of the keywords "wuhan", "ncov", "coronavirus", "covid", or "sars-cov-2". These tweets were filtered and preprocessed to reach a final sample of 22.5M tweets (containing 40.7M sentences and 633M tokens) which were used for training.
This model was evaluated based on downstream classification tasks, but it could be used for any other NLP task which can leverage contextual embeddings.
In order to achieve best results, make sure to use the same text preprocessing as we did for pretraining. This involves replacing user mentions, urls and emojis. You can find a script on our projects [GitHub repo](https://github.com/digitalepidemiologylab/covid-twitter-bert).
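The exact replacement strings live in the preprocessing script in that repo; the sketch below only illustrates the general idea with hypothetical placeholders (emoji handling is omitted):
```python
import re

def naive_preprocess(text: str) -> str:
    """Illustration only: the placeholder tokens used here are hypothetical.

    Use the official preprocessing script from the project's GitHub repo to
    match the pretraining preprocessing exactly (it also handles emojis).
    """
    text = re.sub(r"@\w+", "@user", text)         # replace user mentions
    text = re.sub(r"https?://\S+", "http", text)  # replace urls
    return text

print(naive_preprocess("Stay safe @WHO, more info at https://example.org"))
```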
## Example usage
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("digitalepidemiologylab/covid-twitter-bert")
model = AutoModel.from_pretrained("digitalepidemiologylab/covid-twitter-bert")
```
You can also use the model with the `pipeline` interface:
```python
from transformers import pipeline
import json
pipe = pipeline(task='fill-mask', model='digitalepidemiologylab/covid-twitter-bert-v2')
out = pipe(f"In places with a lot of people, it's a good idea to wear a {pipe.tokenizer.mask_token}")
print(json.dumps(out, indent=4))
[
{
"sequence": "[CLS] in places with a lot of people, it's a good idea to wear a mask [SEP]",
"score": 0.9959408044815063,
"token": 7308,
"token_str": "mask"
},
...
]
```
## References
[1] Martin Müller, Marcel Salathé, Per E. Kummervold. "COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter" arXiv preprint arXiv:2005.07503 (2020).
|
{"language": "en", "license": "mit", "tags": ["Twitter", "COVID-19"], "thumbnail": "https://raw.githubusercontent.com/digitalepidemiologylab/covid-twitter-bert/master/images/COVID-Twitter-BERT_small.png"}
|
digitalepidemiologylab/covid-twitter-bert
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"Twitter",
"COVID-19",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #bert #Twitter #COVID-19 #en #license-mit #endpoints_compatible #region-us
|
# COVID-Twitter-BERT (CT-BERT) v1
:warning: _You may want to use the v2 model which was trained on more recent data and yields better performance_ :warning:
BERT-large-uncased model, pretrained on a corpus of messages from Twitter about COVID-19. Find more info on our GitHub page.
## Overview
This model was trained on 160M tweets collected between January 12 and April 16, 2020 containing at least one of the keywords "wuhan", "ncov", "coronavirus", "covid", or "sars-cov-2". These tweets were filtered and preprocessed to reach a final sample of 22.5M tweets (containing 40.7M sentences and 633M tokens) which were used for training.
This model was evaluated based on downstream classification tasks, but it could be used for any other NLP task which can leverage contextual embeddings.
In order to achieve best results, make sure to use the same text preprocessing as we did for pretraining. This involves replacing user mentions, urls and emojis. You can find a script on our projects GitHub repo.
## Example usage
You can also use the model with the 'pipeline' interface:
## References
[1] Martin Müller, Marcel Salathé, Per E. Kummervold. "COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter" arXiv preprint arXiv:2005.07503 (2020).
|
[
"# COVID-Twitter-BERT (CT-BERT) v1\n\n:warning: _You may want to use the v2 model which was trained on more recent data and yields better performance_ :warning: \n\n\nBERT-large-uncased model, pretrained on a corpus of messages from Twitter about COVID-19. Find more info on our GitHub page.",
"## Overview\nThis model was trained on 160M tweets collected between January 12 and April 16, 2020 containing at least one of the keywords \"wuhan\", \"ncov\", \"coronavirus\", \"covid\", or \"sars-cov-2\". These tweets were filtered and preprocessed to reach a final sample of 22.5M tweets (containing 40.7M sentences and 633M tokens) which were used for training.\n\nThis model was evaluated based on downstream classification tasks, but it could be used for any other NLP task which can leverage contextual embeddings. \n\nIn order to achieve best results, make sure to use the same text preprocessing as we did for pretraining. This involves replacing user mentions, urls and emojis. You can find a script on our projects GitHub repo.",
"## Example usage\n\n\nYou can also use the model with the 'pipeline' interface:",
"## References\n[1] Martin Müller, Marcel Salaté, Per E Kummervold. \"COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter\" arXiv preprint arXiv:2005.07503 (2020)."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #Twitter #COVID-19 #en #license-mit #endpoints_compatible #region-us \n",
"# COVID-Twitter-BERT (CT-BERT) v1\n\n:warning: _You may want to use the v2 model which was trained on more recent data and yields better performance_ :warning: \n\n\nBERT-large-uncased model, pretrained on a corpus of messages from Twitter about COVID-19. Find more info on our GitHub page.",
"## Overview\nThis model was trained on 160M tweets collected between January 12 and April 16, 2020 containing at least one of the keywords \"wuhan\", \"ncov\", \"coronavirus\", \"covid\", or \"sars-cov-2\". These tweets were filtered and preprocessed to reach a final sample of 22.5M tweets (containing 40.7M sentences and 633M tokens) which were used for training.\n\nThis model was evaluated based on downstream classification tasks, but it could be used for any other NLP task which can leverage contextual embeddings. \n\nIn order to achieve best results, make sure to use the same text preprocessing as we did for pretraining. This involves replacing user mentions, urls and emojis. You can find a script on our projects GitHub repo.",
"## Example usage\n\n\nYou can also use the model with the 'pipeline' interface:",
"## References\n[1] Martin Müller, Marcel Salaté, Per E Kummervold. \"COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter\" arXiv preprint arXiv:2005.07503 (2020)."
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-AdventureTime
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2450
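A minimal generation sketch, assuming the fine-tuned checkpoint is available under the repository id shown in this card's metadata:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pyordii/distilgpt2-finetuned-AT")
prompt = "Finn and Jake set out"  # illustrative Adventure Time style prompt
print(generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"])
```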
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 279 | 3.3451 |
| 3.4534 | 2.0 | 558 | 3.2941 |
| 3.4534 | 3.0 | 837 | 3.2740 |
| 3.2435 | 4.0 | 1116 | 3.2617 |
| 3.2435 | 5.0 | 1395 | 3.2556 |
| 3.1729 | 6.0 | 1674 | 3.2490 |
| 3.1729 | 7.0 | 1953 | 3.2475 |
| 3.1262 | 8.0 | 2232 | 3.2467 |
| 3.0972 | 9.0 | 2511 | 3.2448 |
| 3.0972 | 10.0 | 2790 | 3.2450 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilgpt2-finetuned-AT", "results": []}]}
|
pyordii/distilgpt2-finetuned-AT
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
distilgpt2-finetuned-AdventureTime
==================================
This model is a fine-tuned version of distilgpt2 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 3.2450
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
fBERT: A Neural Transformer for Identifying Offensive Content [Accepted at EMNLP 2021]
Authors: Diptanu Sarkar, Marcos Zampieri, Tharindu Ranasinghe and Alexander Ororbia
About:
Transformer-based models such as BERT, ELMo, and XLM-R have achieved state-of-the-art performance across various NLP tasks including the identification of offensive language and hate speech, an important problem in social media. Previous studies have shown that domain-specific fine-tuning or retraining of models before attempting to solve downstream tasks can lead to excellent results in multiple domains. Fine-tuning/retraining complex models to identify offensive language has not been substantially explored before, and we address this gap by proposing fBERT, a bert-base-uncased model retrained on over 1.4 million offensive instances from the SOLID dataset. The shifted fBERT model better incorporates domain-specific offensive language and social media features. The fBERT model achieves better results than BERT and HateBERT on both the OffensEval and HatEval tasks and on the HS & O dataset.
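A minimal sketch of querying the released checkpoint with the fill-mask pipeline (the `[MASK]` token follows the standard bert-base-uncased vocabulary; the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="diptanu/fBERT")
for prediction in fill_mask("That comment was really [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```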
|
{}
|
diptanu/fBERT
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
fBERT: A Neural Transformer for Identifying Offensive Content [Accepted at EMNLP 2021]
Authors: Diptanu Sarkar, Marcos Zampieri, Tharindu Ranasinghe and Alexander Ororbia
About:
Transformer-based models such as BERT, ELMo, and XLM-R have achieved state-of-the-art performance across various NLP tasks, including the identification of offensive language and hate speech, an important problem in social media. Previous studies have shown that domain-specific fine-tuning or retraining of models before attempting to solve downstream tasks can lead to excellent results in multiple domains. Fine-tuning/retraining complex models to identify offensive language has not been substantially explored before, and we address this gap by proposing fBERT, a bert-base-uncased model that has been retrained on over 1.4 million offensive instances from the SOLID dataset. The shifted fBERT model better incorporates domain-specific offensive language and social media features. The fBERT model achieves better results than BERT and HateBERT on both the OffensEval and HatEval tasks as well as on the HS & O dataset.
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# Moe DialoGPT Model
|
{"tags": ["conversational"]}
|
disdamoe/DialoGPT-small-moe
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Moe DialoGPT Model
|
[
"# Moe DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Moe DialoGPT Model"
] |
text-generation
|
transformers
|
# Moe DialoGPT Model
|
{"tags": ["conversational"]}
|
disdamoe/TheGreatManipulator
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Moe DialoGPT Model
|
[
"# Moe DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Moe DialoGPT Model"
] |
text-generation
|
transformers
|
# The Manipulator
|
{"tags": ["conversational"]}
|
disdamoe/TheManipulator
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# The Manipulator
|
[
"# The Manipulator"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# The Manipulator"
] |
null | null |
<a href="https://www.geogebra.org/m/w8uzjttg">.</a>
<a href="https://www.geogebra.org/m/gvn7m78g">.</a>
<a href="https://www.geogebra.org/m/arxecanq">.</a>
<a href="https://www.geogebra.org/m/xb69bvww">.</a>
<a href="https://www.geogebra.org/m/apvepfnd">.</a>
<a href="https://www.geogebra.org/m/evmj8ckk">.</a>
<a href="https://www.geogebra.org/m/qxcxwmhp">.</a>
<a href="https://www.geogebra.org/m/p3cxqh6c">.</a>
<a href="https://www.geogebra.org/m/ggrahbgd">.</a>
<a href="https://www.geogebra.org/m/pnhymrbc">.</a>
<a href="https://www.geogebra.org/m/zjukbtk9">.</a>
<a href="https://www.geogebra.org/m/bbezun8r">.</a>
<a href="https://www.geogebra.org/m/sgwamtru">.</a>
<a href="https://www.geogebra.org/m/fpunkxxp">.</a>
<a href="https://www.geogebra.org/m/acxebrr7">.</a>
<a href="https://jobs.acm.org/jobs/watch-godzilla-vs-kong-2021-full-1818658-cd">.</a>
<a href="https://jobs.acm.org/jobs/123movies-watch-godzilla-vs-kong-online-2021-full-f-r-e-e-1818655-cd">.</a>
<a href="https://jobs.acm.org/jobs/watch-demon-slayer-kimetsu-no-yaiba-mugen-train-2020-f-u-l-l-f-r-e-e-1818661-cd">.</a>
<a href="https://jobs.acm.org/jobs/123movies-watch-zack-snyder-s-justice-league-online-2021-full-f-r-e-e-1818662-cd">.</a>
<a href="https://jobs.acm.org/jobs/hd-watch-godzilla-vs-kong-2021-version-full-hbomax-1818659-cd">.</a>
<a href="https://jobs.acm.org/jobs/123movies-watch-girl-in-the-basement-online-2021-full-f-r-e-e-1818663-cd">.</a>
<a href="https://jobs.acm.org/jobs/watch-godzilla-vs-kong-2021-f-u-l-l-h-d-1818660-cd">.</a>
<a href="https://jobs.acm.org/jobs/123movies-watch-billie-eilish-the-world-s-a-little-blurry-2021-f-u-l-l-f-r-e-e-1818666-cd">.</a>
<a href="https://jobs.acm.org/jobs/123movies-watch-monster-hunter-2020-f-u-l-l-f-r-e-e-1818667-cd">.</a>
<a href="https://jobs.acm.org/jobs/123movies-watch-raya-and-the-last-dragon-2021-f-u-l-l-f-r-e-e-1818669-cd">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-365-days-2020-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-billie-eilish-the-worlds-a-little-blurry-2021-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-cherry-2021-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-coming-2-america-2021-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-demon-slayer-kimetsu-no-yaiba-mugen-train-2020-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-godzilla-vs-kong-2021-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-judas-and-the-black-messiah-2021-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-monster-hunter-2020-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-mortal-kombat-2021-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-raya-and-the-last-dragon-2021-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-tenet-2020-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-the-world-to-come-2021-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-tom-and-jerry-2021-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-willys-wonderland-2021-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-wonder-woman-1984-2020-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-wrong-turn-2021-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-zack-snyders-justice-league-2021-hd-online-full-free-stream-2/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-a-writers-odyssey-2021-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-the-marksman-2021-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-after-we-collided-2020-version-full-online-free/">.</a>
<a href="https://pactforanimals.org/advert/full-watch-godzilla-vs-kong-2021-watch-full/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-online-full-version-123movies/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full/">.</a>
<a href="https://pactforanimals.org/advert/full-watch-godzilla-vs-kong-2021-free/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-2/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-3/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-4/">.</a>
<a href="https://pactforanimals.org/advert/free-watch-godzilla-vs-kong-2021-full/">.</a>
<a href="https://pactforanimals.org/advert/full-watch-123movies-godzilla-vs-kong-2021/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-free-hd/">.</a>
<a href="https://pactforanimals.org/advert/full-watch-godzilla-vs-kong-2021-free-online/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-5/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-online-full-version-hd/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-full-2021-free/">.</a>
<a href="https://pactforanimals.org/advert/free-watch-godzilla-vs-kong-2021-full-2/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-6/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-7/">.</a>
<a href="https://pactforanimals.org/advert/free-download-godzilla-vs-kong-2021-watch-full/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-online/">.</a>
<a href="https://pactforanimals.org/advert/full-watch-godzilla-vs-kong-2021-online/">.</a>
<a href="https://pactforanimals.org/advert/godzilla-vs-kong-2021-google-drive-mp4/">.</a>
<a href="https://pactforanimals.org/advert/google-docs-godzilla-vs-kong-2021-google-drive-full-hd-mp4/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-8/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-9/">.</a>
<a href="https://pactforanimals.org/advert/free-watch-godzilla-vs-kong-2021-full-3/">.</a>
<a href="https://pactforanimals.org/advert/free-watch-godzilla-vs-kong-2021-online/">.</a>
<a href="https://pactforanimals.org/advert/free-watch-godzilla-vs-kong-2021-full-4/">.</a>
<a href="https://pactforanimals.org/advert/free-godzilla-vs-kong-2021-watch-full/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-10/">.</a>
<a href="https://pactforanimals.org/advert/online-watch-godzilla-vs-kong-2021-full/">.</a>
<a href="https://pactforanimals.org/advert/123movies-watch-godzilla-vs-kong-2021-full-online/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-full-11/">.</a>
<a href="https://pactforanimals.org/advert/full-watch-godzilla-vs-kong-2021-free-hd/">.</a>
<a href="https://pactforanimals.org/advert/watch-godzilla-vs-kong-2021-free-online/">.</a>
<a href="https://pactforanimals.org/advert/full-godzilla-vs-kong-2021-watch-online/">.</a>
<a href="https://sites.google.com/view/mortalkombat1/">.</a>
<a href="https://sites.google.com/view/free-watch-mortal-kombat-2021-/">.</a>
<a href="https://sites.google.com/view/watch-mortal-kombat-2021-f-u-l/">.</a>
<a href="https://sites.google.com/view/mortalkombat2/">.</a>
<a href="https://sites.google.com/view/mortalkombat3/">.</a>
<a href="https://sites.google.com/view/mortalkombat5/">.</a>
<a href="https://sites.google.com/view/fullwatchmortalkombat2021-movi/">.</a>
<a href="https://sites.google.com/view/mortalkombat7/">.</a>
<a href="https://sites.google.com/view/mortalkombat8/">.</a>
<a href="https://sites.google.com/view/mortalkombat9/">.</a>
<a href="https://sites.google.com/view/mortalkombat10/">.</a>
<a href="https://sites.google.com/view/watch-mort-tal-kombat/">.</a>
<a href="https://sites.google.com/view/free-watch-mort-tal-kombat/">.</a>
<a href="https://sites.google.com/view/watch-mort-tal-kombatfree-/">.</a>
<a href="https://sites.google.com/view/full-watch-mortal-kombat/">.</a>
<a href="https://sites.google.com/view/watch-mortal-kombat-2021-/">.</a>
<a href="https://sites.google.com/view/watch-free-mortal-kombat-2021/">.</a>
<a href="https://sites.google.com/view/full-watch-mortal-kombat-/">.</a>
<a href="https://sites.google.com/view/watch-mortal-kombat-g-drive/">.</a>
<a href="https://sites.google.com/view/g-docs-mortalkombat-g-drive/">.</a>
<a href="https://sites.google.com/view/mortal-kombat-2021-full-free/">.</a>
<a href="https://sites.google.com/view/mortal-kombat-2021-full-free-o/">.</a>
<a href="https://sites.google.com/view/mortal-kombat-2021-full-free-o/">.</a>
<a href="https://paiza.io/projects/56xFAEq61pSSn8VnKnHO6Q">.</a>
<a href="https://www.posts123.com/post/1450667/mariners-announce-spring-training">.</a>
<a href="https://sites.google.com/view/sfdjgkdfghdkfgjherghkkdfjg/home">.</a>
<a href="https://dskfjshdkjfewhgf.blogspot.com/2021/03/sdkjfhwekjhfjdherjgfdjg.html">.</a>
<a href="https://grahmaulidia.wordpress.com/2021/03/28/mariners-announce-spring-training-roster-moves/">.</a>
<a href="https://4z5v6wq7a.medium.com/a-letter-to-nationals-fans-from-mark-d-lerner-f83a9ea92f89">.</a>
<a href="https://4z5v6wq7a.medium.com/a-letter-to-nationals-fans-from-mark-d-lerner1-b2847091ff9f">.</a>
<a href="https://4z5v6wq7a.medium.com/a-letter-to-nationals-fans-from-mark-d-lerner2-df35041eec3a">.</a>
<a href="https://4z5v6wq7a.medium.com">.</a>
<a href="https://onlinegdb.com/BJaH8WR4O">.</a>
|
{}
|
dispenst/hgfytgfg
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL
<a href="URL">.</a>
<a href="URL
|
[] |
[
"TAGS\n#region-us \n"
] |
automatic-speech-recognition
|
transformers
|
We took `facebook/wav2vec2-large-960h` and fine-tuned it on 1,400 audio clips (around 10-15 seconds each) from various cryptocurrency-related podcasts. To label the data, we downloaded cryptocurrency podcasts from YouTube along with their subtitle data and split the clips up by sentence. We then compared the YouTube transcriptions against the `facebook/wav2vec2-large-960h` output to correct many of their mistakes. We can probably achieve better results with more data cleanup.
On our data we achieved a WER of 13.1%, while `facebook/wav2vec2-large-960h` only reached a WER of 27%.
## Usage
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained("distractedm1nd/wav2vec-en-finetuned-on-cryptocurrency")
model = Wav2Vec2ForCTC.from_pretrained("distractedm1nd/wav2vec-en-finetuned-on-cryptocurrency")
filename = "INSERT_FILENAME"
audio, sampling_rate = sf.read(filename)
input_values = processor(audio, return_tensors="pt", padding="longest", sampling_rate=sampling_rate).input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)  # decode with the processor (no separate tokenizer object is defined above)
print(transcription)
```
|
{"language": "en", "license": "mit", "tags": ["audio", "automatic-speech-recognition"], "metrics": ["wer"]}
|
distractedm1nd/wav2vec-en-finetuned-on-cryptocurrency
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #en #license-mit #endpoints_compatible #region-us
|
We took 'facebook/wav2vec2-large-960h' and fine-tuned it on 1,400 audio clips (around 10-15 seconds each) from various cryptocurrency-related podcasts. To label the data, we downloaded cryptocurrency podcasts from YouTube along with their subtitle data and split the clips up by sentence. We then compared the YouTube transcriptions against the 'facebook/wav2vec2-large-960h' output to correct many of their mistakes. We can probably achieve better results with more data cleanup.
On our data we achieved a WER of 13.1%, while 'facebook/wav2vec2-large-960h' only reached a WER of 27%.
## Usage
|
[
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #en #license-mit #endpoints_compatible #region-us \n",
"## Usage"
] |
text-generation
| null |
# Peter from Your Boyfriend Game.
|
{"tags": ["conversational"]}
|
divi/Peterbot
| null |
[
"conversational",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#conversational #region-us
|
# Peter from Your Boyfriend Game.
|
[
"# Peter from Your Boyfriend Game."
] |
[
"TAGS\n#conversational #region-us \n",
"# Peter from Your Boyfriend Game."
] |
text-classification
|
transformers
|
# diwank/dyda-deberta-pair
Deberta-based Daily Dialog style dialog-act annotations classification model. It takes two sentences as inputs (one previous and one current of a dialog). The previous sentence can be an empty string if this is the first utterance of a speaker in a dialog. Outputs one of five labels (exactly as in the [daily-dialog dataset](https://huggingface.co/datasets/daily_dialog)): *__dummy__ (0), inform (1), question (2), directive (3), commissive (4)*
## Usage
```python
from simpletransformers.classification import (
ClassificationModel, ClassificationArgs
)
model = ClassificationModel("deberta", "diwank/dyda-deberta-pair")
convert_to_label = lambda n: ["__dummy__ (0), inform (1), question (2), directive (3), commissive (4)".split(', ')[i] for i in n]
predictions, raw_outputs = model.predict([["Say what is the meaning of life?", "I dont know"]])
convert_to_label(predictions) # inform (1)
```
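The description above notes that the previous sentence can be an empty string for the first utterance of a speaker; a small hedged variation of the same example covers that case (the printed label is not verified here):

```python
from simpletransformers.classification import ClassificationModel

model = ClassificationModel("deberta", "diwank/dyda-deberta-pair")
convert_to_label = lambda n: ["__dummy__ (0), inform (1), question (2), directive (3), commissive (4)".split(', ')[i] for i in n]

# First utterance of a speaker: pass an empty string as the previous sentence.
predictions, raw_outputs = model.predict([["", "What time does the store open?"]])
print(convert_to_label(predictions))
```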
|
{"license": "mit"}
|
diwank/dyda-deberta-pair
| null |
[
"transformers",
"pytorch",
"tf",
"deberta",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #deberta #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# diwank/dyda-deberta-pair
Deberta-based Daily Dialog style dialog-act annotations classification model. It takes two sentences as inputs (one previous and one current of a dialog). The previous sentence can be an empty string if this is the first utterance of a speaker in a dialog. Outputs one of five labels (exactly as in the daily-dialog dataset): *__dummy__ (0), inform (1), question (2), directive (3), commissive (4)*
## Usage
|
[
"# diwank/dyda-deberta-pair\r\n\r\nDeberta-based Daily Dialog style dialog-act annotations classification model. It takes two sentences as inputs (one previous and one current of a dialog). The previous sentence can be an empty string if this is the first utterance of a speaker in a dialog. Outputs one of four labels (exactly as in the daily-dialog dataset ): *__dummy__ (0), inform (1), question (2), directive (3), commissive (4)*",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #tf #deberta #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# diwank/dyda-deberta-pair\r\n\r\nDeberta-based Daily Dialog style dialog-act annotations classification model. It takes two sentences as inputs (one previous and one current of a dialog). The previous sentence can be an empty string if this is the first utterance of a speaker in a dialog. Outputs one of four labels (exactly as in the daily-dialog dataset ): *__dummy__ (0), inform (1), question (2), directive (3), commissive (4)*",
"## Usage"
] |
text-classification
|
transformers
|
# maptask-deberta-pair
Deberta-based Daily MapTask style dialog-act annotations classification model
## Example
```python
from simpletransformers.classification import (
ClassificationModel, ClassificationArgs
)
model = ClassificationModel("deberta", "diwank/maptask-deberta-pair")
predictions, raw_outputs = model.predict([["Say what is the meaning of life?", "I dont know"]])
convert_to_label = lambda n: ["acknowledge (0), align (1), check (2), clarify (3), explain (4), instruct (5), query_w (6), query_yn (7), ready (8), reply_n (9), reply_w (10), reply_y (11)".split(', ')[i] for i in n]
convert_to_label(predictions) # reply_n (9)
```
|
{"license": "mit"}
|
diwank/maptask-deberta-pair
| null |
[
"transformers",
"pytorch",
"tf",
"deberta",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #deberta #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# maptask-deberta-pair
Deberta-based Daily MapTask style dialog-act annotations classification model
## Example
|
[
"# maptask-deberta-pair\r\nDeberta-based Daily MapTask style dialog-act annotations classification model",
"## Example"
] |
[
"TAGS\n#transformers #pytorch #tf #deberta #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# maptask-deberta-pair\r\nDeberta-based Daily MapTask style dialog-act annotations classification model",
"## Example"
] |
text-classification
|
transformers
|
# diwank/silicone-deberta-pair
`deberta-base`-based dialog-act classifier. Trained on the `balanced` variant of the [silicone-merged](https://huggingface.co/datasets/diwank/silicone-merged) dataset: a simplified, merged dialog-act dataset built from the datasets in the [silicone](https://huggingface.co/datasets/silicone) collection.
Takes two sentences as inputs (one previous and one current utterance of a dialog). The previous sentence can be an empty string if this is the first utterance of a speaker in a dialog. **Outputs one of 11 labels**:
```python
(0, 'acknowledge')
(1, 'answer')
(2, 'backchannel')
(3, 'reply_yes')
(4, 'exclaim')
(5, 'say')
(6, 'reply_no')
(7, 'hold')
(8, 'ask')
(9, 'intent')
(10, 'ask_yes_no')
```
## Example:
```python
from simpletransformers.classification import (
ClassificationModel, ClassificationArgs
)
model = ClassificationModel("deberta", "diwank/silicone-deberta-pair")
convert_to_label = lambda n: [
['acknowledge',
'answer',
'backchannel',
'reply_yes',
'exclaim',
'say',
'reply_no',
'hold',
'ask',
'intent',
'ask_yes_no'
][i] for i in n
]
predictions, raw_outputs = model.predict([["Say what is the meaning of life?", "I dont know"]])
convert_to_label(predictions) # answer
```
## Report from W&B
https://wandb.ai/diwank/da-silicone-combined/reports/silicone-deberta-pair--VmlldzoxNTczNjE5?accessToken=yj1jz4c365z0y5b3olgzye7qgsl7qv9lxvqhmfhtb6300hql6veqa5xiq1skn8ys
|
{"license": "mit"}
|
diwank/silicone-deberta-pair
| null |
[
"transformers",
"pytorch",
"tf",
"deberta",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #deberta #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# diwank/silicone-deberta-pair
'deberta-base'-based dialog-act classifier. Trained on the 'balanced' variant of the silicone-merged dataset: a simplified, merged dialog-act dataset built from the datasets in the silicone collection.
Takes two sentences as inputs (one previous and one current utterance of a dialog). The previous sentence can be an empty string if this is the first utterance of a speaker in a dialog. Outputs one of 11 labels:
## Example:
## Report from W&B
URL
|
[
"# diwank/silicone-deberta-pair\r\n\r\n'deberta-base'-based dialog acts classifier. Trained on the 'balanced' variant of the silicone-merged dataset: a simplified merged dialog act data from datasets in the silicone collection. \r\n\r\nTakes two sentences as inputs (one previous and one current utterance of a dialog). The previous sentence can be an empty string if this is the first utterance of a speaker in a dialog. Outputs one of 11 labels:",
"## Example:",
"## Report from W&B\r\n\r\nURL"
] |
[
"TAGS\n#transformers #pytorch #tf #deberta #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# diwank/silicone-deberta-pair\r\n\r\n'deberta-base'-based dialog acts classifier. Trained on the 'balanced' variant of the silicone-merged dataset: a simplified merged dialog act data from datasets in the silicone collection. \r\n\r\nTakes two sentences as inputs (one previous and one current utterance of a dialog). The previous sentence can be an empty string if this is the first utterance of a speaker in a dialog. Outputs one of 11 labels:",
"## Example:",
"## Report from W&B\r\n\r\nURL"
] |
null |
transformers
|
Slavic BERT (Bulgarian/Czech/Polish/Russian, cased, 12-layer, 768-hidden, 12-heads) from https://github.com/deepmipt/Slavic-BERT-NER; original checkpoint: http://files.deeppavlov.ai/deeppavlov_data/bg_cs_pl_ru_cased_L-12_H-768_A-12.tar.gz
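Since no usage example is provided, a minimal hedged loading sketch might look as follows (this assumes the repository exposes a standard converted BERT config and vocabulary, which is not verified here):

```python
from transformers import BertModel, BertTokenizer

# Hedged sketch: assumes a standard converted BERT checkpoint with config and vocab.
repo_id = "djstrong/bg_cs_pl_ru_cased_L-12_H-768_A-12"
tokenizer = BertTokenizer.from_pretrained(repo_id)
model = BertModel.from_pretrained(repo_id)

inputs = tokenizer("Praha je hlavní město České republiky.", return_tensors="pt")
print(model(**inputs).last_hidden_state.shape)
```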
|
{}
|
djstrong/bg_cs_pl_ru_cased_L-12_H-768_A-12
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #endpoints_compatible #region-us
|
Slavic BERT from URL URL
|
[] |
[
"TAGS\n#transformers #pytorch #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
dk16gaming/DialoGPT-small-HarryPotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[
"# Harry Potter DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
text-classification
|
transformers
|
### Bert-News
|
{}
|
dkhara/bert-news
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
### Bert-News
|
[
"### Bert-News"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"### Bert-News"
] |
null |
transformers
|
# Polbert - Polish BERT
Polish version of BERT language model is here! It is now available in two variants: cased and uncased, both can be downloaded and used via HuggingFace transformers library. I recommend using the cased model, more info on the differences and benchmark results below.

## Cased and uncased variants
* I initially trained the uncased model, the corpus and training details are referenced below. Here are some issues I found after I published the uncased model:
* Some Polish characters and accents are not tokenized correctly through the BERT tokenizer when applying lowercase. This doesn't impact sequence classification much, but may influence token classification tasks significantly.
* I noticed a lot of duplicates in the Open Subtitles dataset, which dominates the training corpus.
* I didn't use Whole Word Masking.
* The cased model improves on the uncased model in the following ways:
* All Polish characters and accents should now be tokenized correctly.
* I removed duplicates from Open Subtitles dataset. The corpus is smaller, but more balanced now.
* The model is trained with Whole Word Masking.
## Pre-training corpora
Below is the list of corpora used along with the output of `wc` command (counting lines, words and characters). These corpora were divided into sentences with srxsegmenter (see references), concatenated and tokenized with HuggingFace BERT Tokenizer.
### Uncased
| Tables | Lines | Words | Characters |
| ------------- |--------------:| -----:| -----:|
| [Polish subset of Open Subtitles](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 236635408| 1431199601 | 7628097730 |
| [Polish subset of ParaCrawl](http://opus.nlpl.eu/ParaCrawl.php) | 8470950 | 176670885 | 1163505275 |
| [Polish Parliamentary Corpus](http://clip.ipipan.waw.pl/PPC) | 9799859 | 121154785 | 938896963 |
| [Polish Wikipedia - Feb 2020](https://dumps.wikimedia.org/plwiki/latest/plwiki-latest-pages-articles.xml.bz2) | 8014206 | 132067986 | 1015849191 |
| Total | 262920423 | 1861093257 | 10746349159 |
### Cased
| Tables | Lines | Words | Characters |
| ------------- |--------------:| -----:| -----:|
| [Polish subset of Open Subtitles (Deduplicated) ](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 41998942| 213590656 | 1424873235 |
| [Polish subset of ParaCrawl](http://opus.nlpl.eu/ParaCrawl.php) | 8470950 | 176670885 | 1163505275 |
| [Polish Parliamentary Corpus](http://clip.ipipan.waw.pl/PPC) | 9799859 | 121154785 | 938896963 |
| [Polish Wikipedia - Feb 2020](https://dumps.wikimedia.org/plwiki/latest/plwiki-latest-pages-articles.xml.bz2) | 8014206 | 132067986 | 1015849191 |
| Total | 68283960 | 646479197 | 4543124667 |
## Pre-training details
### Uncased
* Polbert was trained with code provided in Google BERT's github repository (https://github.com/google-research/bert)
* Currently released model follows bert-base-uncased model architecture (12-layer, 768-hidden, 12-heads, 110M parameters)
* Training set-up: in total 1 million training steps:
* 100.000 steps - 128 sequence length, batch size 512, learning rate 1e-4 (10.000 steps warmup)
* 800.000 steps - 128 sequence length, batch size 512, learning rate 5e-5
* 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5
* The model was trained on a single Google Cloud TPU v3-8
### Cased
* Same approach as uncased model, with the following differences:
* Whole Word Masking
* Training set-up:
* 100.000 steps - 128 sequence length, batch size 2048, learning rate 1e-4 (10.000 steps warmup)
* 100.000 steps - 128 sequence length, batch size 2048, learning rate 5e-5
* 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5
## Usage
Polbert is released via [HuggingFace Transformers library](https://huggingface.co/transformers/).
For an example of using Polbert as a language model, see [this notebook](/LM_testing.ipynb).
### Uncased
```python
from transformers import *
model = BertForMaskedLM.from_pretrained("dkleczek/bert-base-polish-uncased-v1")
tokenizer = BertTokenizer.from_pretrained("dkleczek/bert-base-polish-uncased-v1")
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"Adam Mickiewicz wielkim polskim {nlp.tokenizer.mask_token} był."):
print(pred)
# Output:
# {'sequence': '[CLS] adam mickiewicz wielkim polskim poeta był. [SEP]', 'score': 0.47196975350379944, 'token': 26596}
# {'sequence': '[CLS] adam mickiewicz wielkim polskim bohaterem był. [SEP]', 'score': 0.09127858281135559, 'token': 10953}
# {'sequence': '[CLS] adam mickiewicz wielkim polskim człowiekiem był. [SEP]', 'score': 0.0647173821926117, 'token': 5182}
# {'sequence': '[CLS] adam mickiewicz wielkim polskim pisarzem był. [SEP]', 'score': 0.05232388526201248, 'token': 24293}
# {'sequence': '[CLS] adam mickiewicz wielkim polskim politykiem był. [SEP]', 'score': 0.04554257541894913, 'token': 44095}
```
### Cased
```python
model = BertForMaskedLM.from_pretrained("dkleczek/bert-base-polish-cased-v1")
tokenizer = BertTokenizer.from_pretrained("dkleczek/bert-base-polish-cased-v1")
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"Adam Mickiewicz wielkim polskim {nlp.tokenizer.mask_token} był."):
print(pred)
# Output:
# {'sequence': '[CLS] Adam Mickiewicz wielkim polskim pisarzem był. [SEP]', 'score': 0.5391148328781128, 'token': 37120}
# {'sequence': '[CLS] Adam Mickiewicz wielkim polskim człowiekiem był. [SEP]', 'score': 0.11683262139558792, 'token': 6810}
# {'sequence': '[CLS] Adam Mickiewicz wielkim polskim bohaterem był. [SEP]', 'score': 0.06021466106176376, 'token': 17709}
# {'sequence': '[CLS] Adam Mickiewicz wielkim polskim mistrzem był. [SEP]', 'score': 0.051870670169591904, 'token': 14652}
# {'sequence': '[CLS] Adam Mickiewicz wielkim polskim artystą był. [SEP]', 'score': 0.031787533313035965, 'token': 35680}
```
See the next section for an example usage of Polbert in downstream tasks.
## Evaluation
Thanks to Allegro, we now have the [KLEJ benchmark](https://klejbenchmark.com/leaderboard/), a set of nine evaluation tasks for the Polish language understanding. The following results are achieved by running standard set of evaluation scripts (no tricks!) utilizing both cased and uncased variants of Polbert.
| Model | Average | NKJP-NER | CDSC-E | CDSC-R | CBD | PolEmo2.0-IN | PolEmo2.0-OUT | DYK | PSC | AR |
| ------------- |--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| Polbert cased | 81.7 | 93.6 | 93.4 | 93.8 | 52.7 | 87.4 | 71.1 | 59.1 | 98.6 | 85.2 |
| Polbert uncased | 81.4 | 90.1 | 93.9 | 93.5 | 55.0 | 88.1 | 68.8 | 59.4 | 98.8 | 85.4 |
Note how the uncased model performs better than cased on some tasks? My guess is that this is because of the oversampling of the Open Subtitles dataset and its similarity to the data in some of these tasks. All these benchmark tasks are sequence classification, so the relative strength of the cased model is not so visible here.
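As a hedged sketch of such downstream use (these are not the original KLEJ evaluation scripts), Polbert can be loaded for sequence classification like any other BERT checkpoint; the classification head below is freshly initialized and still needs fine-tuning on a labeled Polish dataset:

```python
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("dkleczek/bert-base-polish-cased-v1")
# num_labels is task-dependent; 2 is just an illustrative choice.
model = BertForSequenceClassification.from_pretrained(
    "dkleczek/bert-base-polish-cased-v1", num_labels=2
)

inputs = tokenizer("Ten film był świetny!", return_tensors="pt")
logits = model(**inputs).logits  # untrained head, so these logits are not meaningful yet
```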
## Bias
The data used to train the model is biased. It may reflect stereotypes related to gender, ethnicity etc. Please be careful when using the model for downstream task to consider these biases and mitigate them.
## Acknowledgements
* I'd like to express my gratitude to Google [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) for providing the free TPU credits - thank you!
* Also appreciate the help from Timo Möller from [deepset](https://deepset.ai) for sharing tips and scripts based on their experience training German BERT model.
* Big thanks to Allegro for releasing KLEJ Benchmark and specifically to Piotr Rybak for help with the evaluation and pointing out some issues with the tokenization.
* Finally, thanks to Rachel Thomas, Jeremy Howard and Sylvain Gugger from [fastai](https://www.fast.ai) for their NLP and Deep Learning courses!
## Author
Darek Kłeczek - contact me on Twitter [@dk21](https://twitter.com/dk21)
## References
* https://github.com/google-research/bert
* https://github.com/narusemotoki/srx_segmenter
* SRX rules file for sentence splitting in Polish, written by Marcin Miłkowski: https://raw.githubusercontent.com/languagetool-org/languagetool/master/languagetool-core/src/main/resources/org/languagetool/resource/segment.srx
* [KLEJ benchmark](https://klejbenchmark.com/leaderboard/)
|
{"language": "pl", "thumbnail": "https://raw.githubusercontent.com/kldarek/polbert/master/img/polbert.png"}
|
dkleczek/bert-base-polish-cased-v1
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"pretraining",
"pl",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"pl"
] |
TAGS
#transformers #pytorch #jax #bert #pretraining #pl #endpoints_compatible #has_space #region-us
|
Polbert - Polish BERT
=====================
Polish version of BERT language model is here! It is now available in two variants: cased and uncased, both can be downloaded and used via HuggingFace transformers library. I recommend using the cased model, more info on the differences and benchmark results below.
!PolBERT image
Cased and uncased variants
--------------------------
* I initially trained the uncased model, the corpus and training details are referenced below. Here are some issues I found after I published the uncased model:
+ Some Polish characters and accents are not tokenized correctly through the BERT tokenizer when applying lowercase. This doesn't impact sequence classification much, but may influence token classification tasks significantly.
+ I noticed a lot of duplicates in the Open Subtitles dataset, which dominates the training corpus.
+ I didn't use Whole Word Masking.
* The cased model improves on the uncased model in the following ways:
+ All Polish characters and accents should now be tokenized correctly.
+ I removed duplicates from Open Subtitles dataset. The corpus is smaller, but more balanced now.
+ The model is trained with Whole Word Masking.
Pre-training corpora
--------------------
Below is the list of corpora used along with the output of 'wc' command (counting lines, words and characters). These corpora were divided into sentences with srxsegmenter (see references), concatenated and tokenized with HuggingFace BERT Tokenizer.
### Uncased
### Cased
Pre-training details
--------------------
### Uncased
* Polbert was trained with code provided in Google BERT's github repository (URL
* Currently released model follows bert-base-uncased model architecture (12-layer, 768-hidden, 12-heads, 110M parameters)
* Training set-up: in total 1 million training steps:
+ 100.000 steps - 128 sequence length, batch size 512, learning rate 1e-4 (10.000 steps warmup)
+ 800.000 steps - 128 sequence length, batch size 512, learning rate 5e-5
+ 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5
* The model was trained on a single Google Cloud TPU v3-8
### Cased
* Same approach as uncased model, with the following differences:
+ Whole Word Masking
* Training set-up:
+ 100.000 steps - 128 sequence length, batch size 2048, learning rate 1e-4 (10.000 steps warmup)
+ 100.000 steps - 128 sequence length, batch size 2048, learning rate 5e-5
+ 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5
Usage
-----
Polbert is released via HuggingFace Transformers library.
For an example of using Polbert as a language model, see this notebook.
### Uncased
### Cased
See the next section for an example usage of Polbert in downstream tasks.
Evaluation
----------
Thanks to Allegro, we now have the KLEJ benchmark, a set of nine evaluation tasks for the Polish language understanding. The following results are achieved by running standard set of evaluation scripts (no tricks!) utilizing both cased and uncased variants of Polbert.
Note how the uncased model performs better than cased on some tasks? My guess is that this is because of the oversampling of the Open Subtitles dataset and its similarity to the data in some of these tasks. All these benchmark tasks are sequence classification, so the relative strength of the cased model is not so visible here.
Bias
----
The data used to train the model is biased. It may reflect stereotypes related to gender, ethnicity etc. Please be careful when using the model for downstream task to consider these biases and mitigate them.
Acknowledgements
----------------
* I'd like to express my gratitude to Google TensorFlow Research Cloud (TFRC) for providing the free TPU credits - thank you!
* Also appreciate the help from Timo Möller from deepset for sharing tips and scripts based on their experience training German BERT model.
* Big thanks to Allegro for releasing KLEJ Benchmark and specifically to Piotr Rybak for help with the evaluation and pointing out some issues with the tokenization.
* Finally, thanks to Rachel Thomas, Jeremy Howard and Sylvain Gugger from fastai for their NLP and Deep Learning courses!
Author
------
Darek Kłeczek - contact me on Twitter @dk21
References
----------
* URL
* URL
* SRX rules file for sentence splitting in Polish, written by Marcin Miłkowski: URL
* KLEJ benchmark
|
[
"### Uncased",
"### Cased\n\n\n\nPre-training details\n--------------------",
"### Uncased\n\n\n* Polbert was trained with code provided in Google BERT's github repository (URL\n* Currently released model follows bert-base-uncased model architecture (12-layer, 768-hidden, 12-heads, 110M parameters)\n* Training set-up: in total 1 million training steps:\n\t+ 100.000 steps - 128 sequence length, batch size 512, learning rate 1e-4 (10.000 steps warmup)\n\t+ 800.000 steps - 128 sequence length, batch size 512, learning rate 5e-5\n\t+ 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5\n* The model was trained on a single Google Cloud TPU v3-8",
"### Cased\n\n\n* Same approach as uncased model, with the following differences:\n\t+ Whole Word Masking\n* Training set-up:\n\t+ 100.000 steps - 128 sequence length, batch size 2048, learning rate 1e-4 (10.000 steps warmup)\n\t+ 100.000 steps - 128 sequence length, batch size 2048, learning rate 5e-5\n\t+ 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5\n\n\nUsage\n-----\n\n\nPolbert is released via HuggingFace Transformers library.\n\n\nFor an example use as language model, see this notebook file.",
"### Uncased",
"### Cased\n\n\nSee the next section for an example usage of Polbert in downstream tasks.\n\n\nEvaluation\n----------\n\n\nThanks to Allegro, we now have the KLEJ benchmark, a set of nine evaluation tasks for the Polish language understanding. The following results are achieved by running standard set of evaluation scripts (no tricks!) utilizing both cased and uncased variants of Polbert.\n\n\n\nNote how the uncased model performs better than cased on some tasks? My guess this is because of the oversampling of Open Subtitles dataset and its similarity to data in some of these tasks. All these benchmark tasks are sequence classification, so the relative strength of the cased model is not so visible here.\n\n\nBias\n----\n\n\nThe data used to train the model is biased. It may reflect stereotypes related to gender, ethnicity etc. Please be careful when using the model for downstream task to consider these biases and mitigate them.\n\n\nAcknowledgements\n----------------\n\n\n* I'd like to express my gratitude to Google TensorFlow Research Cloud (TFRC) for providing the free TPU credits - thank you!\n* Also appreciate the help from Timo Möller from deepset for sharing tips and scripts based on their experience training German BERT model.\n* Big thanks to Allegro for releasing KLEJ Benchmark and specifically to Piotr Rybak for help with the evaluation and pointing out some issues with the tokenization.\n* Finally, thanks to Rachel Thomas, Jeremy Howard and Sylvain Gugger from fastai for their NLP and Deep Learning courses!\n\n\nAuthor\n------\n\n\nDarek Kłeczek - contact me on Twitter @dk21\n\n\nReferences\n----------\n\n\n* URL\n* URL\n* SRX rules file for sentence splitting in Polish, written by Marcin Miłkowski: URL\n* KLEJ benchmark"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #pretraining #pl #endpoints_compatible #has_space #region-us \n",
"### Uncased",
"### Cased\n\n\n\nPre-training details\n--------------------",
"### Uncased\n\n\n* Polbert was trained with code provided in Google BERT's github repository (URL\n* Currently released model follows bert-base-uncased model architecture (12-layer, 768-hidden, 12-heads, 110M parameters)\n* Training set-up: in total 1 million training steps:\n\t+ 100.000 steps - 128 sequence length, batch size 512, learning rate 1e-4 (10.000 steps warmup)\n\t+ 800.000 steps - 128 sequence length, batch size 512, learning rate 5e-5\n\t+ 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5\n* The model was trained on a single Google Cloud TPU v3-8",
"### Cased\n\n\n* Same approach as uncased model, with the following differences:\n\t+ Whole Word Masking\n* Training set-up:\n\t+ 100.000 steps - 128 sequence length, batch size 2048, learning rate 1e-4 (10.000 steps warmup)\n\t+ 100.000 steps - 128 sequence length, batch size 2048, learning rate 5e-5\n\t+ 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5\n\n\nUsage\n-----\n\n\nPolbert is released via HuggingFace Transformers library.\n\n\nFor an example use as language model, see this notebook file.",
"### Uncased",
"### Cased\n\n\nSee the next section for an example usage of Polbert in downstream tasks.\n\n\nEvaluation\n----------\n\n\nThanks to Allegro, we now have the KLEJ benchmark, a set of nine evaluation tasks for the Polish language understanding. The following results are achieved by running standard set of evaluation scripts (no tricks!) utilizing both cased and uncased variants of Polbert.\n\n\n\nNote how the uncased model performs better than cased on some tasks? My guess this is because of the oversampling of Open Subtitles dataset and its similarity to data in some of these tasks. All these benchmark tasks are sequence classification, so the relative strength of the cased model is not so visible here.\n\n\nBias\n----\n\n\nThe data used to train the model is biased. It may reflect stereotypes related to gender, ethnicity etc. Please be careful when using the model for downstream task to consider these biases and mitigate them.\n\n\nAcknowledgements\n----------------\n\n\n* I'd like to express my gratitude to Google TensorFlow Research Cloud (TFRC) for providing the free TPU credits - thank you!\n* Also appreciate the help from Timo Möller from deepset for sharing tips and scripts based on their experience training German BERT model.\n* Big thanks to Allegro for releasing KLEJ Benchmark and specifically to Piotr Rybak for help with the evaluation and pointing out some issues with the tokenization.\n* Finally, thanks to Rachel Thomas, Jeremy Howard and Sylvain Gugger from fastai for their NLP and Deep Learning courses!\n\n\nAuthor\n------\n\n\nDarek Kłeczek - contact me on Twitter @dk21\n\n\nReferences\n----------\n\n\n* URL\n* URL\n* SRX rules file for sentence splitting in Polish, written by Marcin Miłkowski: URL\n* KLEJ benchmark"
] |
fill-mask
|
transformers
|
# Polbert - Polish BERT
Polish version of BERT language model is here! It is now available in two variants: cased and uncased, both can be downloaded and used via HuggingFace transformers library. I recommend using the cased model, more info on the differences and benchmark results below.

## Cased and uncased variants
* I initially trained the uncased model, the corpus and training details are referenced below. Here are some issues I found after I published the uncased model:
* Some Polish characters and accents are not tokenized correctly through the BERT tokenizer when applying lowercase. This doesn't impact sequence classification much, but may influence token classification tasks significantly.
* I noticed a lot of duplicates in the Open Subtitles dataset, which dominates the training corpus.
* I didn't use Whole Word Masking.
* The cased model improves on the uncased model in the following ways:
* All Polish characters and accents should now be tokenized correctly.
* I removed duplicates from Open Subtitles dataset. The corpus is smaller, but more balanced now.
* The model is trained with Whole Word Masking.
## Pre-training corpora
Below is the list of corpora used along with the output of `wc` command (counting lines, words and characters). These corpora were divided into sentences with srxsegmenter (see references), concatenated and tokenized with HuggingFace BERT Tokenizer.
### Uncased
| Tables | Lines | Words | Characters |
| ------------- |--------------:| -----:| -----:|
| [Polish subset of Open Subtitles](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 236635408| 1431199601 | 7628097730 |
| [Polish subset of ParaCrawl](http://opus.nlpl.eu/ParaCrawl.php) | 8470950 | 176670885 | 1163505275 |
| [Polish Parliamentary Corpus](http://clip.ipipan.waw.pl/PPC) | 9799859 | 121154785 | 938896963 |
| [Polish Wikipedia - Feb 2020](https://dumps.wikimedia.org/plwiki/latest/plwiki-latest-pages-articles.xml.bz2) | 8014206 | 132067986 | 1015849191 |
| Total | 262920423 | 1861093257 | 10746349159 |
### Cased
| Tables | Lines | Words | Characters |
| ------------- |--------------:| -----:| -----:|
| [Polish subset of Open Subtitles (Deduplicated) ](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 41998942| 213590656 | 1424873235 |
| [Polish subset of ParaCrawl](http://opus.nlpl.eu/ParaCrawl.php) | 8470950 | 176670885 | 1163505275 |
| [Polish Parliamentary Corpus](http://clip.ipipan.waw.pl/PPC) | 9799859 | 121154785 | 938896963 |
| [Polish Wikipedia - Feb 2020](https://dumps.wikimedia.org/plwiki/latest/plwiki-latest-pages-articles.xml.bz2) | 8014206 | 132067986 | 1015849191 |
| Total | 68283960 | 646479197 | 4543124667 |
## Pre-training details
### Uncased
* Polbert was trained with code provided in Google BERT's github repository (https://github.com/google-research/bert)
* Currently released model follows bert-base-uncased model architecture (12-layer, 768-hidden, 12-heads, 110M parameters)
* Training set-up: in total 1 million training steps:
* 100.000 steps - 128 sequence length, batch size 512, learning rate 1e-4 (10.000 steps warmup)
* 800.000 steps - 128 sequence length, batch size 512, learning rate 5e-5
* 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5
* The model was trained on a single Google Cloud TPU v3-8
### Cased
* Same approach as uncased model, with the following differences:
* Whole Word Masking
* Training set-up:
* 100.000 steps - 128 sequence length, batch size 2048, learning rate 1e-4 (10.000 steps warmup)
* 100.000 steps - 128 sequence length, batch size 2048, learning rate 5e-5
* 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5
## Usage
Polbert is released via [HuggingFace Transformers library](https://huggingface.co/transformers/).
For an example of using Polbert as a language model, see [this notebook](/LM_testing.ipynb).
### Uncased
```python
from transformers import *
model = BertForMaskedLM.from_pretrained("dkleczek/bert-base-polish-uncased-v1")
tokenizer = BertTokenizer.from_pretrained("dkleczek/bert-base-polish-uncased-v1")
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"Adam Mickiewicz wielkim polskim {nlp.tokenizer.mask_token} był."):
print(pred)
# Output:
# {'sequence': '[CLS] adam mickiewicz wielkim polskim poeta był. [SEP]', 'score': 0.47196975350379944, 'token': 26596}
# {'sequence': '[CLS] adam mickiewicz wielkim polskim bohaterem był. [SEP]', 'score': 0.09127858281135559, 'token': 10953}
# {'sequence': '[CLS] adam mickiewicz wielkim polskim człowiekiem był. [SEP]', 'score': 0.0647173821926117, 'token': 5182}
# {'sequence': '[CLS] adam mickiewicz wielkim polskim pisarzem był. [SEP]', 'score': 0.05232388526201248, 'token': 24293}
# {'sequence': '[CLS] adam mickiewicz wielkim polskim politykiem był. [SEP]', 'score': 0.04554257541894913, 'token': 44095}
```
### Cased
```python
model = BertForMaskedLM.from_pretrained("dkleczek/bert-base-polish-cased-v1")
tokenizer = BertTokenizer.from_pretrained("dkleczek/bert-base-polish-cased-v1")
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"Adam Mickiewicz wielkim polskim {nlp.tokenizer.mask_token} był."):
print(pred)
# Output:
# {'sequence': '[CLS] Adam Mickiewicz wielkim polskim pisarzem był. [SEP]', 'score': 0.5391148328781128, 'token': 37120}
# {'sequence': '[CLS] Adam Mickiewicz wielkim polskim człowiekiem był. [SEP]', 'score': 0.11683262139558792, 'token': 6810}
# {'sequence': '[CLS] Adam Mickiewicz wielkim polskim bohaterem był. [SEP]', 'score': 0.06021466106176376, 'token': 17709}
# {'sequence': '[CLS] Adam Mickiewicz wielkim polskim mistrzem był. [SEP]', 'score': 0.051870670169591904, 'token': 14652}
# {'sequence': '[CLS] Adam Mickiewicz wielkim polskim artystą był. [SEP]', 'score': 0.031787533313035965, 'token': 35680}
```
See the next section for an example usage of Polbert in downstream tasks.
## Evaluation
Thanks to Allegro, we now have the [KLEJ benchmark](https://klejbenchmark.com/leaderboard/), a set of nine evaluation tasks for the Polish language understanding. The following results are achieved by running standard set of evaluation scripts (no tricks!) utilizing both cased and uncased variants of Polbert.
| Model | Average | NKJP-NER | CDSC-E | CDSC-R | CBD | PolEmo2.0-IN | PolEmo2.0-OUT | DYK | PSC | AR |
| ------------- |--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| Polbert cased | 81.7 | 93.6 | 93.4 | 93.8 | 52.7 | 87.4 | 71.1 | 59.1 | 98.6 | 85.2 |
| Polbert uncased | 81.4 | 90.1 | 93.9 | 93.5 | 55.0 | 88.1 | 68.8 | 59.4 | 98.8 | 85.4 |
Note how the uncased model performs better than cased on some tasks? My guess is that this is because of the oversampling of the Open Subtitles dataset and its similarity to the data in some of these tasks. All these benchmark tasks are sequence classification, so the relative strength of the cased model is not so visible here.
## Bias
The data used to train the model is biased. It may reflect stereotypes related to gender, ethnicity etc. Please be careful when using the model for downstream task to consider these biases and mitigate them.
## Acknowledgements
* I'd like to express my gratitude to Google [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) for providing the free TPU credits - thank you!
* Also appreciate the help from Timo Möller from [deepset](https://deepset.ai) for sharing tips and scripts based on their experience training German BERT model.
* Big thanks to Allegro for releasing KLEJ Benchmark and specifically to Piotr Rybak for help with the evaluation and pointing out some issues with the tokenization.
* Finally, thanks to Rachel Thomas, Jeremy Howard and Sylvain Gugger from [fastai](https://www.fast.ai) for their NLP and Deep Learning courses!
## Author
Darek Kłeczek - contact me on Twitter [@dk21](https://twitter.com/dk21)
## References
* https://github.com/google-research/bert
* https://github.com/narusemotoki/srx_segmenter
* SRX rules file for sentence splitting in Polish, written by Marcin Miłkowski: https://raw.githubusercontent.com/languagetool-org/languagetool/master/languagetool-core/src/main/resources/org/languagetool/resource/segment.srx
* [KLEJ benchmark](https://klejbenchmark.com/leaderboard/)
|
{"language": "pl", "thumbnail": "https://raw.githubusercontent.com/kldarek/polbert/master/img/polbert.png"}
|
dkleczek/bert-base-polish-uncased-v1
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"pl"
] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #pl #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Polbert - Polish BERT
=====================
Polish version of BERT language model is here! It is now available in two variants: cased and uncased, both can be downloaded and used via HuggingFace transformers library. I recommend using the cased model, more info on the differences and benchmark results below.
!PolBERT image
Cased and uncased variants
--------------------------
* I initially trained the uncased model, the corpus and training details are referenced below. Here are some issues I found after I published the uncased model:
+ Some Polish characters and accents are not tokenized correctly through the BERT tokenizer when applying lowercase. This doesn't impact sequence classification much, but may influence token classification tasks significantly.
+ I noticed a lot of duplicates in the Open Subtitles dataset, which dominates the training corpus.
+ I didn't use Whole Word Masking.
* The cased model improves on the uncased model in the following ways:
+ All Polish characters and accents should now be tokenized correctly.
+ I removed duplicates from Open Subtitles dataset. The corpus is smaller, but more balanced now.
+ The model is trained with Whole Word Masking.
Pre-training corpora
--------------------
Below is the list of corpora used along with the output of the 'wc' command (counting lines, words and characters). These corpora were divided into sentences with srxsegmenter (see references), concatenated and tokenized with the HuggingFace BERT tokenizer.
### Uncased
### Cased
Pre-training details
--------------------
### Uncased
* Polbert was trained with code provided in Google BERT's GitHub repository (URL)
* The currently released model follows the bert-base-uncased model architecture (12-layer, 768-hidden, 12-heads, 110M parameters)
* Training set-up: in total 1 million training steps:
+ 100.000 steps - 128 sequence length, batch size 512, learning rate 1e-4 (10.000 steps warmup)
+ 800.000 steps - 128 sequence length, batch size 512, learning rate 5e-5
+ 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5
* The model was trained on a single Google Cloud TPU v3-8
### Cased
* Same approach as uncased model, with the following differences:
+ Whole Word Masking
* Training set-up:
+ 100.000 steps - 128 sequence length, batch size 2048, learning rate 1e-4 (10.000 steps warmup)
+ 100.000 steps - 128 sequence length, batch size 2048, learning rate 5e-5
+ 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5
Usage
-----
Polbert is released via HuggingFace Transformers library.
For an example of use as a language model, see this notebook file.
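For instance, a minimal fill-mask sketch (the uncased model id appears in this dataset; the cased id is assumed to follow the same naming convention, and the Polish example sentence is only illustrative):

```python
from transformers import pipeline

# Cased variant (recommended); swap in "dkleczek/bert-base-polish-uncased-v1" for the uncased one.
fill_mask = pipeline("fill-mask", model="dkleczek/bert-base-polish-cased-v1")

sentence = f"Adam Mickiewicz wielkim polskim {fill_mask.tokenizer.mask_token} był."
for prediction in fill_mask(sentence):
    print(prediction["sequence"], round(prediction["score"], 3))
```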
### Uncased
### Cased
See the next section for an example usage of Polbert in downstream tasks.
Evaluation
----------
Thanks to Allegro, we now have the KLEJ benchmark, a set of nine evaluation tasks for Polish language understanding. The following results are achieved by running the standard set of evaluation scripts (no tricks!) utilizing both cased and uncased variants of Polbert.
Note how the uncased model performs better than the cased one on some tasks? My guess is that this is because of the oversampling of the Open Subtitles dataset and its similarity to the data in some of these tasks. All these benchmark tasks are sequence classification, so the relative strength of the cased model is not so visible here.
Bias
----
The data used to train the model is biased. It may reflect stereotypes related to gender, ethnicity, etc. Please be careful when using the model for downstream tasks: consider these biases and mitigate them.
Acknowledgements
----------------
* I'd like to express my gratitude to Google TensorFlow Research Cloud (TFRC) for providing the free TPU credits - thank you!
* I also appreciate the help from Timo Möller from deepset for sharing tips and scripts based on their experience training the German BERT model.
* Big thanks to Allegro for releasing KLEJ Benchmark and specifically to Piotr Rybak for help with the evaluation and pointing out some issues with the tokenization.
* Finally, thanks to Rachel Thomas, Jeremy Howard and Sylvain Gugger from fastai for their NLP and Deep Learning courses!
Author
------
Darek Kłeczek - contact me on Twitter @dk21
References
----------
* URL
* URL
* SRX rules file for sentence splitting in Polish, written by Marcin Miłkowski: URL
* KLEJ benchmark
|
[
"### Uncased",
"### Cased\n\n\n\nPre-training details\n--------------------",
"### Uncased\n\n\n* Polbert was trained with code provided in Google BERT's github repository (URL\n* Currently released model follows bert-base-uncased model architecture (12-layer, 768-hidden, 12-heads, 110M parameters)\n* Training set-up: in total 1 million training steps:\n\t+ 100.000 steps - 128 sequence length, batch size 512, learning rate 1e-4 (10.000 steps warmup)\n\t+ 800.000 steps - 128 sequence length, batch size 512, learning rate 5e-5\n\t+ 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5\n* The model was trained on a single Google Cloud TPU v3-8",
"### Cased\n\n\n* Same approach as uncased model, with the following differences:\n\t+ Whole Word Masking\n* Training set-up:\n\t+ 100.000 steps - 128 sequence length, batch size 2048, learning rate 1e-4 (10.000 steps warmup)\n\t+ 100.000 steps - 128 sequence length, batch size 2048, learning rate 5e-5\n\t+ 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5\n\n\nUsage\n-----\n\n\nPolbert is released via HuggingFace Transformers library.\n\n\nFor an example use as language model, see this notebook file.",
"### Uncased",
"### Cased\n\n\nSee the next section for an example usage of Polbert in downstream tasks.\n\n\nEvaluation\n----------\n\n\nThanks to Allegro, we now have the KLEJ benchmark, a set of nine evaluation tasks for the Polish language understanding. The following results are achieved by running standard set of evaluation scripts (no tricks!) utilizing both cased and uncased variants of Polbert.\n\n\n\nNote how the uncased model performs better than cased on some tasks? My guess this is because of the oversampling of Open Subtitles dataset and its similarity to data in some of these tasks. All these benchmark tasks are sequence classification, so the relative strength of the cased model is not so visible here.\n\n\nBias\n----\n\n\nThe data used to train the model is biased. It may reflect stereotypes related to gender, ethnicity etc. Please be careful when using the model for downstream task to consider these biases and mitigate them.\n\n\nAcknowledgements\n----------------\n\n\n* I'd like to express my gratitude to Google TensorFlow Research Cloud (TFRC) for providing the free TPU credits - thank you!\n* Also appreciate the help from Timo Möller from deepset for sharing tips and scripts based on their experience training German BERT model.\n* Big thanks to Allegro for releasing KLEJ Benchmark and specifically to Piotr Rybak for help with the evaluation and pointing out some issues with the tokenization.\n* Finally, thanks to Rachel Thomas, Jeremy Howard and Sylvain Gugger from fastai for their NLP and Deep Learning courses!\n\n\nAuthor\n------\n\n\nDarek Kłeczek - contact me on Twitter @dk21\n\n\nReferences\n----------\n\n\n* URL\n* URL\n* SRX rules file for sentence splitting in Polish, written by Marcin Miłkowski: URL\n* KLEJ benchmark"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #pl #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Uncased",
"### Cased\n\n\n\nPre-training details\n--------------------",
"### Uncased\n\n\n* Polbert was trained with code provided in Google BERT's github repository (URL\n* Currently released model follows bert-base-uncased model architecture (12-layer, 768-hidden, 12-heads, 110M parameters)\n* Training set-up: in total 1 million training steps:\n\t+ 100.000 steps - 128 sequence length, batch size 512, learning rate 1e-4 (10.000 steps warmup)\n\t+ 800.000 steps - 128 sequence length, batch size 512, learning rate 5e-5\n\t+ 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5\n* The model was trained on a single Google Cloud TPU v3-8",
"### Cased\n\n\n* Same approach as uncased model, with the following differences:\n\t+ Whole Word Masking\n* Training set-up:\n\t+ 100.000 steps - 128 sequence length, batch size 2048, learning rate 1e-4 (10.000 steps warmup)\n\t+ 100.000 steps - 128 sequence length, batch size 2048, learning rate 5e-5\n\t+ 100.000 steps - 512 sequence length, batch size 256, learning rate 2e-5\n\n\nUsage\n-----\n\n\nPolbert is released via HuggingFace Transformers library.\n\n\nFor an example use as language model, see this notebook file.",
"### Uncased",
"### Cased\n\n\nSee the next section for an example usage of Polbert in downstream tasks.\n\n\nEvaluation\n----------\n\n\nThanks to Allegro, we now have the KLEJ benchmark, a set of nine evaluation tasks for the Polish language understanding. The following results are achieved by running standard set of evaluation scripts (no tricks!) utilizing both cased and uncased variants of Polbert.\n\n\n\nNote how the uncased model performs better than cased on some tasks? My guess this is because of the oversampling of Open Subtitles dataset and its similarity to data in some of these tasks. All these benchmark tasks are sequence classification, so the relative strength of the cased model is not so visible here.\n\n\nBias\n----\n\n\nThe data used to train the model is biased. It may reflect stereotypes related to gender, ethnicity etc. Please be careful when using the model for downstream task to consider these biases and mitigate them.\n\n\nAcknowledgements\n----------------\n\n\n* I'd like to express my gratitude to Google TensorFlow Research Cloud (TFRC) for providing the free TPU credits - thank you!\n* Also appreciate the help from Timo Möller from deepset for sharing tips and scripts based on their experience training German BERT model.\n* Big thanks to Allegro for releasing KLEJ Benchmark and specifically to Piotr Rybak for help with the evaluation and pointing out some issues with the tokenization.\n* Finally, thanks to Rachel Thomas, Jeremy Howard and Sylvain Gugger from fastai for their NLP and Deep Learning courses!\n\n\nAuthor\n------\n\n\nDarek Kłeczek - contact me on Twitter @dk21\n\n\nReferences\n----------\n\n\n* URL\n* URL\n* SRX rules file for sentence splitting in Polish, written by Marcin Miłkowski: URL\n* KLEJ benchmark"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# papuGaPT2-finetuned-wierszyki
This model is a fine-tuned version of [flax-community/papuGaPT2](https://huggingface.co/flax-community/papuGaPT2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8122
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
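These settings map roughly onto the following `TrainingArguments` sketch (a hypothetical reconstruction; the actual training script, dataset and model objects are not included in this card):

```python
from transformers import Trainer, TrainingArguments

# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer.
training_args = TrainingArguments(
    output_dir="papuGaPT2-finetuned-wierszyki",
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
# trainer = Trainer(model=model, args=training_args, train_dataset=..., eval_dataset=...)
# trainer.train()
```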
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 202 | 2.8122 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "papuGaPT2-finetuned-wierszyki", "results": []}]}
|
dkleczek/papuGaPT2-finetuned-wierszyki
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
papuGaPT2-finetuned-wierszyki
=============================
This model is a fine-tuned version of flax-community/papuGaPT2 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.8122
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# papuGaPT2 - Polish GPT2 language model
[GPT2](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) was released in 2019 and surprised many with its text generation capability. However, until very recently we did not have a strong text generation model for the Polish language, which limited the research opportunities for Polish NLP practitioners. With the release of this model, we hope to enable such research.
Our model follows the standard GPT2 architecture and training approach. We are using a causal language modeling (CLM) objective, which means that the model is trained to predict the next word (token) in a sequence of words (tokens).
## Datasets
We used the Polish subset of the [multilingual Oscar corpus](https://www.aclweb.org/anthology/2020.acl-main.156) to train the model in a self-supervised fashion.
```
from datasets import load_dataset
dataset = load_dataset('oscar', 'unshuffled_deduplicated_pl')
```
## Intended uses & limitations
The raw model can be used for text generation or fine-tuned for a downstream task. The model has been trained on data scraped from the web, and can generate text containing intense violence, sexual situations, coarse language and drug use. It also reflects the biases from the dataset (see below for more details). These limitations are likely to transfer to the fine-tuned models as well. At this stage, we do not recommend using the model beyond research.
## Bias Analysis
There are many sources of bias embedded in the model and we caution users to be mindful of this while exploring its capabilities. We have started a very basic analysis of bias that you can see in [this notebook](https://huggingface.co/flax-community/papuGaPT2/blob/main/papuGaPT2_bias_analysis.ipynb).
### Gender Bias
As an example, we generated 50 texts starting with prompts "She/He works as". The image below presents the resulting word clouds of female/male professions. The most salient terms for male professions are: teacher, sales representative, programmer. The most salient terms for female professions are: model, caregiver, receptionist, waitress.

### Ethnicity/Nationality/Gender Bias
We generated 1000 texts to assess bias across ethnicity, nationality and gender vectors. We created prompts with the following scheme:
* Person - in Polish this is a single word that differentiates both nationality/ethnicity and gender. We assessed the following 5 nationalities/ethnicities: German, Romani, Jewish, Ukrainian, Neutral. The neutral group used generic pronouns ("He/She").
* Topic - we used 5 different topics:
* random act: *entered home*
* said: *said*
* works as: *works as*
* intent: Polish *niech* which combined with *he* would roughly translate to *let him ...*
* define: *is*
Each combination of 5 nationalities x 2 genders x 5 topics had 20 generated texts.
We used a model trained on [Polish Hate Speech corpus](https://huggingface.co/datasets/hate_speech_pl) to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the nationality/ethnicity and gender from the generated text before running the hate speech detector.
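A sketch of that scoring step (the classifier id below is hypothetical, since the exact hate-speech model is not named here; any Polish classifier fine-tuned on that corpus would play the same role):

```python
from transformers import pipeline

# Hypothetical model id - substitute a real Polish hate-speech classifier.
hate_clf = pipeline("text-classification", model="some-org/polish-hate-speech-classifier")

generated = "przyszedł do domu i od razu zaczął krzyczeć"  # identity term already stripped from the front
result = hate_clf(generated)[0]
print(result["label"], result["score"])
```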
The following tables and charts demonstrate the intensity of hate speech associated with the generated texts. There is a very clear effect where each of the ethnicities/nationalities scores higher than the neutral baseline.

Looking at the gender dimension, we see a higher hate score associated with males vs. females.

We don't recommend using the GPT2 model beyond research unless a clear mitigation for the biases is provided.
## Training procedure
### Training scripts
We used the [causal language modeling script for Flax](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_clm_flax.py). We would like to thank the authors of that script as it allowed us to complete this training in a very short time!
### Preprocessing and Training Details
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.
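A sketch of the usual preprocessing for this setup (assuming the standard grouping used by the Hugging Face causal-LM example scripts, i.e. concatenate the tokenized texts and cut them into fixed 512-token blocks):

```python
from transformers import AutoTokenizer

block_size = 512
tokenizer = AutoTokenizer.from_pretrained("flax-community/papuGaPT2")

def group_texts(examples):
    # Concatenate the tokenized texts, then split them into fixed-size blocks.
    concatenated = sum(examples["input_ids"], [])
    total_length = (len(concatenated) // block_size) * block_size
    return {"input_ids": [concatenated[i:i + block_size] for i in range(0, total_length, block_size)]}

# tokenized = dataset.map(lambda batch: tokenizer(batch["text"]), batched=True, remove_columns=["text"])
# lm_dataset = tokenized.map(group_texts, batched=True)
```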
We have trained the model on a single TPUv3 VM, and due to unforeseen events the training run was split into 3 parts, each time resetting from the final checkpoint with a new optimizer state:
1. LR 1e-3, bs 64, linear schedule with warmup for 1000 steps, 10 epochs, stopped after 70,000 steps at eval loss 3.206 and perplexity 24.68
2. LR 3e-4, bs 64, linear schedule with warmup for 5000 steps, 7 epochs, stopped after 77,000 steps at eval loss 3.116 and perplexity 22.55
3. LR 2e-4, bs 64, linear schedule with warmup for 5000 steps, 3 epochs, stopped after 91,000 steps at eval loss 3.082 and perplexity 21.79
## Evaluation results
We trained the model on 95% of the dataset and evaluated both loss and perplexity on 5% of the dataset. The final checkpoint evaluation resulted in:
* Evaluation loss: 3.082
* Perplexity: 21.79
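Perplexity here is simply the exponential of the evaluation loss:

```python
import math
print(math.exp(3.082))  # ≈ 21.8, matching the reported perplexity
```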
## How to use
You can use the model either directly for text generation (see example below), by extracting features, or for further fine-tuning. We have prepared a notebook with text generation examples [here](https://huggingface.co/flax-community/papuGaPT2/blob/main/papuGaPT2_text_generation.ipynb) including different decoding methods, bad words suppression, few- and zero-shot learning demonstrations.
### Text generation
Let's first start with the text-generation pipeline. When prompting for the best Polish poet, it comes up with a pretty reasonable text, highlighting one of the most famous Polish poets, Adam Mickiewicz.
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='flax-community/papuGaPT2')
set_seed(42)
generator('Największym polskim poetą był')
>>> [{'generated_text': 'Największym polskim poetą był Adam Mickiewicz - uważany za jednego z dwóch geniuszów języka polskiego. "Pan Tadeusz" był jednym z najpopularniejszych dzieł w historii Polski. W 1801 został wystawiony publicznie w Teatrze Wilama Horzycy. Pod jego'}]
```
The pipeline uses the `model.generate()` method in the background. In [our notebook](https://huggingface.co/flax-community/papuGaPT2/blob/main/papuGaPT2_text_generation.ipynb) we demonstrate different decoding methods we can use with this method, including greedy search, beam search, sampling, temperature scaling, top-k and top-p sampling. As an example, the below snippet uses sampling among the 50 most probable tokens at each stage (top-k) and among the tokens that jointly represent 95% of the probability distribution (top-p). It also returns 3 output sequences.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
model = AutoModelWithLMHead.from_pretrained('flax-community/papuGaPT2')
tokenizer = AutoTokenizer.from_pretrained('flax-community/papuGaPT2')
set_seed(42) # reproducibility
input_ids = tokenizer.encode('Największym polskim poetą był', return_tensors='pt')
sample_outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=50,
    top_k=50,
    top_p=0.95,
    num_return_sequences=3
)
print("Output:\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
    print("{}: {}".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))
>>> Output:
>>> ----------------------------------------------------------------------------------------------------
>>> 0: Największym polskim poetą był Roman Ingarden. Na jego wiersze i piosenki oddziaływały jego zamiłowanie do przyrody i przyrody. Dlatego też jako poeta w czasie pracy nad utworami i wierszami z tych wierszy, a następnie z poezji własnej - pisał
>>> 1: Największym polskim poetą był Julian Przyboś, którego poematem „Wierszyki dla dzieci”.
>>> W okresie międzywojennym, pod hasłem „Papież i nie tylko” Polska, jak większość krajów europejskich, była państwem faszystowskim.
>>> Prócz
>>> 2: Największym polskim poetą był Bolesław Leśmian, który był jego tłumaczem, a jego poezja tłumaczyła na kilkanaście języków.
>>> W 1895 roku nakładem krakowskiego wydania "Scientio" ukazała się w języku polskim powieść W krainie kangurów
```
### Avoiding Bad Words
You may want to prevent certain words from occurring in the generated text. To avoid displaying really bad words in the notebook, let's pretend that we don't like certain types of music to be advertised by our model. The prompt says: *my favorite type of music is*.
```python
input_ids = tokenizer.encode('Mój ulubiony gatunek muzyki to', return_tensors='pt')
bad_words = [' disco', ' rock', ' pop', ' soul', ' reggae', ' hip-hop']
bad_word_ids = []
for bad_word in bad_words:
    ids = tokenizer(bad_word).input_ids
    bad_word_ids.append(ids)
sample_outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=20,
    top_k=50,
    top_p=0.95,
    num_return_sequences=5,
    bad_words_ids=bad_word_ids
)
print("Output:\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
    print("{}: {}".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))
>>> Output:
>>> ----------------------------------------------------------------------------------------------------
>>> 0: Mój ulubiony gatunek muzyki to muzyka klasyczna. Nie wiem, czy to kwestia sposobu, w jaki gramy,
>>> 1: Mój ulubiony gatunek muzyki to reggea. Zachwycają mnie piosenki i piosenki muzyczne o ducho
>>> 2: Mój ulubiony gatunek muzyki to rockabilly, ale nie lubię też punka. Moim ulubionym gatunkiem
>>> 3: Mój ulubiony gatunek muzyki to rap, ale to raczej się nie zdarza w miejscach, gdzie nie chodzi
>>> 4: Mój ulubiony gatunek muzyki to metal aranżeje nie mam pojęcia co mam robić. Co roku,
```
Ok, it seems this worked: we can see *classical music, rap, metal* among the outputs. Interestingly, *reggae* found a way through via a misspelling *reggea*. Take it as a caution to be careful with curating your bad word lists!
### Few Shot Learning
Let's see now if our model is able to pick up training signal directly from a prompt, without any finetuning. This approach was made really popular with GPT3, and while our model is definitely less powerful, maybe it can still show some skills! If you'd like to explore this topic in more depth, check out [the following article](https://huggingface.co/blog/few-shot-learning-gpt-neo-and-inference-api) which we used as reference.
```python
prompt = """Tekst: "Nienawidzę smerfów!"
Sentyment: Negatywny
###
Tekst: "Jaki piękny dzień 👍"
Sentyment: Pozytywny
###
Tekst: "Jutro idę do kina"
Sentyment: Neutralny
###
Tekst: "Ten przepis jest świetny!"
Sentyment:"""
res = generator(prompt, max_length=85, temperature=0.5, end_sequence='###', return_full_text=False, num_return_sequences=5)
for x in res:
    print(x['generated_text'].split(' ')[1])
>>> Pozytywny
>>> Pozytywny
>>> Pozytywny
>>> Pozytywny
>>> Pozytywny
```
It looks like our model is able to pick up some signal from the prompt. Be careful though, this capability is definitely not mature and may result in spurious or biased responses.
### Zero-Shot Inference
Large language models are known to store a lot of knowledge in their parameters. In the example below, we can see that our model has learned the date of an important event in Polish history, the battle of Grunwald.
```python
prompt = "Bitwa pod Grunwaldem miała miejsce w roku"
input_ids = tokenizer.encode(prompt, return_tensors='pt')
# activate beam search and early_stopping
beam_outputs = model.generate(
    input_ids,
    max_length=20,
    num_beams=5,
    early_stopping=True,
    num_return_sequences=3
)
print("Output:\n" + 100 * '-')
for i, sample_output in enumerate(beam_outputs):
    print("{}: {}".format(i, tokenizer.decode(sample_output, skip_special_tokens=True)))
>>> Output:
>>> ----------------------------------------------------------------------------------------------------
>>> 0: Bitwa pod Grunwaldem miała miejsce w roku 1410, kiedy to wojska polsko-litewskie pod
>>> 1: Bitwa pod Grunwaldem miała miejsce w roku 1410, kiedy to wojska polsko-litewskie pokona
>>> 2: Bitwa pod Grunwaldem miała miejsce w roku 1410, kiedy to wojska polsko-litewskie,
```
## BibTeX entry and citation info
```bibtex
@misc{papuGaPT2,
title={papuGaPT2 - Polish GPT2 language model},
url={https://huggingface.co/flax-community/papuGaPT2},
author={Wojczulis, Michał and Kłeczek, Dariusz},
year={2021}
}
```
|
{"language": "pl", "tags": ["text-generation"], "widget": [{"text": "Najsmaczniejszy polski owoc to"}]}
|
dkleczek/papuGaPT2
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"pl",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"pl"
] |
TAGS
#transformers #pytorch #jax #tensorboard #gpt2 #text-generation #pl #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# papuGaPT2 - Polish GPT2 language model
GPT2 was released in 2019 and surprised many with its text generation capability. However, up until very recently, we have not had a strong text generation model in Polish language, which limited the research opportunities for Polish NLP practitioners. With the release of this model, we hope to enable such research.
Our model follows the standard GPT2 architecture and training approach. We are using a causal language modeling (CLM) objective, which means that the model is trained to predict the next word (token) in a sequence of words (tokens).
## Datasets
We used the Polish subset of the multilingual Oscar corpus to train the model in a self-supervised fashion.
## Intended uses & limitations
The raw model can be used for text generation or fine-tuned for a downstream task. The model has been trained on data scraped from the web, and can generate text containing intense violence, sexual situations, coarse language and drug use. It also reflects the biases from the dataset (see below for more details). These limitations are likely to transfer to the fine-tuned models as well. At this stage, we do not recommend using the model beyond research.
## Bias Analysis
There are many sources of bias embedded in the model and we caution to be mindful of this while exploring the capabilities of this model. We have started a very basic analysis of bias that you can see in this notebook.
### Gender Bias
As an example, we generated 50 texts starting with prompts "She/He works as". The image below presents the resulting word clouds of female/male professions. The most salient terms for male professions are: teacher, sales representative, programmer. The most salient terms for female professions are: model, caregiver, receptionist, waitress.
!gender bias
### Ethnicity/Nationality/Gender Bias
We generated 1000 texts to assess bias across ethnicity, nationality and gender vectors. We created prompts with the following scheme:
* Person - in Polish this is a single word that differentiates both nationality/ethnicity and gender. We assessed the following 5 nationalities/ethnicities: German, Romani, Jewish, Ukrainian, Neutral. The neutral group used generic pronouns ("He/She").
* Topic - we used 5 different topics:
* random act: *entered home*
* said: *said*
* works as: *works as*
* intent: Polish *niech* which combined with *he* would roughly translate to *let him ...*
* define: *is*
Each combination of 5 nationalities x 2 genders x 5 topics had 20 generated texts.
We used a model trained on Polish Hate Speech corpus to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the nationality/ethnicity and gender from the generated text before running the hate speech detector.
The following tables and charts demonstrate the intensity of hate speech associated with the generated texts. There is a very clear effect where each of the ethnicities/nationalities score higher than the neutral baseline.
!hate score by ethnicity
Looking at the gender dimension we see higher hate score associated with males vs. females.
!hate score by gender
We don't recommend using the GPT2 model beyond research unless a clear mitigation for the biases is provided.
## Training procedure
### Training scripts
We used the causal language modeling script for Flax. We would like to thank the authors of that script as it allowed us to complete this training in a very short time!
### Preprocessing and Training Details
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.
We have trained the model on a single TPUv3 VM, and due to unforeseen events the training run was split in 3 parts, each time resetting from the final checkpoint with a new optimizer state:
1. LR 1e-3, bs 64, linear schedule with warmup for 1000 steps, 10 epochs, stopped after 70,000 steps at eval loss 3.206 and perplexity 24.68
2. LR 3e-4, bs 64, linear schedule with warmup for 5000 steps, 7 epochs, stopped after 77,000 steps at eval loss 3.116 and perplexity 22.55
3. LR 2e-4, bs 64, linear schedule with warmup for 5000 steps, 3 epochs, stopped after 91,000 steps at eval loss 3.082 and perplexity 21.79
## Evaluation results
We trained the model on 95% of the dataset and evaluated both loss and perplexity on 5% of the dataset. The final checkpoint evaluation resulted in:
* Evaluation loss: 3.082
* Perplexity: 21.79
## How to use
You can use the model either directly for text generation (see example below), by extracting features, or for further fine-tuning. We have prepared a notebook with text generation examples here including different decoding methods, bad words suppression, few- and zero-shot learning demonstrations.
### Text generation
Let's first start with the text-generation pipeline. When prompting for the best Polish poet, it comes up with a pretty reasonable text, highlighting one of the most famous Polish poets, Adam Mickiewicz.
The pipeline uses 'model.generate()' method in the background. In our notebook we demonstrate different decoding methods we can use with this method, including greedy search, beam search, sampling, temperature scaling, top-k and top-p sampling. As an example, the below snippet uses sampling among the 50 most probable tokens at each stage (top-k) and among the tokens that jointly represent 95% of the probability distribution (top-p). It also returns 3 output sequences.
### Avoiding Bad Words
You may want to prevent certain words from occurring in the generated text. To avoid displaying really bad words in the notebook, let's pretend that we don't like certain types of music to be advertised by our model. The prompt says: *my favorite type of music is*.
Ok, it seems this worked: we can see *classical music, rap, metal* among the outputs. Interestingly, *reggae* found a way through via a misspelling *reggea*. Take it as a caution to be careful with curating your bad word lists!
### Few Shot Learning
Let's see now if our model is able to pick up training signal directly from a prompt, without any finetuning. This approach was made really popular with GPT3, and while our model is definitely less powerful, maybe it can still show some skills! If you'd like to explore this topic in more depth, check out the following article which we used as reference.
It looks like our model is able to pick up some signal from the prompt. Be careful though, this capability is definitely not mature and may result in spurious or biased responses.
### Zero-Shot Inference
Large language models are known to store a lot of knowledge in their parameters. In the example below, we can see that our model has learned the date of an important event in Polish history, the battle of Grunwald.
## BibTeX entry and citation info
|
[
"# papuGaPT2 - Polish GPT2 language model\nGPT2 was released in 2019 and surprised many with its text generation capability. However, up until very recently, we have not had a strong text generation model in Polish language, which limited the research opportunities for Polish NLP practitioners. With the release of this model, we hope to enable such research. \n\nOur model follows the standard GPT2 architecture and training approach. We are using a causal language modeling (CLM) objective, which means that the model is trained to predict the next word (token) in a sequence of words (tokens).",
"## Datasets\nWe used the Polish subset of the multilingual Oscar corpus to train the model in a self-supervised fashion.",
"## Intended uses & limitations\nThe raw model can be used for text generation or fine-tuned for a downstream task. The model has been trained on data scraped from the web, and can generate text containing intense violence, sexual situations, coarse language and drug use. It also reflects the biases from the dataset (see below for more details). These limitations are likely to transfer to the fine-tuned models as well. At this stage, we do not recommend using the model beyond research.",
"## Bias Analysis\nThere are many sources of bias embedded in the model and we caution to be mindful of this while exploring the capabilities of this model. We have started a very basic analysis of bias that you can see in this notebook.",
"### Gender Bias\nAs an example, we generated 50 texts starting with prompts \"She/He works as\". The image below presents the resulting word clouds of female/male professions. The most salient terms for male professions are: teacher, sales representative, programmer. The most salient terms for female professions are: model, caregiver, receptionist, waitress.\n\n!gender bias",
"### Ethnicity/Nationality/Gender Bias\nWe generated 1000 texts to assess bias across ethnicity, nationality and gender vectors. We created prompts with the following scheme: \n\n* Person - in Polish this is a single word that differentiates both nationality/ethnicity and gender. We assessed the following 5 nationalities/ethnicities: German, Romani, Jewish, Ukrainian, Neutral. The neutral group used generic pronounts (\"He/She\"). \n* Topic - we used 5 different topics: \n * random act: *entered home*\n * said: *said*\n * works as: *works as*\n * intent: Polish *niech* which combined with *he* would roughly translate to *let him ...*\n * define: *is*\n\nEach combination of 5 nationalities x 2 genders x 5 topics had 20 generated texts. \n\nWe used a model trained on Polish Hate Speech corpus to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the nationality/ethnicity and gender from the generated text before running the hate speech detector.\n \nThe following tables and charts demonstrate the intensity of hate speech associated with the generated texts. There is a very clear effect where each of the ethnicities/nationalities score higher than the neutral baseline. \n\n!hate score by ethnicity\n\nLooking at the gender dimension we see higher hate score associated with males vs. females. \n\n!hate score by gender\n\nWe don't recommend using the GPT2 model beyond research unless a clear mitigation for the biases is provided.",
"## Training procedure",
"### Training scripts\nWe used the causal language modeling script for Flax. We would like to thank the authors of that script as it allowed us to complete this training in a very short time!",
"### Preprocessing and Training Details\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.\n\nWe have trained the model on a single TPUv3 VM, and due to unforeseen events the training run was split in 3 parts, each time resetting from the final checkpoint with a new optimizer state: \n1. LR 1e-3, bs 64, linear schedule with warmup for 1000 steps, 10 epochs, stopped after 70,000 steps at eval loss 3.206 and perplexity 24.68\n2. LR 3e-4, bs 64, linear schedule with warmup for 5000 steps, 7 epochs, stopped after 77,000 steps at eval loss 3.116 and perplexity 22.55\n3. LR 2e-4, bs 64, linear schedule with warmup for 5000 steps, 3 epochs, stopped after 91,000 steps at eval loss 3.082 and perplexity 21.79",
"## Evaluation results\nWe trained the model on 95% of the dataset and evaluated both loss and perplexity on 5% of the dataset. The final checkpoint evaluation resulted in: \n* Evaluation loss: 3.082\n* Perplexity: 21.79",
"## How to use\nYou can use the model either directly for text generation (see example below), by extracting features, or for further fine-tuning. We have prepared a notebook with text generation examples here including different decoding methods, bad words suppression, few- and zero-shot learning demonstrations.",
"### Text generation\nLet's first start with the text-generation pipeline. When prompting for the best Polish poet, it comes up with a pretty reasonable text, highlighting one of the most famous Polish poets, Adam Mickiewicz.\n \n\n\nThe pipeline uses 'model.generate()' method in the background. In our notebook we demonstrate different decoding methods we can use with this method, including greedy search, beam search, sampling, temperature scaling, top-k and top-p sampling. As an example, the below snippet uses sampling among the 50 most probable tokens at each stage (top-k) and among the tokens that jointly represent 95% of the probability distribution (top-p). It also returns 3 output sequences.",
"### Avoiding Bad Words\nYou may want to prevent certain words from occurring in the generated text. To avoid displaying really bad words in the notebook, let's pretend that we don't like certain types of music to be advertised by our model. The prompt says: *my favorite type of music is*. \n\n\nOk, it seems this worked: we can see *classical music, rap, metal* among the outputs. Interestingly, *reggae* found a way through via a misspelling *reggea*. Take it as a caution to be careful with curating your bad word lists!",
"### Few Shot Learning\n\nLet's see now if our model is able to pick up training signal directly from a prompt, without any finetuning. This approach was made really popular with GPT3, and while our model is definitely less powerful, maybe it can still show some skills! If you'd like to explore this topic in more depth, check out the following article which we used as reference.\n\n\nIt looks like our model is able to pick up some signal from the prompt. Be careful though, this capability is definitely not mature and may result in spurious or biased responses.",
"### Zero-Shot Inference\n\nLarge language models are known to store a lot of knowledge in its parameters. In the example below, we can see that our model has learned the date of an important event in Polish history, the battle of Grunwald.",
"## BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #gpt2 #text-generation #pl #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# papuGaPT2 - Polish GPT2 language model\nGPT2 was released in 2019 and surprised many with its text generation capability. However, up until very recently, we have not had a strong text generation model in Polish language, which limited the research opportunities for Polish NLP practitioners. With the release of this model, we hope to enable such research. \n\nOur model follows the standard GPT2 architecture and training approach. We are using a causal language modeling (CLM) objective, which means that the model is trained to predict the next word (token) in a sequence of words (tokens).",
"## Datasets\nWe used the Polish subset of the multilingual Oscar corpus to train the model in a self-supervised fashion.",
"## Intended uses & limitations\nThe raw model can be used for text generation or fine-tuned for a downstream task. The model has been trained on data scraped from the web, and can generate text containing intense violence, sexual situations, coarse language and drug use. It also reflects the biases from the dataset (see below for more details). These limitations are likely to transfer to the fine-tuned models as well. At this stage, we do not recommend using the model beyond research.",
"## Bias Analysis\nThere are many sources of bias embedded in the model and we caution to be mindful of this while exploring the capabilities of this model. We have started a very basic analysis of bias that you can see in this notebook.",
"### Gender Bias\nAs an example, we generated 50 texts starting with prompts \"She/He works as\". The image below presents the resulting word clouds of female/male professions. The most salient terms for male professions are: teacher, sales representative, programmer. The most salient terms for female professions are: model, caregiver, receptionist, waitress.\n\n!gender bias",
"### Ethnicity/Nationality/Gender Bias\nWe generated 1000 texts to assess bias across ethnicity, nationality and gender vectors. We created prompts with the following scheme: \n\n* Person - in Polish this is a single word that differentiates both nationality/ethnicity and gender. We assessed the following 5 nationalities/ethnicities: German, Romani, Jewish, Ukrainian, Neutral. The neutral group used generic pronounts (\"He/She\"). \n* Topic - we used 5 different topics: \n * random act: *entered home*\n * said: *said*\n * works as: *works as*\n * intent: Polish *niech* which combined with *he* would roughly translate to *let him ...*\n * define: *is*\n\nEach combination of 5 nationalities x 2 genders x 5 topics had 20 generated texts. \n\nWe used a model trained on Polish Hate Speech corpus to obtain the probability that each generated text contains hate speech. To avoid leakage, we removed the first word identifying the nationality/ethnicity and gender from the generated text before running the hate speech detector.\n \nThe following tables and charts demonstrate the intensity of hate speech associated with the generated texts. There is a very clear effect where each of the ethnicities/nationalities score higher than the neutral baseline. \n\n!hate score by ethnicity\n\nLooking at the gender dimension we see higher hate score associated with males vs. females. \n\n!hate score by gender\n\nWe don't recommend using the GPT2 model beyond research unless a clear mitigation for the biases is provided.",
"## Training procedure",
"### Training scripts\nWe used the causal language modeling script for Flax. We would like to thank the authors of that script as it allowed us to complete this training in a very short time!",
"### Preprocessing and Training Details\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.\n\nWe have trained the model on a single TPUv3 VM, and due to unforeseen events the training run was split in 3 parts, each time resetting from the final checkpoint with a new optimizer state: \n1. LR 1e-3, bs 64, linear schedule with warmup for 1000 steps, 10 epochs, stopped after 70,000 steps at eval loss 3.206 and perplexity 24.68\n2. LR 3e-4, bs 64, linear schedule with warmup for 5000 steps, 7 epochs, stopped after 77,000 steps at eval loss 3.116 and perplexity 22.55\n3. LR 2e-4, bs 64, linear schedule with warmup for 5000 steps, 3 epochs, stopped after 91,000 steps at eval loss 3.082 and perplexity 21.79",
"## Evaluation results\nWe trained the model on 95% of the dataset and evaluated both loss and perplexity on 5% of the dataset. The final checkpoint evaluation resulted in: \n* Evaluation loss: 3.082\n* Perplexity: 21.79",
"## How to use\nYou can use the model either directly for text generation (see example below), by extracting features, or for further fine-tuning. We have prepared a notebook with text generation examples here including different decoding methods, bad words suppression, few- and zero-shot learning demonstrations.",
"### Text generation\nLet's first start with the text-generation pipeline. When prompting for the best Polish poet, it comes up with a pretty reasonable text, highlighting one of the most famous Polish poets, Adam Mickiewicz.\n \n\n\nThe pipeline uses 'model.generate()' method in the background. In our notebook we demonstrate different decoding methods we can use with this method, including greedy search, beam search, sampling, temperature scaling, top-k and top-p sampling. As an example, the below snippet uses sampling among the 50 most probable tokens at each stage (top-k) and among the tokens that jointly represent 95% of the probability distribution (top-p). It also returns 3 output sequences.",
"### Avoiding Bad Words\nYou may want to prevent certain words from occurring in the generated text. To avoid displaying really bad words in the notebook, let's pretend that we don't like certain types of music to be advertised by our model. The prompt says: *my favorite type of music is*. \n\n\nOk, it seems this worked: we can see *classical music, rap, metal* among the outputs. Interestingly, *reggae* found a way through via a misspelling *reggea*. Take it as a caution to be careful with curating your bad word lists!",
"### Few Shot Learning\n\nLet's see now if our model is able to pick up training signal directly from a prompt, without any finetuning. This approach was made really popular with GPT3, and while our model is definitely less powerful, maybe it can still show some skills! If you'd like to explore this topic in more depth, check out the following article which we used as reference.\n\n\nIt looks like our model is able to pick up some signal from the prompt. Be careful though, this capability is definitely not mature and may result in spurious or biased responses.",
"### Zero-Shot Inference\n\nLarge language models are known to store a lot of knowledge in its parameters. In the example below, we can see that our model has learned the date of an important event in Polish history, the battle of Grunwald.",
"## BibTeX entry and citation info"
] |
text-generation
|
transformers
|
# A certain person's AI
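No usage details are given, so the snippet below is only a guess based on the `gpt2` and `conversational` tags; in particular, the DialoGPT-style turn formatting is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dkminer81/Tromm")
model = AutoModelForCausalLM.from_pretrained("dkminer81/Tromm")

# Assumption: the model expects a user turn terminated by the EOS token, DialoGPT-style.
inputs = tokenizer("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(**inputs, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```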
|
{"tags": ["conversational"]}
|
dkminer81/Tromm
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# A certain person's AI
|
[
"# A certain person's AI"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# A certain person's AI"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4171
- Wer: 0.3452
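For reference, a minimal inference sketch (assuming 16 kHz mono audio, which is what wav2vec2-base expects; the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="dkssud/wav2vec2-base-demo-colab")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder for a 16 kHz recording
```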
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0054 | 4.0 | 500 | 1.5456 | 0.9005 |
| 0.8183 | 8.0 | 1000 | 0.4738 | 0.4839 |
| 0.3019 | 12.0 | 1500 | 0.4280 | 0.4047 |
| 0.1738 | 16.0 | 2000 | 0.4584 | 0.3738 |
| 0.1285 | 20.0 | 2500 | 0.4418 | 0.3593 |
| 0.1104 | 24.0 | 3000 | 0.4110 | 0.3479 |
| 0.0828 | 28.0 | 3500 | 0.4171 | 0.3452 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-demo-colab", "results": []}]}
|
dkssud/wav2vec2-base-demo-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-demo-colab
========================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4171
* Wer: 0.3452
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu102
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# OpenVINO model bert-large-uncased-whole-word-masking-squad-int8-0001
This is a BERT-large model pre-trained on lower-cased English text using Whole-Word-Masking and fine-tuned on the SQuAD v1.1 training set. The model performs question answering for the English language; the input is a concatenated premise and question for the premise, and the output is the location of the answer to the question inside the premise. For details about the original floating-point model, check out [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805).
The model has been further quantized to INT8 precision using quantization-aware fine-tuning with [NNCF](https://github.com/openvinotoolkit/nncf).
Model source: [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/bert-large-uncased-whole-word-masking-squad-int8-0001)
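A question-answering sketch (this assumes the repository exposes Transformers-compatible weights; if it only ships the OpenVINO IR, it should instead be loaded through the OpenVINO runtime or an OpenVINO-enabled Optimum backend):

```python
from transformers import pipeline

# Assumption: the repo can be loaded via the standard Transformers QA pipeline.
qa = pipeline("question-answering", model="dkurt/bert-large-uncased-whole-word-masking-squad-int8-0001")
print(qa(question="What precision is the model quantized to?",
         context="The model has been quantized to INT8 precision using quantization-aware fine-tuning with NNCF."))
```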
|
{}
|
dkurt/bert-large-uncased-whole-word-masking-squad-int8-0001
| null |
[
"transformers",
"bert",
"question-answering",
"arxiv:1810.04805",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1810.04805"
] |
[] |
TAGS
#transformers #bert #question-answering #arxiv-1810.04805 #endpoints_compatible #region-us
|
# OpenVINO model bert-large-uncased-whole-word-masking-squad-int8-0001
This is a BERT-large model pre-trained on lower-cased English text using Whole-Word-Masking and fine-tuned on the SQuAD v1.1 training set. The model performs question answering for the English language; the input is a concatenated premise and question for the premise, and the output is the location of the answer to the question inside the premise. For details about the original floating-point model, check out BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
The model has been further quantized to INT8 precision using quantization-aware fine-tuning with NNCF.
Model source: Open Model Zoo
|
[
"# OpenVINO model bert-large-uncased-whole-word-masking-squad-int8-0001\n\nThis is a BERT-large model pre-trained on lower-cased English text using Whole-Word-Masking and fine-tuned on the SQuAD v1.1 training set. The model performs question answering for English language; the input is a concatenated premise and question for the premise, and the output is the location of the answer to the question inside the premise. For details about the original floating-point model, check out BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.\n\nThe model has been further quantized to INT8 precision using quantization-aware fine-tuning with NNCF.\n\nModel source: Open Model Zoo"
] |
[
"TAGS\n#transformers #bert #question-answering #arxiv-1810.04805 #endpoints_compatible #region-us \n",
"# OpenVINO model bert-large-uncased-whole-word-masking-squad-int8-0001\n\nThis is a BERT-large model pre-trained on lower-cased English text using Whole-Word-Masking and fine-tuned on the SQuAD v1.1 training set. The model performs question answering for English language; the input is a concatenated premise and question for the premise, and the output is the location of the answer to the question inside the premise. For details about the original floating-point model, check out BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.\n\nThe model has been further quantized to INT8 precision using quantization-aware fine-tuning with NNCF.\n\nModel source: Open Model Zoo"
] |
audio-classification
|
transformers
|
[anton-l/wav2vec2-base-ft-keyword-spotting](https://huggingface.co/anton-l/wav2vec2-base-ft-keyword-spotting) model quantized with [Optimum OpenVINO](https://github.com/dkurt/optimum-openvino/).
| Accuracy on eval (baseline) | Accuracy on eval (quantized) |
|-----------------------------|----------------------------------------|
| 0.9828 | 0.9553 (-0.0274) |
|
{}
|
dkurt/wav2vec2-base-ft-keyword-spotting-int8
| null |
[
"transformers",
"wav2vec2",
"audio-classification",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #wav2vec2 #audio-classification #endpoints_compatible #region-us
|
anton-l/wav2vec2-base-ft-keyword-spotting model quantized with Optimum OpenVINO.
|
[] |
[
"TAGS\n#transformers #wav2vec2 #audio-classification #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.926
- F1: 0.9261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
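As a rough sketch, the hyperparameters above correspond to the following `TrainingArguments`; the output directory name is a placeholder and the dataset/metric wiring of the original run is not reproduced here.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```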
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8436 | 1.0 | 250 | 0.3175 | 0.9105 | 0.9081 |
| 0.2492 | 2.0 | 500 | 0.2161 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.7.1
- Datasets 1.17.0
- Tokenizers 0.10.3
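A minimal inference sketch for the resulting checkpoint (assuming the repository id of this model card is publicly available):

```python
from transformers import pipeline

# Emotion classification with the fine-tuned checkpoint from this repository.
classifier = pipeline(
    "text-classification",
    model="dmiller1/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))  # top predicted emotion label
```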
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.926, "name": "Accuracy"}, {"type": "f1", "value": 0.9261144741040841, "name": "F1"}]}]}]}
|
dmiller1/distilbert-base-uncased-finetuned-emotion
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2161
* Accuracy: 0.926
* F1: 0.9261
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.7.1
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.7.1\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.7.1\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
null |
transformers
|
NER Model of BERN2 system
|
{}
|
dmis-lab/bern2-ner
| null |
[
"transformers",
"pytorch",
"roberta",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #roberta #endpoints_compatible #region-us
|
NER Model of BERN2 system
|
[] |
[
"TAGS\n#transformers #pytorch #roberta #endpoints_compatible #region-us \n"
] |
question-answering
|
transformers
|
# Model Card for biobert-large-cased-v1.1-squad
# Model Details
## Model Description
More information needed
- **Developed by:** DMIS-lab (Data Mining and Information Systems Lab, Korea University)
- **Shared by [Optional]:** DMIS-lab (Data Mining and Information Systems Lab, Korea University)
- **Model type:** Question Answering
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Parent Model:** BERT-large (cased)
- **Resources for more information:**
- [GitHub Repo](https://github.com/jhyuklee/biobert)
- [Associated Paper](https://arxiv.org/abs/1901.08746)
# Uses
## Direct Use
This model can be used for the task of question answering.
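For instance, a minimal sketch with the `question-answering` pipeline (the biomedical question and context below are made up):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="dmis-lab/biobert-large-cased-v1.1-squad",
)
result = qa(
    question="Which gene is mutated in cystic fibrosis?",
    context="Cystic fibrosis is caused by mutations in the CFTR gene, "
            "which encodes a chloride channel.",
)
print(result["answer"])  # expected to point at "CFTR"
```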
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model creators note in the [associated paper](https://arxiv.org/pdf/1901.08746.pdf):
> We used the BERTBASE model pre-trained on English Wikipedia and BooksCorpus for 1M steps. BioBERT v1.0 (+ PubMed + PMC) is the version of BioBERT (+ PubMed + PMC) trained for 470K steps. When using both the PubMed and PMC corpora, we found that 200K and 270K pre-training steps were optimal for PubMed and PMC, respectively. We also used the ablated versions of BioBERT v1.0, which were pre-trained on only PubMed for 200K steps (BioBERT v1.0 (+ PubMed)) and PMC for 270K steps (BioBERT v1.0 (+ PMC))
## Training Procedure
### Preprocessing
The model creators note in the [associated paper](https://arxiv.org/pdf/1901.08746.pdf):
> We pre-trained BioBERT using Naver Smart Machine Learning (NSML) (Sung et al., 2017), which is utilized for large-scale experiments that need to be run on several GPUs
### Speeds, Sizes, Times
The model creators note in the [associated paper](https://arxiv.org/pdf/1901.08746.pdf):
> The maximum sequence length was fixed to 512 and the mini-batch size was set to 192, resulting in 98 304 words per iteration.
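(For reference, the per-iteration figure is simply the sequence length times the mini-batch size: 512 × 192 = 98,304 word pieces.)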
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Training:** Eight NVIDIA V100 (32GB) GPUs
- **Fine-tuning:** a single NVIDIA Titan Xp (12GB) GPU to fine-tune BioBERT on each task
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed.
# Citation
**BibTeX:**
```bibtex
@article{lee2019biobert,
title={BioBERT: a pre-trained biomedical language representation model for biomedical text mining},
author={Lee, Jinhyuk and Yoon, Wonjin and Kim, Sungdong and Kim, Donghyeon and Kim, Sunkyu and So, Chan Ho and Kang, Jaewoo},
journal={arXiv preprint arXiv:1901.08746},
year={2019}
}
```
# Glossary [optional]
More information needed
# More Information [optional]
For help or issues using BioBERT, please submit a GitHub issue. Please contact Jinhyuk Lee (`lee.jnhk (at) gmail.com`) or Wonjin Yoon (`wonjin.info (at) gmail.com`) for communication related to BioBERT.
# Model Card Authors [optional]
DMIS-lab (Data Mining and Information Systems Lab, Korea University) in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-large-cased-v1.1-squad")
model = AutoModelForQuestionAnswering.from_pretrained("dmis-lab/biobert-large-cased-v1.1-squad")
```
</details>
|
{"tags": ["question-answering", "bert"]}
|
dmis-lab/biobert-large-cased-v1.1-squad
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"arxiv:1901.08746",
"arxiv:1910.09700",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1901.08746",
"1910.09700"
] |
[] |
TAGS
#transformers #pytorch #jax #bert #question-answering #arxiv-1901.08746 #arxiv-1910.09700 #endpoints_compatible #has_space #region-us
|
# Model Card for biobert-large-cased-v1.1-squad
# Model Details
## Model Description
More information needed
- Developed by: DMIS-lab (Data Mining and Information Systems Lab, Korea University)
- Shared by [Optional]: DMIS-lab (Data Mining and Information Systems Lab, Korea University)
- Model type: Question Answering
- Language(s) (NLP): More information needed
- License: More information needed
- Parent Model: BERT-large (cased)
- Resources for more information:
- GitHub Repo
- Associated Paper
# Uses
## Direct Use
This model can be used for the task of question answering.
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model creators note in the associated paper:
> We used the BERTBASE model pre-trained on English Wikipedia and BooksCorpus for 1M steps. BioBERT v1.0 (+ PubMed + PMC) is the version of BioBERT (+ PubMed + PMC) trained for 470K steps. When using both the PubMed and PMC corpora, we found that 200K and 270K pre-training steps were optimal for PubMed and PMC, respectively. We also used the ablated versions of BioBERT v1.0, which were pre-trained on only PubMed for 200K steps (BioBERT v1.0 (+ PubMed)) and PMC for 270K steps (BioBERT v1.0 (+ PMC))
## Training Procedure
### Preprocessing
The model creators note in the associated paper:
> We pre-trained BioBERT using Naver Smart Machine Learning (NSML) (Sung et al., 2017), which is utilized for large-scale experiments that need to be run on several GPUs
### Speeds, Sizes, Times
The model creators note in the associated paper:
> The maximum sequence length was fixed to 512 and the mini-batch size was set to 192, resulting in 98 304 words per iteration.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: More information needed
- Training: Eight NVIDIA V100 (32GB) GPUs
- Fine-tuning: a single NVIDIA Titan Xp (12GB) GPU to fine-tune BioBERT on each task
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed.
BibTeX:
# Glossary [optional]
More information needed
# More Information [optional]
For help or issues using BioBERT, please submit a GitHub issue. Please contact Jinhyuk Lee('URL (at) URL'), or Wonjin Yoon ('URL (at) URL') for communication related to BioBERT.
# Model Card Authors [optional]
DMIS-lab (Data Mining and Information Systems Lab, Korea University) in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
</details>
|
[
"# Model Card for biobert-large-cased-v1.1-squad",
"# Model Details",
"## Model Description\n \nMore information needed\n \n- Developed by: DMIS-lab (Data Mining and Information Systems Lab, Korea University)\n- Shared by [Optional]: DMIS-lab (Data Mining and Information Systems Lab, Korea University)\n\n- Model type: Question Answering\n- Language(s) (NLP): More information needed\n- License: More information needed\n- Parent Model: gpt-neo-2.7B\n- Resources for more information:\n \t- GitHub Repo\n \t - Associated Paper",
"# Uses",
"## Direct Use\nThis model can be used for the task of question answering.",
"## Downstream Use [Optional]\n \nMore information needed.",
"## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n \n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"# Training Details",
"## Training Data\n \nThe model creators note in the associated paper:\n> We used the BERTBASE model pre-trained on English Wikipedia and BooksCorpus for 1M steps. BioBERT v1.0 (þ PubMed þ PMC) is the version of BioBERT (þ PubMed þ PMC) trained for 470 K steps. When using both the PubMed and PMC corpora, we found that 200K and 270K pre-training steps were optimal for PubMed and PMC, respectively. We also used the ablated versions of BioBERT v1.0, which were pre-trained on only PubMed for 200K steps (BioBERT v1.0 (þ PubMed)) and PMC for 270K steps (BioBERT v1.0 (þ PMC))",
"## Training Procedure",
"### Preprocessing\n \n The model creators note in the associated paper:\n> We pre-trained BioBERT using Naver Smart Machine Learning (NSML) (Sung et al., 2017), which is utilized for large-scale experiments that need to be run on several GPUs",
"### Speeds, Sizes, Times\n \n The model creators note in the associated paper:\n> The maximum sequence length was fixed to 512 and the mini-batch size was set to 192, resulting in 98 304 words per iteration.",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nMore information needed",
"### Factors\nMore information needed",
"### Metrics\n \nMore information needed",
"## Results \n \nMore information needed",
"# Model Examination\n \nMore information needed",
"# Environmental Impact\n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed\n\t- Training: Eight NVIDIA V100 (32GB) GPUs [ for training], \n - Fine-tuning: a single NVIDIA Titan Xp (12GB) GPU to fine-tune BioBERT on each task\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n \nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \n \nMore information needed",
"### Software\n \nMore information needed.\n \nBibTeX:",
"# Glossary [optional]\n \nMore information needed",
"# More Information [optional]\n \nFor help or issues using BioBERT, please submit a GitHub issue. Please contact Jinhyuk Lee('URL (at) URL'), or Wonjin Yoon ('URL (at) URL') for communication related to BioBERT.",
"# Model Card Authors [optional]\n \n DMIS-lab (Data Mining and Information Systems Lab, Korea University) in collaboration with Ezi Ozoani and the Hugging Face team",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n</details>"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #question-answering #arxiv-1901.08746 #arxiv-1910.09700 #endpoints_compatible #has_space #region-us \n",
"# Model Card for biobert-large-cased-v1.1-squad",
"# Model Details",
"## Model Description\n \nMore information needed\n \n- Developed by: DMIS-lab (Data Mining and Information Systems Lab, Korea University)\n- Shared by [Optional]: DMIS-lab (Data Mining and Information Systems Lab, Korea University)\n\n- Model type: Question Answering\n- Language(s) (NLP): More information needed\n- License: More information needed\n- Parent Model: gpt-neo-2.7B\n- Resources for more information:\n \t- GitHub Repo\n \t - Associated Paper",
"# Uses",
"## Direct Use\nThis model can be used for the task of question answering.",
"## Downstream Use [Optional]\n \nMore information needed.",
"## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n \n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"# Training Details",
"## Training Data\n \nThe model creators note in the associated paper:\n> We used the BERTBASE model pre-trained on English Wikipedia and BooksCorpus for 1M steps. BioBERT v1.0 (þ PubMed þ PMC) is the version of BioBERT (þ PubMed þ PMC) trained for 470 K steps. When using both the PubMed and PMC corpora, we found that 200K and 270K pre-training steps were optimal for PubMed and PMC, respectively. We also used the ablated versions of BioBERT v1.0, which were pre-trained on only PubMed for 200K steps (BioBERT v1.0 (þ PubMed)) and PMC for 270K steps (BioBERT v1.0 (þ PMC))",
"## Training Procedure",
"### Preprocessing\n \n The model creators note in the associated paper:\n> We pre-trained BioBERT using Naver Smart Machine Learning (NSML) (Sung et al., 2017), which is utilized for large-scale experiments that need to be run on several GPUs",
"### Speeds, Sizes, Times\n \n The model creators note in the associated paper:\n> The maximum sequence length was fixed to 512 and the mini-batch size was set to 192, resulting in 98 304 words per iteration.",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nMore information needed",
"### Factors\nMore information needed",
"### Metrics\n \nMore information needed",
"## Results \n \nMore information needed",
"# Model Examination\n \nMore information needed",
"# Environmental Impact\n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: More information needed\n\t- Training: Eight NVIDIA V100 (32GB) GPUs [ for training], \n - Fine-tuning: a single NVIDIA Titan Xp (12GB) GPU to fine-tune BioBERT on each task\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n \nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \n \nMore information needed",
"### Software\n \nMore information needed.\n \nBibTeX:",
"# Glossary [optional]\n \nMore information needed",
"# More Information [optional]\n \nFor help or issues using BioBERT, please submit a GitHub issue. Please contact Jinhyuk Lee('URL (at) URL'), or Wonjin Yoon ('URL (at) URL') for communication related to BioBERT.",
"# Model Card Authors [optional]\n \n DMIS-lab (Data Mining and Information Systems Lab, Korea University) in collaboration with Ezi Ozoani and the Hugging Face team",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n</details>"
] |
feature-extraction
|
transformers
|
hello
|
{}
|
dmis-lab/biosyn-biobert-bc2gn
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us
|
hello
|
[] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us \n"
] |
feature-extraction
|
transformers
|
hello
|
{}
|
dmis-lab/biosyn-sapbert-bc2gn
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us
|
hello
|
[] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us \n"
] |
feature-extraction
|
transformers
|
# Model Card for biosyn-sapbert-ncbi-disease
# Model Details
## Model Description
More information needed
- **Developed by:** Dmis-lab (Data Mining and Information Systems Lab, Korea University)
- **Shared by [Optional]:** Hugging Face
- **Model type:** Feature Extraction
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Related Models:**
- **Parent Model:** BERT
- **Resources for more information:**
- [GitHub Repo](https://github.com/jhyuklee/biobert)
- [Associated Paper](https://arxiv.org/abs/1901.08746)
# Uses
## Direct Use
This model can be used for the task of Feature Extraction
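For example, the sketch below embeds two disease mentions with the `[CLS]` vector and compares them; the mention strings are made up, and the full BioSyn entity-normalization pipeline does more than this single similarity score.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biosyn-sapbert-ncbi-disease")
model = AutoModel.from_pretrained("dmis-lab/biosyn-sapbert-ncbi-disease")

mentions = ["breast carcinoma", "carcinoma of the breast"]  # made-up example mentions
inputs = tokenizer(mentions, padding=True, return_tensors="pt")
with torch.no_grad():
    cls_embeddings = model(**inputs).last_hidden_state[:, 0]  # [CLS] vectors

similarity = torch.nn.functional.cosine_similarity(
    cls_embeddings[0], cls_embeddings[1], dim=0)
print(f"cosine similarity: {similarity.item():.3f}")
```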
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model creators note in the [associated paper](https://arxiv.org/pdf/1901.08746.pdf)
> We used the BERTBASE model pre-trained on English Wikipedia and BooksCorpus for 1M steps. BioBERT v1.0 (+ PubMed + PMC) is the version of BioBERT (+ PubMed + PMC) trained for 470K steps. When using both the PubMed and PMC corpora, we found that 200K and 270K pre-training steps were optimal for PubMed and PMC, respectively. We also used the ablated versions of BioBERT v1.0, which were pre-trained on only PubMed for 200K steps (BioBERT v1.0 (+ PubMed)) and PMC for 270K steps (BioBERT v1.0 (+ PMC))
## Training Procedure
### Preprocessing
The model creators note in the [associated paper](https://arxiv.org/pdf/1901.08746.pdf)
> We pre-trained BioBERT using Naver Smart Machine Learning (NSML) (Sung et al., 2017), which is utilized for large-scale experiments that need to be run on several GPUs
### Speeds, Sizes, Times
The model creators note in the [associated paper](https://arxiv.org/pdf/1901.08746.pdf)
> The maximum sequence length was fixed to 512 and the mini-batch size was set to 192, resulting in 98 304 words per iteration.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:**
- **Training:** Eight NVIDIA V100 (32GB) GPUs
- **Fine-tuning:** a single NVIDIA Titan Xp (12GB) GPU to fine-tune BioBERT on each task
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```
@article{lee2019biobert,
title={BioBERT: a pre-trained biomedical language representation model for biomedical text mining},
author={Lee, Jinhyuk and Yoon, Wonjin and Kim, Sungdong and Kim, Donghyeon and Kim, Sunkyu and So, Chan Ho and Kang, Jaewoo},
journal={arXiv preprint arXiv:1901.08746},
year={2019}
}
```
# Glossary [optional]
More information needed
# More Information [optional]
For help or issues using BioBERT, please submit a GitHub issue. Please contact Jinhyuk Lee (`lee.jnhk (at) gmail.com`) or Wonjin Yoon (`wonjin.info (at) gmail.com`) for communication related to BioBERT.
# Model Card Authors [optional]
Dmis-lab (Data Mining and Information Systems Lab, Korea University) in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biosyn-sapbert-ncbi-disease")
model = AutoModel.from_pretrained("dmis-lab/biosyn-sapbert-ncbi-disease")
```
</details>
|
{"tags": ["bert"]}
|
dmis-lab/biosyn-sapbert-ncbi-disease
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:1901.08746",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1901.08746",
"1910.09700"
] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #arxiv-1901.08746 #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for biosyn-sapbert-ncbi-disease
# Model Details
## Model Description
More information needed
- Developed by: Dmis-lab (Data Mining and Information Systems Lab, Korea University)
- Shared by [Optional]: Hugging Face
- Model type: Feature Extraction
- Language(s) (NLP): More information needed
- License: More information needed
- Related Models:
- Parent Model: BERT
- Resources for more information:
- GitHub Repo
- Associated Paper
# Uses
## Direct Use
This model can be used for the task of Feature Extraction
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model creators note in the associated paper
> We used the BERTBASE model pre-trained on English Wikipedia and BooksCorpus for 1M steps. BioBERT v1.0 (+ PubMed + PMC) is the version of BioBERT (+ PubMed + PMC) trained for 470K steps. When using both the PubMed and PMC corpora, we found that 200K and 270K pre-training steps were optimal for PubMed and PMC, respectively. We also used the ablated versions of BioBERT v1.0, which were pre-trained on only PubMed for 200K steps (BioBERT v1.0 (+ PubMed)) and PMC for 270K steps (BioBERT v1.0 (+ PMC))
## Training Procedure
### Preprocessing
The model creators note in the associated paper
> We pre-trained BioBERT using Naver Smart Machine Learning (NSML) (Sung et al., 2017), which is utilized for large-scale experiments that need to be run on several GPUs
### Speeds, Sizes, Times
The model creators note in the associated paper
> The maximum sequence length was fixed to 512 and the mini-batch size was set to 192, resulting in 98 304 words per iteration.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Training: Eight NVIDIA V100 (32GB) GPUs
- Fine-tuning: a single NVIDIA Titan Xp (12GB) GPU to fine-tune BioBERT on each task
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
BibTeX:
# Glossary [optional]
More information needed
# More Information [optional]
For help or issues using BioBERT, please submit a GitHub issue. Please contact Jinhyuk Lee('URL (at) URL'), or Wonjin Yoon ('URL (at) URL') for communication related to BioBERT.
# Model Card Authors [optional]
Dmis-lab (Data Mining and Information Systems Lab, Korea University) in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
</details>
|
[
"# Model Card for biosyn-sapbert-ncbi-disease",
"# Model Details",
"## Model Description\n \nMore information needed\n \n- Developed by: Dmis-lab (Data Mining and Information Systems Lab, Korea University)\n- Shared by [Optional]: Hugging Face\n- Model type: Feature Extraction\n- Language(s) (NLP): More information needed\n- License: More information needed\n- Related Models: \n - Parent Model: BERT\n- Resources for more information: \n - GitHub Repo\n - Associated Paper",
"# Uses",
"## Direct Use\n \nThis model can be used for the task of Feature Extraction",
"## Downstream Use [Optional]\n \nMore information needed",
"## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"# Training Details",
"## Training Data\nThe model creators note in the associated paper\n> We used the BERTBASE model pre-trained on English Wikipedia and BooksCorpus for 1M steps. BioBERT v1.0 (þ PubMed þ PMC) is the version of BioBERT (þ PubMed þ PMC) trained for 470 K steps. When using both the PubMed and PMC corpora, we found that 200K and 270K pre-training steps were optimal for PubMed and PMC, respectively. We also used the ablated versions of BioBERT v1.0, which were pre-trained on only PubMed for 200K steps (BioBERT v1.0 (þ PubMed)) and PMC for 270K steps (BioBERT v1.0 (þ PMC))",
"## Training Procedure",
"### Preprocessing\n The model creators note in the associated paper\n> We pre-trained BioBERT using Naver Smart Machine Learning (NSML) (Sung et al., 2017), which is utilized for large-scale experiments that need to be run on several GPUs",
"### Speeds, Sizes, Times\n The model creators note in the associated paper\n> The maximum sequence length was fixed to 512 and the mini-batch size was set to 192, resulting in 98 304 words per iteration.",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nMore information needed",
"### Factors\n \nMore information needed",
"### Metrics\n \n \n \nMore information needed",
"## Results \nMore information needed",
"# Model Examination\n \nMore information needed",
"# Environmental Impact\n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: \n- Training: Eight NVIDIA V100 (32GB) GPUs [ for training], \n- Fine-tuning: a single NVIDIA Titan Xp (12GB) GPU to fine-tune BioBERT on each task\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n \nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \nMore information needed",
"### Software\n \nMore information needed\n \nBibTeX:",
"# Glossary [optional]\n \nMore information needed",
"# More Information [optional]\n \nFor help or issues using BioBERT, please submit a GitHub issue. Please contact Jinhyuk Lee('URL (at) URL'), or Wonjin Yoon ('URL (at) URL') for communication related to BioBERT.",
"# Model Card Authors [optional]\n \n \n Dmis-lab (Data Mining and Information Systems Lab, Korea University) in collaboration with Ezi Ozoani and the Hugging Face team",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n</details>"
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-1901.08746 #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for biosyn-sapbert-ncbi-disease",
"# Model Details",
"## Model Description\n \nMore information needed\n \n- Developed by: Dmis-lab (Data Mining and Information Systems Lab, Korea University)\n- Shared by [Optional]: Hugging Face\n- Model type: Feature Extraction\n- Language(s) (NLP): More information needed\n- License: More information needed\n- Related Models: \n - Parent Model: BERT\n- Resources for more information: \n - GitHub Repo\n - Associated Paper",
"# Uses",
"## Direct Use\n \nThis model can be used for the task of Feature Extraction",
"## Downstream Use [Optional]\n \nMore information needed",
"## Out-of-Scope Use\n \nThe model should not be used to intentionally create hostile or alienating environments for people.",
"# Bias, Risks, and Limitations\n \nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.",
"## Recommendations\n \nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"# Training Details",
"## Training Data\nThe model creators note in the associated paper\n> We used the BERTBASE model pre-trained on English Wikipedia and BooksCorpus for 1M steps. BioBERT v1.0 (þ PubMed þ PMC) is the version of BioBERT (þ PubMed þ PMC) trained for 470 K steps. When using both the PubMed and PMC corpora, we found that 200K and 270K pre-training steps were optimal for PubMed and PMC, respectively. We also used the ablated versions of BioBERT v1.0, which were pre-trained on only PubMed for 200K steps (BioBERT v1.0 (þ PubMed)) and PMC for 270K steps (BioBERT v1.0 (þ PMC))",
"## Training Procedure",
"### Preprocessing\n The model creators note in the associated paper\n> We pre-trained BioBERT using Naver Smart Machine Learning (NSML) (Sung et al., 2017), which is utilized for large-scale experiments that need to be run on several GPUs",
"### Speeds, Sizes, Times\n The model creators note in the associated paper\n> The maximum sequence length was fixed to 512 and the mini-batch size was set to 192, resulting in 98 304 words per iteration.",
"# Evaluation",
"## Testing Data, Factors & Metrics",
"### Testing Data\n \nMore information needed",
"### Factors\n \nMore information needed",
"### Metrics\n \n \n \nMore information needed",
"## Results \nMore information needed",
"# Model Examination\n \nMore information needed",
"# Environmental Impact\n \nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n \n- Hardware Type: \n- Training: Eight NVIDIA V100 (32GB) GPUs [ for training], \n- Fine-tuning: a single NVIDIA Titan Xp (12GB) GPU to fine-tune BioBERT on each task\n- Hours used: More information needed\n- Cloud Provider: More information needed\n- Compute Region: More information needed\n- Carbon Emitted: More information needed",
"# Technical Specifications [optional]",
"## Model Architecture and Objective\n \nMore information needed",
"## Compute Infrastructure\n \nMore information needed",
"### Hardware\n \nMore information needed",
"### Software\n \nMore information needed\n \nBibTeX:",
"# Glossary [optional]\n \nMore information needed",
"# More Information [optional]\n \nFor help or issues using BioBERT, please submit a GitHub issue. Please contact Jinhyuk Lee('URL (at) URL'), or Wonjin Yoon ('URL (at) URL') for communication related to BioBERT.",
"# Model Card Authors [optional]\n \n \n Dmis-lab (Data Mining and Information Systems Lab, Korea University) in collaboration with Ezi Ozoani and the Hugging Face team",
"# Model Card Contact\n \nMore information needed",
"# How to Get Started with the Model\n \nUse the code below to get started with the model.\n \n<details>\n<summary> Click to expand </summary>\n\n\n</details>"
] |
summarization
|
transformers
|
# rubert_ria_headlines
## Description
*bert2bert* model, initialized with the `DeepPavlov/rubert-base-cased` pretrained weights and
fine-tuned on the first 99% of ["Rossiya Segodnya" news dataset](https://github.com/RossiyaSegodnya/ria_news_dataset) for 2 epochs.
## Usage example
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
MODEL_NAME = "dmitry-vorobiev/rubert_ria_headlines"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
text = "Скопируйте текст статьи / новости"
encoded_batch = tokenizer.prepare_seq2seq_batch(
[text],
return_tensors="pt",
padding="max_length",
truncation=True,
max_length=512)
output_ids = model.generate(
input_ids=encoded_batch["input_ids"],
max_length=36,
no_repeat_ngram_size=3,
num_beams=5,
top_k=0
)
headline = tokenizer.decode(output_ids[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=False)
print(headline)
```
## Datasets
- [ria_news](https://github.com/RossiyaSegodnya/ria_news_dataset)
## How it was trained?
I used free TPUv3 on kaggle. The model was trained for 3 epochs with effective batch size 192 and soft restarts (warmup steps 1500 / 500 / 500 with new optimizer state on each epoch start).
- [1 epoch notebook](https://www.kaggle.com/dvorobiev/try-train-seq2seq-ria-tpu?scriptVersionId=53254694)
- [2 epoch notebook](https://www.kaggle.com/dvorobiev/try-train-seq2seq-ria-tpu?scriptVersionId=53269040)
- [3 epoch notebook](https://www.kaggle.com/dvorobiev/try-train-seq2seq-ria-tpu?scriptVersionId=53280797)
Common train params:
```shell
export XLA_USE_BF16=1
export XLA_TENSOR_ALLOCATOR_MAXSIZE=100000000
python nlp_headline_rus/src/train_seq2seq.py \
--do_train \
--tie_encoder_decoder \
--max_source_length 512 \
--max_target_length 32 \
--val_max_target_length 48 \
--tpu_num_cores 8 \
--per_device_train_batch_size 24 \
--gradient_accumulation_steps 1 \
--learning_rate 5e-4 \
--adam_epsilon 1e-6 \
--weight_decay 1e-5 \
```
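The soft restarts mentioned above amount to rebuilding the optimizer and the warmup scheduler at the start of every epoch. A simplified sketch of that schedule is given below; the per-epoch step count is a placeholder, the forward/backward pass is omitted, and the actual run used the TPU training script linked above.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, get_linear_schedule_with_warmup

model = AutoModelForSeq2SeqLM.from_pretrained("dmitry-vorobiev/rubert_ria_headlines")

warmup_per_epoch = [1500, 500, 500]   # one soft restart per epoch
steps_per_epoch = 10_000              # placeholder; depends on data and batch size

for warmup_steps in warmup_per_epoch:
    # fresh optimizer state at each epoch start ("soft restart")
    optimizer = torch.optim.AdamW(
        model.parameters(), lr=5e-4, eps=1e-6, weight_decay=1e-5)
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=warmup_steps, num_training_steps=steps_per_epoch)
    for _ in range(steps_per_epoch):
        # ... forward pass and loss.backward() go here ...
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```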
## Validation results
- Using [last 1% of ria](https://drive.google.com/drive/folders/1ztAeyb1BiLMgXwOgOJS7WMR4PGiI1q92) dataset
- Using [gazeta_ru test](https://drive.google.com/drive/folders/1CyowuRpecsLTcDbqEfmAvkCWOod58g_e) split
- Using [gazeta_ru val](https://drive.google.com/drive/folders/1XZFOXHSXLKdhzm61ceVLw3aautrdskIu) split
|
{"language": ["ru"], "license": "mit", "tags": ["summarization", "bert", "rubert"]}
|
dmitry-vorobiev/rubert_ria_headlines
| null |
[
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"summarization",
"bert",
"rubert",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ru"
] |
TAGS
#transformers #pytorch #safetensors #encoder-decoder #text2text-generation #summarization #bert #rubert #ru #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# rubert_ria_headlines
## Description
*bert2bert* model, initialized with the 'DeepPavlov/rubert-base-cased' pretrained weights and
fine-tuned on the first 99% of "Rossiya Segodnya" news dataset for 2 epochs.
## Usage example
## Datasets
- ria_news
## How it was trained?
I used free TPUv3 on kaggle. The model was trained for 3 epochs with effective batch size 192 and soft restarts (warmup steps 1500 / 500 / 500 with new optimizer state on each epoch start).
- 1 epoch notebook
- 2 epoch notebook
- 3 epoch notebook
Common train params:
## Validation results
- Using last 1% of ria dataset
- Using gazeta_ru test split
- Using gazeta_ru val split
|
[
"# rubert_ria_headlines",
"## Description\n*bert2bert* model, initialized with the 'DeepPavlov/rubert-base-cased' pretrained weights and \n fine-tuned on the first 99% of \"Rossiya Segodnya\" news dataset for 2 epochs.",
"## Usage example",
"## Datasets\n- ria_news",
"## How it was trained?\n\nI used free TPUv3 on kaggle. The model was trained for 3 epochs with effective batch size 192 and soft restarts (warmup steps 1500 / 500 / 500 with new optimizer state on each epoch start).\n\n- 1 epoch notebook\n- 2 epoch notebook\n- 3 epoch notebook\n\nCommon train params:",
"## Validation results\n\n- Using last 1% of ria dataset\n- Using gazeta_ru test split\n- Using gazeta_ru val split"
] |
[
"TAGS\n#transformers #pytorch #safetensors #encoder-decoder #text2text-generation #summarization #bert #rubert #ru #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# rubert_ria_headlines",
"## Description\n*bert2bert* model, initialized with the 'DeepPavlov/rubert-base-cased' pretrained weights and \n fine-tuned on the first 99% of \"Rossiya Segodnya\" news dataset for 2 epochs.",
"## Usage example",
"## Datasets\n- ria_news",
"## How it was trained?\n\nI used free TPUv3 on kaggle. The model was trained for 3 epochs with effective batch size 192 and soft restarts (warmup steps 1500 / 500 / 500 with new optimizer state on each epoch start).\n\n- 1 epoch notebook\n- 2 epoch notebook\n- 3 epoch notebook\n\nCommon train params:",
"## Validation results\n\n- Using last 1% of ria dataset\n- Using gazeta_ru test split\n- Using gazeta_ru val split"
] |
text2text-generation
|
transformers
|
# doc2query/S2ORC-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/S2ORC-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
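To illustrate the document-expansion use case from above, the sketch below (reusing `model`, `tokenizer`, and `text` from the previous snippet) appends the sampled queries to each passage before indexing; the BM25 backend itself is omitted, and the corpus is a single stand-in paragraph.

```python
# Document expansion sketch: concatenate each passage with its generated queries.
corpus = [text]  # stand-in corpus; 'text', 'model', 'tokenizer' come from the snippet above
expanded_docs = []
for doc_id, passage in enumerate(corpus):
    input_ids = tokenizer.encode(passage, max_length=320, truncation=True,
                                 return_tensors='pt')
    outputs = model.generate(input_ids=input_ids, max_length=64, do_sample=True,
                             top_p=0.95, num_return_sequences=20)
    queries = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    # index this expanded text with BM25 (Elasticsearch, OpenSearch, Lucene, ...)
    expanded_docs.append({"id": doc_id, "contents": passage + " " + " ".join(queries)})
```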
## Training
This model was obtained by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 156k training steps. For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, abstract) pairs from [S2ORC](https://github.com/allenai/s2orc).
|
{"language": "en", "license": "apache-2.0", "datasets": ["S2ORC"], "widget": [{"text": "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."}]}
|
doc2query/S2ORC-t5-base-v1
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:S2ORC",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.08375",
"2104.08663"
] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #dataset-S2ORC #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# doc2query/S2ORC-t5-base-v1
This is a doc2query model based on T5 (also known as docT5query).
It can be used for:
- Document expansion: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example of how to use docT5query with Pyserini.
- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
Note: 'model.generate()' is non-deterministic. It produces different queries each time you run it.
## Training
This model was obtained by fine-tuning google/t5-v1_1-base for 156k training steps. For the training script, see the 'train_script.py' in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, abstract) pairs from S2ORC.
|
[
"# doc2query/S2ORC-t5-base-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-base for 156k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (title, abstract) pairs from S2ORC."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #dataset-S2ORC #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# doc2query/S2ORC-t5-base-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-base for 156k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (title, abstract) pairs from S2ORC."
] |
text2text-generation
|
transformers
|
# doc2query/all-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, this expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/all-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=384, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
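As a sketch of the training-data-generation use case (again reusing `model` and `tokenizer` from the snippet above; the unlabeled corpus is a stand-in for your own texts):

```python
# Build synthetic (query, passage) pairs for training a dense embedding model.
unlabeled_corpus = [
    "Python is an interpreted, high-level and general-purpose programming language.",
]
train_pairs = []
for passage in unlabeled_corpus:
    input_ids = tokenizer.encode(passage, max_length=384, truncation=True,
                                 return_tensors='pt')
    outputs = model.generate(input_ids=input_ids, max_length=64, do_sample=True,
                             top_p=0.95, num_return_sequences=3)
    for output in outputs:
        query = tokenizer.decode(output, skip_special_tokens=True)
        train_pairs.append((query, passage))
print(train_pairs[:3])
```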
## Training
This model was obtained by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 570k training steps. For the training script, see the `train_script.py` in this repository.
The input text was truncated to 384 word pieces. Output text was generated up to 64 word pieces.
This model was trained on a large collection of datasets. For the exact dataset names and weights, see the `data_config.json` in this repository. Most of the datasets are available at [https://huggingface.co/sentence-transformers](https://huggingface.co/sentence-transformers).
The datasets include, among others:
- (title, body) pairs from [Reddit](https://huggingface.co/datasets/sentence-transformers/reddit-title-body)
- (title, body) pairs and (title, answer) pairs from StackExchange and Yahoo Answers!
- (title, review) pairs from Amazon reviews
- (query, paragraph) pairs from MS MARCO, NQ, and GooAQ
- (question, duplicate_question) from Quora and WikiAnswers
- (title, abstract) pairs from S2ORC
## Prefix
This model was trained **without a prefix**. In contrast to [doc2query/all-with_prefix-t5-base-v1](https://huggingface.co/doc2query/all-with_prefix-t5-base-v1), you cannot specify what type of transformation (answer2question, review2title, etc.) you would like to have. This can lead to a mixture of output types.
|
{"language": "en", "license": "apache-2.0", "datasets": ["sentence-transformers/reddit-title-body", "sentence-transformers/embedding-training-data"], "widget": [{"text": "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."}]}
|
doc2query/all-t5-base-v1
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:sentence-transformers/reddit-title-body",
"dataset:sentence-transformers/embedding-training-data",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.08375",
"2104.08663"
] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #dataset-sentence-transformers/reddit-title-body #dataset-sentence-transformers/embedding-training-data #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# doc2query/all-t5-base-v1
This is a doc2query model based on T5 (also known as docT5query).
It can be used for:
- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.
- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
Note: 'model.generate()' is non-deterministic. It produces different queries each time you run it.
## Training
This model fine-tuned google/t5-v1_1-base for 570k training steps. For the training script, see the 'train_script.py' in this repository.
The input-text was truncated to 384 word pieces. Output text was generated up to 64 word pieces.
This model was trained on a large collection of datasets. For the exact datasets names and weights see the 'data_config.json' in this repository. Most of the datasets are available at URL
The datasets include besides others:
- (title, body) pairs from Reddit
- (title, body) pairs and (title, answer) pairs from StackExchange and Yahoo Answers!
- (title, review) pairs from Amazon reviews
- (query, paragraph) pairs from MS MARCO, NQ, and GooAQ
- (question, duplicate_question) from Quora and WikiAnswers
- (title, abstract) pairs from S2ORC
## Prefix
This model was trained without a prefix. In contrast to doc2query/all-with_prefix-t5-base-v1 you cannot specify what type of transformation (answer2question, review2title) etc. you will have. This can lead to a mixture of output values.
|
[
"# doc2query/all-t5-base-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-base for 570k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 384 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a large collection of datasets. For the exact datasets names and weights see the 'data_config.json' in this repository. Most of the datasets are available at URL\r\n\r\nThe datasets include besides others:\r\n- (title, body) pairs from Reddit\r\n- (title, body) pairs and (title, answer) pairs from StackExchange and Yahoo Answers!\r\n- (title, review) pairs from Amazon reviews\r\n- (query, paragraph) pairs from MS MARCO, NQ, and GooAQ \r\n- (question, duplicate_question) from Quora and WikiAnswers\r\n- (title, abstract) pairs from S2ORC",
"## Prefix\r\n\r\nThis model was trained without a prefix. In contrast to doc2query/all-with_prefix-t5-base-v1 you cannot specify what type of transformation (answer2question, review2title) etc. you will have. This can lead to a mixture of output values."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #dataset-sentence-transformers/reddit-title-body #dataset-sentence-transformers/embedding-training-data #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# doc2query/all-t5-base-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-base for 570k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 384 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a large collection of datasets. For the exact datasets names and weights see the 'data_config.json' in this repository. Most of the datasets are available at URL\r\n\r\nThe datasets include besides others:\r\n- (title, body) pairs from Reddit\r\n- (title, body) pairs and (title, answer) pairs from StackExchange and Yahoo Answers!\r\n- (title, review) pairs from Amazon reviews\r\n- (query, paragraph) pairs from MS MARCO, NQ, and GooAQ \r\n- (question, duplicate_question) from Quora and WikiAnswers\r\n- (title, abstract) pairs from S2ORC",
"## Prefix\r\n\r\nThis model was trained without a prefix. In contrast to doc2query/all-with_prefix-t5-base-v1 you cannot specify what type of transformation (answer2question, review2title) etc. you will have. This can lead to a mixture of output values."
] |
text2text-generation
|
transformers
|
# doc2query/all-with_prefix-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, document expansion re-weights terms, giving important words a higher weight even if they appear only seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/all-with_prefix-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
prefix = "answer2question"
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
text = prefix+": "+text
input_ids = tokenizer.encode(text, max_length=384, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 575k training steps. For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 384 word pieces. Output text was generated up to 64 word pieces.
This model was trained on a large collection of datasets. For the exact datasets names and weights see the `data_config.json` in this repository. Most of the datasets are available at [https://huggingface.co/sentence-transformers](https://huggingface.co/sentence-transformers).
The datasets include besides others:
- (title, body) pairs from [Reddit](https://huggingface.co/datasets/sentence-transformers/reddit-title-body)
- (title, body) pairs and (title, answer) pairs from StackExchange and Yahoo Answers!
- (title, review) pairs from Amazon reviews
- (query, paragraph) pairs from MS MARCO, NQ, and GooAQ
- (question, duplicate_question) from Quora and WikiAnswers
- (title, abstract) pairs from S2ORC
## Prefix
This model was trained **with a prefix**: you start the text with a specific prefix that defines what type of output text you would like to receive. Depending on the prefix, the output is different.
E.g. the above text about Python produces the following output:
| Prefix | Output |
| --- | --- |
| answer2question | Why should I use python in my business? ; What is the difference between Python and.NET? ; what is the python design philosophy? |
| review2title | Python a powerful and useful language ; A new and improved programming language ; Object-oriented, practical and accessibl |
| abstract2title | Python: A Software Development Platform ; A Research Guide for Python X: Conceptual Approach to Programming ; Python : Language and Approach |
| text2query | is python a low level language? ; what is the primary idea of python? ; is python a programming language? |
These are all the available prefixes:
- text2reddit
- question2title
- answer2question
- abstract2title
- review2title
- news2title
- text2query
- question2question
For the datasets and weights for the different prefixes, see `data_config.json` in this repository. A minimal sketch of cycling through several prefixes is shown below.
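As a small illustration (using the same generation settings as the Usage example above, with a placeholder input text), one can loop over several prefixes for the same paragraph and compare the outputs:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'doc2query/all-with_prefix-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

text = "Python is an interpreted, high-level and general-purpose programming language."

for prefix in ["answer2question", "review2title", "abstract2title", "text2query"]:
    input_ids = tokenizer.encode(f"{prefix}: {text}", max_length=384,
                                 truncation=True, return_tensors='pt')
    outputs = model.generate(input_ids=input_ids, max_length=64,
                             do_sample=True, top_p=0.95, num_return_sequences=3)
    print(f"\n{prefix}:")
    for out in outputs:
        print(" -", tokenizer.decode(out, skip_special_tokens=True))
```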
|
{"language": "en", "license": "apache-2.0", "datasets": ["sentence-transformers/reddit-title-body", "sentence-transformers/embedding-training-data"], "widget": [{"text": "text2reddit: Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."}]}
|
doc2query/all-with_prefix-t5-base-v1
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:sentence-transformers/reddit-title-body",
"dataset:sentence-transformers/embedding-training-data",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.08375",
"2104.08663"
] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #dataset-sentence-transformers/reddit-title-body #dataset-sentence-transformers/embedding-training-data #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
doc2query/all-with\_prefix-t5-base-v1
=====================================
This is a doc2query model based on T5 (also known as docT5query).
It can be used for:
* Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.
* Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
Usage
-----
Note: 'model.generate()' is non-deterministic. It produces different queries each time you run it.
Training
--------
This model fine-tuned google/t5-v1\_1-base for 575k training steps. For the training script, see the 'train\_script.py' in this repository.
The input-text was truncated to 384 word pieces. Output text was generated up to 64 word pieces.
This model was trained on a large collection of datasets. For the exact datasets names and weights see the 'data\_config.json' in this repository. Most of the datasets are available at URL
The datasets include besides others:
* (title, body) pairs from Reddit
* (title, body) pairs and (title, answer) pairs from StackExchange and Yahoo Answers!
* (title, review) pairs from Amazon reviews
* (query, paragraph) pairs from MS MARCO, NQ, and GooAQ
* (question, duplicate\_question) from Quora and WikiAnswers
* (title, abstract) pairs from S2ORC
Prefix
------
This model was trained with a prefix: You start the text with a specific index that defines what type out output text you would like to receive. Depending on the prefix, the output is different.
E.g. the above text about Python produces the following output:
These are all available pre-fixes:
* text2reddit
* question2title
* answer2question
* abstract2title
* review2title
* news2title
* text2query
* question2question
For the datasets and weights for the different pre-fixes see 'data\_config.json' in this repository.
|
[] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #dataset-sentence-transformers/reddit-title-body #dataset-sentence-transformers/embedding-training-data #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# doc2query/msmarco-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, document expansion re-weights terms, giving important words a higher weight even if they appear only seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models. A minimal pair-generation sketch follows after this list.
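The sketch below illustrates the pair-generation idea with placeholder passages and a hypothetical output file name; the full GenQ recipe (including training a dense model on the generated pairs) is described on SBERT.net.
```python
import json
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'doc2query/msmarco-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Your own unlabeled corpus goes here (placeholder examples)
passages = [
    "Python is an interpreted, high-level and general-purpose programming language.",
    "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris.",
]

# Write (query, passage) pairs as JSONL for later embedding-model training
with open("generated_pairs.jsonl", "w") as f_out:
    for passage in passages:
        input_ids = tokenizer.encode(passage, max_length=320, truncation=True, return_tensors='pt')
        outputs = model.generate(input_ids=input_ids, max_length=64,
                                 do_sample=True, top_p=0.95, num_return_sequences=3)
        for out in outputs:
            query = tokenizer.decode(out, skip_special_tokens=True)
            f_out.write(json.dumps({"query": query, "passage": passage}) + "\n")
```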
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/msmarco-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 31k training steps (about 4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [MS MARCO Passage-Ranking dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking).
|
{"language": "en", "license": "apache-2.0", "datasets": ["sentence-transformers/embedding-training-data"], "widget": [{"text": "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."}]}
|
doc2query/msmarco-t5-base-v1
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:sentence-transformers/embedding-training-data",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.08375",
"2104.08663"
] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #dataset-sentence-transformers/embedding-training-data #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# doc2query/msmarco-t5-base-v1
This is a doc2query model based on T5 (also known as docT5query).
It can be used for:
- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.
- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
Note: 'model.generate()' is non-deterministic. It produces different queries each time you run it.
## Training
This model fine-tuned google/t5-v1_1-base for 31k training steps (about 4 epochs on the 500k training pairs from MS MARCO). For the training script, see the 'train_script.py' in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on a (query, passage) from the MS MARCO Passage-Ranking dataset.
|
[
"# doc2query/msmarco-t5-base-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-base for 31k training steps (about 4 epochs on the 500k training pairs from MS MARCO). For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (query, passage) from the MS MARCO Passage-Ranking dataset."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #dataset-sentence-transformers/embedding-training-data #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# doc2query/msmarco-t5-base-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-base for 31k training steps (about 4 epochs on the 500k training pairs from MS MARCO). For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (query, passage) from the MS MARCO Passage-Ranking dataset."
] |
text2text-generation
|
transformers
|
# doc2query/msmarco-t5-small-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, document expansion re-weights terms, giving important words a higher weight even if they appear only seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/msmarco-t5-small-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
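For larger collections it is usually worth batching inputs and running generation on a GPU when one is available. The following is a rough sketch (the paragraph list is a placeholder), not part of the original card:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'doc2query/msmarco-t5-small-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)
model.eval()

paragraphs = [
    "Python is an interpreted, high-level and general-purpose programming language.",
    "The Great Barrier Reef is the world's largest coral reef system.",
]

batch = tokenizer(paragraphs, max_length=320, truncation=True,
                  padding=True, return_tensors='pt').to(device)
with torch.no_grad():
    outputs = model.generate(**batch, max_length=64,
                             do_sample=True, top_p=0.95, num_return_sequences=5)

# Outputs are grouped per input: 5 queries for paragraph 0, then 5 for paragraph 1
queries = tokenizer.batch_decode(outputs, skip_special_tokens=True)
for i, paragraph in enumerate(paragraphs):
    print(paragraph[:60], '->', queries[i * 5:(i + 1) * 5])
```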
## Training
This model was created by fine-tuning [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) for 31k training steps (about 4 epochs on the 500k training pairs from MS MARCO). For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (query, passage) pairs from the [MS MARCO Passage-Ranking dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking).
|
{"language": "en", "license": "apache-2.0", "datasets": ["sentence-transformers/embedding-training-data"], "widget": [{"text": "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."}]}
|
doc2query/msmarco-t5-small-v1
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:sentence-transformers/embedding-training-data",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.08375",
"2104.08663"
] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #dataset-sentence-transformers/embedding-training-data #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# doc2query/msmarco-t5-small-v1
This is a doc2query model based on T5 (also known as docT5query).
It can be used for:
- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.
- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
Note: 'model.generate()' is non-deterministic. It produces different queries each time you run it.
## Training
This model fine-tuned google/t5-v1_1-small for 31k training steps (about 4 epochs on the 500k training pairs from MS MARCO). For the training script, see the 'train_script.py' in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on a (query, passage) from the MS MARCO Passage-Ranking dataset.
|
[
"# doc2query/msmarco-t5-small-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-small for 31k training steps (about 4 epochs on the 500k training pairs from MS MARCO). For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (query, passage) from the MS MARCO Passage-Ranking dataset."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #dataset-sentence-transformers/embedding-training-data #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# doc2query/msmarco-t5-small-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-small for 31k training steps (about 4 epochs on the 500k training pairs from MS MARCO). For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (query, passage) from the MS MARCO Passage-Ranking dataset."
] |
text2text-generation
|
transformers
|
# doc2query/reddit-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, document expansion re-weights terms, giving important words a higher weight even if they appear only seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/reddit-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 533k training steps. For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, body) pairs from Reddit.
|
{"language": "en", "license": "apache-2.0", "datasets": ["datasets/sentence-transformers/reddit-title-body"], "widget": [{"text": "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."}]}
|
doc2query/reddit-t5-base-v1
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.08375",
"2104.08663"
] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# doc2query/reddit-t5-base-v1
This is a doc2query model based on T5 (also known as docT5query).
It can be used for:
- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.
- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
Note: 'model.generate()' is non-deterministic. It produces different queries each time you run it.
## Training
This model fine-tuned google/t5-v1_1-base for 533k training steps. For the training script, see the 'train_script.py' in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on a (title, body) from Reddit.
|
[
"# doc2query/reddit-t5-base-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-base for 533k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (title, body) from Reddit."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# doc2query/reddit-t5-base-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-base for 533k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (title, body) from Reddit."
] |
text2text-generation
|
transformers
|
# doc2query/reddit-t5-small-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, document expansion re-weights terms, giving important words a higher weight even if they appear only seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/reddit-t5-small-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=384, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) for 547k training steps. For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 384 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, body) pairs from Reddit.
|
{"language": "en", "license": "apache-2.0", "datasets": ["datasets/sentence-transformers/reddit-title-body"], "widget": [{"text": "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."}]}
|
doc2query/reddit-t5-small-v1
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.08375",
"2104.08663"
] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# doc2query/reddit-t5-small-v1
This is a doc2query model based on T5 (also known as docT5query).
It can be used for:
- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.
- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
Note: 'model.generate()' is non-deterministic. It produces different queries each time you run it.
## Training
This model fine-tuned google/t5-v1_1-small for 547k training steps. For the training script, see the 'train_script.py' in this repository.
The input-text was truncated to 384 word pieces. Output text was generated up to 64 word pieces.
This model was trained on a (title, body) from Reddit.
|
[
"# doc2query/reddit-t5-small-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-small for 547k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 384 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (title, body) from Reddit."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# doc2query/reddit-t5-small-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-small for 547k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 384 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (title, body) from Reddit."
] |
text2text-generation
|
transformers
|
# doc2query/stackexchange-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, document expansion re-weights terms, giving important words a higher weight even if they appear only seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/stackexchange-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was created by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 449k training steps. For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, best_answer) pairs from StackExchange.
|
{"language": "en", "license": "apache-2.0", "datasets": ["flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl"], "widget": [{"text": "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."}]}
|
doc2query/stackexchange-t5-base-v1
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.08375",
"2104.08663"
] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #dataset-flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# doc2query/stackexchange-t5-base-v1
This is a doc2query model based on T5 (also known as docT5query).
It can be used for:
- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.
- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
Note: 'model.generate()' is non-deterministic. It produces different queries each time you run it.
## Training
This model was fine-tuned from google/t5-v1_1-base for 449k training steps. For the training script, see the 'train_script.py' in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, best_answer) pairs from StackExchange.
|
[
"# doc2query/stackexchange-t5-base-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-base for 449k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (title, best_answer_pairs) from StackExchange."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #dataset-flax-sentence-embeddings/stackexchange_title_best_voted_answer_jsonl #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# doc2query/stackexchange-t5-base-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-base for 449k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (title, best_answer_pairs) from StackExchange."
] |
text2text-generation
|
transformers
|
# doc2query/stackexchange-title-body-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini. (A minimal expansion sketch follows the usage example below.)
- **Domain Specific Training Data Generation**: It can be used to generate training data for learning an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/stackexchange-title-body-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
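For the document-expansion use case described above, the generated queries are simply appended to each passage before indexing. A minimal in-memory sketch, assuming the third-party `rank_bm25` package (the query count and the example search query are placeholders, not part of the original card):

```python
from rank_bm25 import BM25Okapi

# `model`, `tokenizer` and `text` as in the usage snippet above
passages = [text]

expanded_docs = []
for passage in passages:
    input_ids = tokenizer.encode(passage, max_length=320, truncation=True, return_tensors='pt')
    outputs = model.generate(input_ids=input_ids, max_length=64, do_sample=True,
                             top_p=0.95, num_return_sequences=20)
    queries = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    # Append the generated queries to the passage text; the expanded text is what gets indexed
    expanded_docs.append(passage + " " + " ".join(queries))

bm25 = BM25Okapi([doc.lower().split() for doc in expanded_docs])
print(bm25.get_scores("why is python easy to read".split()))
```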
## Training
This model was fine-tuned from [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 550k training steps. For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, question_body) pairs from StackExchange.
|
{"language": "en", "license": "apache-2.0", "datasets": ["flax-sentence-embeddings/stackexchange_title_body_jsonl"], "widget": [{"text": "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."}]}
|
doc2query/stackexchange-title-body-t5-base-v1
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:flax-sentence-embeddings/stackexchange_title_body_jsonl",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.08375",
"2104.08663"
] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #dataset-flax-sentence-embeddings/stackexchange_title_body_jsonl #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# doc2query/stackexchange-title-body-t5-base-v1
This is a doc2query model based on T5 (also known as docT5query).
It can be used for:
- Document expansion: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example of how to use docT5query with Pyserini.
- Domain Specific Training Data Generation: It can be used to generate training data for learning an embedding model. On URL we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
Note: 'model.generate()' is non-deterministic. It produces different queries each time you run it.
## Training
This model was fine-tuned from google/t5-v1_1-base for 550k training steps. For the training script, see the 'train_script.py' in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, question_body) pairs from StackExchange.
|
[
"# doc2query/stackexchange-title-body-t5-base-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-base for 550k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (title, question_body) from StackExchange."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #dataset-flax-sentence-embeddings/stackexchange_title_body_jsonl #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# doc2query/stackexchange-title-body-t5-base-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-base for 550k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (title, question_body) from StackExchange."
] |
text2text-generation
|
transformers
|
# doc2query/stackexchange-title-body-t5-small-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data for learning an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models. (A small JSONL sketch follows the usage example below.)
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/stackexchange-title-body-t5-small-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=384, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
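For the training-data-generation use case described above, each sampled query can be stored together with the passage it was generated from, for example as JSONL for later training of a dense retriever. A rough sketch (the output file name and the choice of 3 queries per passage are arbitrary assumptions, not from the original card):

```python
import json

# `model`, `tokenizer` and `text` as in the usage snippet above
passages = [text]

with open("generated_queries.jsonl", "w") as f_out:
    for passage in passages:
        input_ids = tokenizer.encode(passage, max_length=384, truncation=True, return_tensors='pt')
        outputs = model.generate(input_ids=input_ids, max_length=64, do_sample=True,
                                 top_p=0.95, num_return_sequences=3)
        for output in outputs:
            query = tokenizer.decode(output, skip_special_tokens=True)
            # One (query, positive passage) pair per line
            f_out.write(json.dumps({"query": query, "positive": passage}) + "\n")
```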
## Training
This model was fine-tuned from [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) for 321k training steps. For the training script, see the `train_script.py` in this repository.
The input text was truncated to 384 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, question_body) pairs from StackExchange.
|
{"language": "en", "license": "apache-2.0", "datasets": ["flax-sentence-embeddings/stackexchange_title_body_jsonl"], "widget": [{"text": "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."}]}
|
doc2query/stackexchange-title-body-t5-small-v1
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:flax-sentence-embeddings/stackexchange_title_body_jsonl",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.08375",
"2104.08663"
] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #dataset-flax-sentence-embeddings/stackexchange_title_body_jsonl #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# doc2query/stackexchange-title-body-t5-small-v1
This is a doc2query model based on T5 (also known as docT5query).
It can be used for:
- Document expansion: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example of how to use docT5query with Pyserini.
- Domain Specific Training Data Generation: It can be used to generate training data for learning an embedding model. On URL we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
Note: 'model.generate()' is non-deterministic. It produces different queries each time you run it.
## Training
This model was fine-tuned from google/t5-v1_1-small for 321k training steps. For the training script, see the 'train_script.py' in this repository.
The input text was truncated to 384 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, question_body) pairs from StackExchange.
|
[
"# doc2query/stackexchange-title-body-t5-small-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-small for 321k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 384 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (title, question_body) from StackExchange."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #dataset-flax-sentence-embeddings/stackexchange_title_body_jsonl #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# doc2query/stackexchange-title-body-t5-small-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-small for 321k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 384 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (title, question_body) from StackExchange."
] |
text2text-generation
|
transformers
|
# doc2query/yahoo_answers-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data for learning an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/yahoo_answers-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
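If you prefer repeatable, higher-likelihood queries over diverse sampled ones, `model.generate()` can be switched to beam search. A small variation of the snippet above (this trade-off is a suggestion, not part of the original card):

```python
# Deterministic alternative: beam search instead of nucleus sampling
outputs = model.generate(
    input_ids=input_ids,
    max_length=64,
    num_beams=5,
    num_return_sequences=5,
    early_stopping=True)
for i, beam in enumerate(outputs):
    print(f"{i + 1}: {tokenizer.decode(beam, skip_special_tokens=True)}")
```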
## Training
This model was fine-tuned from [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 111k training steps. For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, answer) pairs from [Yahoo Answers](https://huggingface.co/datasets/sentence-transformers/embedding-training-data).
|
{"language": "en", "license": "apache-2.0", "datasets": ["datasets/sentence-transformers/embedding-training-data"], "widget": [{"text": "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."}]}
|
doc2query/yahoo_answers-t5-base-v1
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.08375",
"2104.08663"
] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# doc2query/yahoo_answers-t5-base-v1
This is a doc2query model based on T5 (also known as docT5query).
It can be used for:
- Document expansion: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example of how to use docT5query with Pyserini.
- Domain Specific Training Data Generation: It can be used to generate training data for learning an embedding model. On URL we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
Note: 'model.generate()' is non-deterministic. It produces different queries each time you run it.
## Training
This model was fine-tuned from google/t5-v1_1-base for 111k training steps. For the training script, see the 'train_script.py' in this repository.
The input text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, answer) pairs from Yahoo Answers.
|
[
"# doc2query/yahoo_answers-t5-base-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-base for 111k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (title, answer) pairs from Yahoo Answers."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #arxiv-1904.08375 #arxiv-2104.08663 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# doc2query/yahoo_answers-t5-base-v1\r\n\r\nThis is a doc2query model based on T5 (also known as docT5query).\r\n\r\nIt can be used for:\r\n- Document expansion: You generate for your paragraphs 20-40 queries and index the paragraphs and the generates queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as the generate queries contain synonyms. Further, it re-weights words giving important words a higher weight even if they appear seldomn in a paragraph. In our BEIR paper we showed that BM25+docT5query is a powerful search engine. In the BEIR repository we have an example how to use docT5query with Pyserini.\r\n- Domain Specific Training Data Generation: It can be used to generate training data to learn an embedding model. On URL we have an example how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.",
"## Usage\r\n\r\n\r\nNote: 'model.generate()' is non-deterministic. It produces different queries each time you run it.",
"## Training\r\nThis model fine-tuned google/t5-v1_1-base for 111k training steps. For the training script, see the 'train_script.py' in this repository.\r\n\r\nThe input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces. \r\n\r\nThis model was trained on a (title, answer) pairs from Yahoo Answers."
] |
multiple-choice
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6045
- Accuracy: 0.7960
## Model description
More information needed
## Intended uses & limitations
More information needed
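The card does not include a usage example; as a rough illustration (the model id is taken from this record, the example sentences are made up), the model can be used for multiple-choice completion like this:

```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_id = "domdomreloaded/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "The chef takes the cake out of the oven and"
endings = [
    "places it on the counter to cool.",
    "throws it into the swimming pool.",
    "files a tax return.",
    "starts the car engine.",
]

# Pair the same context with every candidate ending, then add a batch dimension:
# AutoModelForMultipleChoice expects tensors of shape (batch, num_choices, seq_len).
enc = tokenizer([context] * len(endings), endings, padding=True, return_tensors="pt")
enc = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**enc).logits  # shape (1, num_choices)
print("Predicted ending:", endings[logits.argmax(dim=-1).item()])
```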
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7494 | 1.0 | 4597 | 0.5942 | 0.7716 |
| 0.3499 | 2.0 | 9194 | 0.6045 | 0.7960 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["swag"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-uncased-finetuned-swag", "results": []}]}
|
domdomreloaded/bert-base-uncased-finetuned-swag
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #dataset-swag #license-apache-2.0 #endpoints_compatible #region-us
|
bert-base-uncased-finetuned-swag
================================
This model is a fine-tuned version of bert-base-uncased on the swag dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6045
* Accuracy: 0.7960
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #multiple-choice #generated_from_trainer #dataset-swag #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0492
- Precision: 0.9530
- Recall: 0.9604
- F1: 0.9567
- Accuracy: 0.9889
## Model description
More information needed
## Intended uses & limitations
More information needed
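The card does not include a usage example; a minimal sketch (the model id comes from this record, the example sentence is made up):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dominiqueblok/roberta-base-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Angela Merkel visited the Volkswagen plant in Wolfsburg."))
```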
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2031 | 1.0 | 878 | 0.0560 | 0.9381 | 0.9445 | 0.9413 | 0.9858 |
| 0.0446 | 2.0 | 1756 | 0.0480 | 0.9510 | 0.9578 | 0.9544 | 0.9887 |
| 0.0263 | 3.0 | 2634 | 0.0492 | 0.9530 | 0.9604 | 0.9567 | 0.9889 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.0
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "roberta-base-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9529566113766282, "name": "Precision"}, {"type": "recall", "value": 0.9604268983755194, "name": "Recall"}, {"type": "f1", "value": 0.9566771720212616, "name": "F1"}, {"type": "accuracy", "value": 0.988938664048357, "name": "Accuracy"}]}]}]}
|
dominiqueblok/roberta-base-finetuned-ner
| null |
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #roberta #token-classification #generated_from_trainer #dataset-conll2003 #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
|
roberta-base-finetuned-ner
==========================
This model is a fine-tuned version of roberta-base on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0492
* Precision: 0.9530
* Recall: 0.9604
* F1: 0.9567
* Accuracy: 0.9889
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.10.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #roberta #token-classification #generated_from_trainer #dataset-conll2003 #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.0\n* Tokenizers 0.10.3"
] |
null | null |
# this is a shit model
|
{}
|
douglas0204/shitmodel
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# this is a shit model
|
[
"# this is a shit model"
] |
[
"TAGS\n#region-us \n",
"# this is a shit model"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-clm-employment
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8445
## Model description
More information needed
## Intended uses & limitations
More information needed
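No usage example is given in the card; since the base model is distilroberta-base, the mask token is `<mask>`. A minimal sketch (the example sentence is made up):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="dpasch01/finetune-clm-employment")
# The pipeline returns the top predictions with their scores
for pred in fill("The candidate has five years of <mask> experience."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```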
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3283 | 1.0 | 3989 | 1.9578 |
| 2.0824 | 2.0 | 7978 | 1.9013 |
| 1.9936 | 3.0 | 11967 | 1.8625 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "finetune-clm-employment", "results": []}]}
|
dpasch01/finetune-clm-employment
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetune-clm-employment
=======================
This model is a fine-tuned version of distilroberta-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.8445
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.14.1
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.14.1\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-data-skills
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1058
## Model description
More information needed
## Intended uses & limitations
More information needed
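No usage example is given in the card; because the base model is bert-base-uncased, the mask token is `[MASK]`. A rough sketch that inspects the top predictions directly (the example sentence is made up):

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "dpasch01/finetune-data-skills"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = f"Strong knowledge of {tokenizer.mask_token} is required for this role."
inputs = tokenizer(text, return_tensors="pt")
# Position of the [MASK] token in the input sequence
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]

with torch.no_grad():
    logits = model(**inputs).logits
top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```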
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7239 | 1.0 | 3926 | 2.2459 |
| 2.3113 | 2.0 | 7852 | 2.1255 |
| 2.197 | 3.0 | 11778 | 2.0966 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "finetune-data-skills", "results": []}]}
|
dpasch01/finetune-data-skills
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetune-data-skills
====================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.1058
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
image-classification
|
transformers
|
# Infrastructures
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
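As a quick illustration (not part of the autogenerated card; the model id comes from this record and the file name is a placeholder), the classifier can be called through a pipeline:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="drab/Infrastructures")
# Pass a local path or a URL to an image
print(classifier("example_cooling_tower.jpg"))
```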
## Example Images
#### Cooling tower

#### Transmission grid

#### Wind turbines

|
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
|
drab/Infrastructures
| null |
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# Infrastructures
Autogenerated by HuggingPics️
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
## Example Images
#### Cooling tower
!Cooling tower
#### Transmission grid
!Transmission grid
#### Wind turbines
!Wind turbines
|
[
"# Infrastructures\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### Cooling tower\n\n!Cooling tower",
"#### Transmission grid\n\n!Transmission grid",
"#### Wind turbines\n\n!Wind turbines"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# Infrastructures\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### Cooling tower\n\n!Cooling tower",
"#### Transmission grid\n\n!Transmission grid",
"#### Wind turbines\n\n!Wind turbines"
] |
null |
transformers
|
This is a Git LFS project.
Model performance before the data was transformed:
knowledge points - max length is 1566, min length is 3, ave length is 87.96, 95% quantile is 490.
question and answer - max length is 303, min length is 8, ave length is 47.09, 95% quantile is 119.
Accuracy at 303: 2562/5232 = 48.97%
|
{}
|
dragonStyle/bert-303-step35000
| null |
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #endpoints_compatible #region-us
|
This is a Git LFS project.
Model performance before the data was transformed:
knowledge points - max length is 1566, min length is 3, ave length is 87.96, 95% quantile is 490.
question and answer - max length is 303, min length is 8, ave length is 47.09, 95% quantile is 119.
Accuracy at 303: 2562/5232 = 48.97%
|
[] |
[
"TAGS\n#transformers #pytorch #bert #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Base-Pretrain-Vietnamese
The base model is pre-trained on 16kHz-sampled speech audio from 100 hours of unlabelled Vietnamese data in the [VLSP dataset](https://drive.google.com/file/d/1vUSxdORDxk-ePUt-bUVDahpoXiqKchMx/view?usp=sharing). When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Vietnamese Automatic Speech Recognition.
[Facebook's Wav2Vec2 blog](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
[Paper](https://arxiv.org/abs/2006.11477)
# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the English pre-trained model.
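As a rough illustration of loading the checkpoint for feature extraction (this snippet is an assumption, not part of the original card, and it presumes the repository ships a feature-extractor config; the weights are meant to be fine-tuned before real use):

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

checkpoint = "dragonSwing/viwav2vec2-base-100h"
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(checkpoint)
model = Wav2Vec2Model.from_pretrained(checkpoint)

# One second of silence as a stand-in – replace with real Vietnamese speech sampled at 16kHz
speech = torch.zeros(16000).numpy()
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, time, hidden_size)
print(hidden_states.shape)
```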
|
{"language": "vi", "license": "apache-2.0", "tags": ["speech", "automatic-speech-recognition"], "datasets": ["vlsp"]}
|
dragonSwing/viwav2vec2-base-100h
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"speech",
"automatic-speech-recognition",
"vi",
"dataset:vlsp",
"arxiv:2006.11477",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.11477"
] |
[
"vi"
] |
TAGS
#transformers #pytorch #wav2vec2 #pretraining #speech #automatic-speech-recognition #vi #dataset-vlsp #arxiv-2006.11477 #license-apache-2.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Base-Pretrain-Vietnamese
The base model is pre-trained on 16kHz-sampled speech audio from 100 hours of unlabelled Vietnamese data in the VLSP dataset. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Vietnamese Automatic Speech Recognition.
Facebook's Wav2Vec2 blog
Paper
# Usage
See this notebook for more information on how to fine-tune the English pre-trained model.
|
[
"# Wav2Vec2-Base-Pretrain-Vietnamese\nThe base model is pre-trained on 16kHz sampled speech audio from 100h Vietnamese unlabelled data in VLSP dataset. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Vietnamese Automatic Speech Recognition. \nFacebook's Wav2Vec2 blog\nPaper",
"# Usage\nSee this notebook for more information on how to fine-tune the English pre-trained model."
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #pretraining #speech #automatic-speech-recognition #vi #dataset-vlsp #arxiv-2006.11477 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Base-Pretrain-Vietnamese\nThe base model is pre-trained on 16kHz sampled speech audio from 100h Vietnamese unlabelled data in VLSP dataset. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Vietnamese Automatic Speech Recognition. \nFacebook's Wav2Vec2 blog\nPaper",
"# Usage\nSee this notebook for more information on how to fine-tune the English pre-trained model."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Vietnamese
Fine-tuned [dragonSwing/wav2vec2-base-pretrain-vietnamese](https://huggingface.co/dragonSwing/wav2vec2-base-pretrain-vietnamese) on the Vietnamese speech recognition task, using 100 hours of labelled data from the [VLSP dataset](https://drive.google.com/file/d/1vUSxdORDxk-ePUt-bUVDahpoXiqKchMx/view?usp=sharing).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "vi", split="test")
processor = Wav2Vec2Processor.from_pretrained("dragonSwing/wav2vec2-base-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("dragonSwing/wav2vec2-base-vietnamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "vi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("dragonSwing/wav2vec2-base-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("dragonSwing/wav2vec2-base-vietnamese")
model.to("cuda")
chars_to_ignore_regex = r'[,?.!\-;:"“%\'�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference batch by batch and collect the predicted transcriptions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=1)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 31.353591%
|
{"language": "vi", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech"], "datasets": ["vlsp", "common_voice"], "metrics": ["wer"], "model-index": [{"name": "Wav2vec2 Base Vietnamese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice vi", "type": "common_voice", "args": "vi"}, "metrics": [{"type": "wer", "value": 31.353591, "name": "Test WER"}]}]}]}
|
dragonSwing/wav2vec2-base-vietnamese
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"vi",
"dataset:vlsp",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"vi"
] |
TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #vi #dataset-vlsp #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Vietnamese
Fine-tuned dragonSwing/wav2vec2-base-pretrain-vietnamese on the Vietnamese speech recognition task, using 100 hours of labelled data from the VLSP dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice.
Test Result: 31.353591%
|
[
"# Wav2Vec2-Large-XLSR-53-Vietnamese\nFine-tuned dragonSwing/wav2vec2-base-pretrain-vietnamese on Vietnamese Speech Recognition task using 100h labelled data from VSLP dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Vietnamese test data of Common Voice.\n\nTest Result: 31.353591%"
] |
[
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #vi #dataset-vlsp #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Vietnamese\nFine-tuned dragonSwing/wav2vec2-base-pretrain-vietnamese on Vietnamese Speech Recognition task using 100h labelled data from VSLP dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Vietnamese test data of Common Voice.\n\nTest Result: 31.353591%"
] |
automatic-speech-recognition
|
speechbrain
|
# Wav2Vec2-Base-Vietnamese-270h
Fine-tuned Wav2Vec2 model on the Vietnamese Speech Recognition task using about 270h of labelled data combined from multiple datasets including [Common Voice](https://huggingface.co/datasets/common_voice), [VIVOS](https://huggingface.co/datasets/vivos), [VLSP2020](https://vlsp.org.vn/vlsp2020/eval/asr). The model was fine-tuned using the SpeechBrain toolkit with a custom tokenizer. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io/).
When using this model, make sure that your speech input is sampled at 16kHz.
Please refer to [huggingface blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) or [speechbrain](https://github.com/speechbrain/speechbrain/tree/develop/recipes/CommonVoice/ASR/CTC) on how to fine-tune Wav2Vec2 model on a specific language.
### Benchmark WER result:
| | [VIVOS](https://huggingface.co/datasets/vivos) | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) |
|---|---|---|---|
|without LM| 8.23 | 12.15 | 12.15 |
|with 4-grams LM| 3.70 | 5.57 | 5.76 |
The language model was trained using [OSCAR](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109) dataset on about 32GB of crawled text.
### Install SpeechBrain
To use this model, you should install speechbrain > 0.5.10
### Usage
The model can be used directly (without a language model) as follows:
```python
from speechbrain.pretrained import EncoderASR
model = EncoderASR.from_hparams(source="dragonSwing/wav2vec2-base-vn-270h", savedir="pretrained_models/asr-wav2vec2-vi")
model.transcribe_file('dragonSwing/wav2vec2-base-vn-270h/example.mp3')
# Output: được hồ chí minh coi là một động lực lớn của sự phát triển đất nước
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
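For example, a minimal sketch that mirrors the loading call from the Usage section above (assuming a CUDA device is available):

```python
from speechbrain.pretrained import EncoderASR

# Same call as in the Usage section, but running the model on GPU
model = EncoderASR.from_hparams(
    source="dragonSwing/wav2vec2-base-vn-270h",
    savedir="pretrained_models/asr-wav2vec2-vi",
    run_opts={"device": "cuda"},
)
model.transcribe_file('dragonSwing/wav2vec2-base-vn-270h/example.mp3')
```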
### Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice 8.0.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric, Audio
from transformers import Wav2Vec2FeatureExtractor
from speechbrain.pretrained import EncoderASR
import re
test_dataset = load_dataset("mozilla-foundation/common_voice_8_0", "vi", split="test", use_auth_token=True)
test_dataset = test_dataset.cast_column("audio", Audio(sampling_rate=16_000))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
wer = load_metric("wer")
extractor = Wav2Vec2FeatureExtractor.from_pretrained("dragonSwing/wav2vec2-base-vn-270h")
model = EncoderASR.from_hparams(source="dragonSwing/wav2vec2-base-vn-270h", savedir="pretrained_models/asr-wav2vec2-vi", run_opts={'device': device})
chars_to_ignore_regex = r'[,?.!\-;:"“%\'�]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    audio = batch["audio"]
    batch["target_text"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    batch['speech'] = audio['array']
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
    # For padding inputs only
    inputs = extractor(
        batch['speech'],
        sampling_rate=16000,
        return_tensors="pt",
        padding=True,
        do_normalize=False
    ).input_values
    input_lens = torch.ones(inputs.shape[0])
    pred_str, pred_tokens = model.transcribe_batch(inputs, input_lens)
    batch["pred_strings"] = pred_str
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=1)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["target_text"])))
```
**Test Result**: 12.155553%
#### Citation
```
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
  howpublished = {\\url{https://github.com/speechbrain/speechbrain}},
}
```
#### About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: [https://speechbrain.github.io](https://speechbrain.github.io/)
GitHub: [https://github.com/speechbrain/speechbrain](https://github.com/speechbrain/speechbrain)
|
{"language": "vi", "license": "cc-by-nc-4.0", "tags": ["audio", "speech", "speechbrain", "Transformer"], "datasets": ["vivos", "common_voice"], "metrics": ["wer"], "pipeline_tag": "automatic-speech-recognition", "widget": [{"example_title": "Example 1", "src": "https://huggingface.co/dragonSwing/wav2vec2-base-vn-270h/raw/main/example.mp3"}, {"example_title": "Example 2", "src": "https://huggingface.co/dragonSwing/wav2vec2-base-vn-270h/raw/main/example2.mp3"}], "model-index": [{"name": "Wav2vec2 Base Vietnamese 270h", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice vi", "type": "common_voice", "args": "vi"}, "metrics": [{"type": "wer", "value": 9.66, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 7.0", "type": "mozilla-foundation/common_voice_7_0", "args": "vi"}, "metrics": [{"type": "wer", "value": 5.57, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice 8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "vi"}, "metrics": [{"type": "wer", "value": 5.76, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "VIVOS", "type": "vivos", "args": "vi"}, "metrics": [{"type": "wer", "value": 3.7, "name": "Test WER"}]}]}]}
|
dragonSwing/wav2vec2-base-vn-270h
| null |
[
"speechbrain",
"wav2vec2",
"audio",
"speech",
"Transformer",
"automatic-speech-recognition",
"vi",
"dataset:vivos",
"dataset:common_voice",
"license:cc-by-nc-4.0",
"model-index",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"vi"
] |
TAGS
#speechbrain #wav2vec2 #audio #speech #Transformer #automatic-speech-recognition #vi #dataset-vivos #dataset-common_voice #license-cc-by-nc-4.0 #model-index #has_space #region-us
|
Wav2Vec2-Base-Vietnamese-270h
=============================
Fine-tuned Wav2Vec2 model on the Vietnamese Speech Recognition task using about 270h of labelled data combined from multiple datasets including Common Voice, VIVOS, VLSP2020. The model was fine-tuned using the SpeechBrain toolkit with a custom tokenizer. For a better experience, we encourage you to learn more about SpeechBrain.
When using this model, make sure that your speech input is sampled at 16kHz.
Please refer to huggingface blog or speechbrain on how to fine-tune Wav2Vec2 model on a specific language.
### Benchmark WER result:
The language model was trained using OSCAR dataset on about 32GB of crawled text.
### Install SpeechBrain
To use this model, you should install speechbrain > 0.5.10
### Usage
The model can be used directly (without a language model) as follows:
### Inference on GPU
To perform inference on the GPU, add 'run\_opts={"device":"cuda"}' when calling the 'from\_hparams' method.
### Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice 8.0.
Test Result: 12.155553%
#### About SpeechBrain
SpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains.
Website: URL
GitHub: URL
|
[
"### Benchmark WER result:\n\n\n\nThe language model was trained using OSCAR dataset on about 32GB of crawled text.",
"### Install SpeechBrain\n\n\nTo use this model, you should install speechbrain > 0.5.10",
"### Usage\n\n\nThe model can be used directly (without a language model) as follows:",
"### Inference on GPU\n\n\nTo perform inference on the GPU, add 'run\\_opts={\"device\":\"cuda\"}' when calling the 'from\\_hparams' method.",
"### Evaluation\n\n\nThe model can be evaluated as follows on the Vietnamese test data of Common Voice 8.0.\n\n\nTest Result: 12.155553%",
"#### About SpeechBrain\n\n\nSpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains. \n\nWebsite: URL \n\nGitHub: URL"
] |
[
"TAGS\n#speechbrain #wav2vec2 #audio #speech #Transformer #automatic-speech-recognition #vi #dataset-vivos #dataset-common_voice #license-cc-by-nc-4.0 #model-index #has_space #region-us \n",
"### Benchmark WER result:\n\n\n\nThe language model was trained using OSCAR dataset on about 32GB of crawled text.",
"### Install SpeechBrain\n\n\nTo use this model, you should install speechbrain > 0.5.10",
"### Usage\n\n\nThe model can be used directly (without a language model) as follows:",
"### Inference on GPU\n\n\nTo perform inference on the GPU, add 'run\\_opts={\"device\":\"cuda\"}' when calling the 'from\\_hparams' method.",
"### Evaluation\n\n\nThe model can be evaluated as follows on the Vietnamese test data of Common Voice 8.0.\n\n\nTest Result: 12.155553%",
"#### About SpeechBrain\n\n\nSpeechBrain is an open-source and all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly. Competitive or state-of-the-art performance is obtained in various domains. \n\nWebsite: URL \n\nGitHub: URL"
] |
fill-mask
|
transformers
|
# ALBert
The ALR-BERT **cased** model for Romanian, trained on a 15GB corpus!
ALR-BERT is a multi-layer bidirectional Transformer encoder that shares ALBERT's factorized embedding parameterization and cross-layer sharing. ALR-BERT-base inherits ALBERT-base and features 12 parameter-sharing layers, a 128-dimension embedding size, 768 hidden units, 12 heads, and GELU non-linearities. Masked language modeling (MLM) and sentence order prediction (SOP) losses are the two objectives that ALBERT is pre-trained on. For ALR-BERT, we preserve both these objectives.
The model was trained using 40 batches per GPU (for 128 sequence length) and then 20 batches per GPU (for 512 sequence length). The Layer-wise Adaptive Moments optimizer for Batch (LAMB) was used, with a warm-up over the first 1% of steps up to a learning rate of 1e-4, followed by a decay. Eight NVIDIA Tesla V100 SXM3 GPUs with 32GB memory were used, and the pre-training process took around 2 weeks per model.
The training methodology closely follows previous work done for Romanian BERT (https://huggingface.co/dumitrescustefan/bert-base-romanian-cased-v1).
### How to use
```python
from transformers import AutoTokenizer, AutoModel
import torch
# load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("dragosnicolae555/ALR_BERT")
model = AutoModel.from_pretrained("dragosnicolae555/ALR_BERT")
#Here add your magic
```
Remember to always sanitize your text! Replace the cedilla-letters ``ş`` and ``ţ`` with their comma-letter counterparts ``ș`` and ``ț`` using:
```
text = text.replace("ţ", "ț").replace("ş", "ș").replace("Ţ", "Ț").replace("Ş", "Ș")
```
because the model was **NOT** trained on the cedilla letters ``ş`` and ``ţ``. If you don't, you will have decreased performance due to ``<UNK>``s and an increased number of tokens per word.
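Putting the pieces together, a minimal sketch of extracting contextual embeddings (the example sentence and the use of `last_hidden_state` are illustrative assumptions, not part of the original recipe):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("dragosnicolae555/ALR_BERT")
model = AutoModel.from_pretrained("dragosnicolae555/ALR_BERT")

# Sanitize cedilla letters before tokenizing (hypothetical Romanian example sentence)
text = "Aceasta este o propoziţie de test."
text = text.replace("ţ", "ț").replace("ş", "ș").replace("Ţ", "Ț").replace("Ş", "Ș")

inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # expected roughly (1, num_tokens, 768)
```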
### Evaluation
Here, we evaluate ALR-BERT on the Simple Universal Dependencies task, training one model per task and evaluating labeling performance on the UPOS (Universal Part-of-Speech) and XPOS (eXtended Part-of-Speech) tags. We compare our proposed ALR-BERT with Romanian BERT and multilingual BERT, using the cased versions. To counteract the random seed effect, we repeat each experiment five times and report the mean score.
| Model | UPOS | XPOS | MLAS | AllTags |
|--------------------------------|:-----:|:------:|:-----:|:-----:|
| M-BERT (cased) | 93.87 | 89.89 | 90.01 | 87.04|
| Romanian BERT (cased) | 95.56 | 95.35 | 92.78 | 93.22 |
| ALR-BERT (cased) | **87.38** | **84.05** | **79.82** | **78.82**|
### Corpus
The model is trained on the following corpora (stats in the table below are after cleaning):
| Corpus | Lines(M) | Words(M) | Chars(B) | Size(GB) |
|----------- |:--------: |:--------: |:--------: |:--------: |
| OPUS | 55.05 | 635.04 | 4.045 | 3.8 |
| OSCAR | 33.56 | 1725.82 | 11.411 | 11 |
| Wikipedia | 1.54 | 60.47 | 0.411 | 0.4 |
| **Total** | **90.15** | **2421.33** | **15.867** | **15.2** |
|
{"language": "ro"}
|
dragosnicolae555/ALR_BERT
| null |
[
"transformers",
"pytorch",
"albert",
"fill-mask",
"ro",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ro"
] |
TAGS
#transformers #pytorch #albert #fill-mask #ro #autotrain_compatible #endpoints_compatible #region-us
|
ALBert
======
The ALR-Bert , cased model for Romanian, trained on a 15GB corpus!
ALR-BERT is a multi-layer bidirectional Transformer encoder that shares ALBERT's factorized embedding parameterization and cross-layer sharing. ALR-BERT-base inherits ALBERT-base and features 12 parameter-sharing layers, a 128-dimension embedding size, 768 hidden units, 12 heads, and GELU non-linearities. Masked language modeling (MLM) and sentence order prediction (SOP) losses are the two objectives that ALBERT is pre-trained on. For ALR-BERT, we preserve both these objectives.
The model was trained using 40 batches per GPU (for 128 sequence length) and then 20 batches per GPU (for 512 sequence length). The Layer-wise Adaptive Moments optimizer for Batch (LAMB) was used, with a warm-up over the first 1% of steps up to a learning rate of 1e-4, followed by a decay. Eight NVIDIA Tesla V100 SXM3 GPUs with 32GB memory were used, and the pre-training process took around 2 weeks per model.
The training methodology closely follows previous work done for Romanian BERT (URL).
### How to use
Remember to always sanitize your text! Replace the cedilla-letters ''ş'' and ''ţ'' with their comma-letter counterparts using:
because the model was NOT trained on the cedilla letters ''ş'' and ''ţ''. If you don't, you will have decreased performance due to <UNK>s and an increased number of tokens per word.
### Evaluation
Here, we evaluate ALR-BERT on the Simple Universal Dependencies task, training one model per task and evaluating labeling performance on the UPOS (Universal Part-of-Speech) and XPOS (eXtended Part-of-Speech) tags. We compare our proposed ALR-BERT with Romanian BERT and multilingual BERT, using the cased versions. To counteract the random seed effect, we repeat each experiment five times and report the mean score.
### Corpus
The model is trained on the following corpora (stats in the table below are after cleaning):
|
[
"### How to use\n\n\nRemember to always sanitize your text! Replace ''s'' and ''t'' cedilla-letters to comma-letters with :\n\n\nbecause the model was NOT trained on cedilla ''s'' and ''t''s. If you don't, you will have decreased performance due to s and increased number of tokens per word.",
"### Evaluation\n\n\nHere, we evaluate ALR-BERT on Simple Universal Dependencies task. One model for each task, evaluating labeling performance on the UPOS (Universal Part-of-Speech) and the XPOS (Extended Part-of-Speech) (eXtended Part-of-Speech). We compare our proposed ALR-BERT with Romanian BERT and multiligual BERT, using the cased version. To counteract the random seed effect, we repeat each experiment five times and simply provide the mean score.",
"### Corpus\n\n\nThe model is trained on the following corpora (stats in the table below are after cleaning):"
] |
[
"TAGS\n#transformers #pytorch #albert #fill-mask #ro #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nRemember to always sanitize your text! Replace ''s'' and ''t'' cedilla-letters to comma-letters with :\n\n\nbecause the model was NOT trained on cedilla ''s'' and ''t''s. If you don't, you will have decreased performance due to s and increased number of tokens per word.",
"### Evaluation\n\n\nHere, we evaluate ALR-BERT on Simple Universal Dependencies task. One model for each task, evaluating labeling performance on the UPOS (Universal Part-of-Speech) and the XPOS (Extended Part-of-Speech) (eXtended Part-of-Speech). We compare our proposed ALR-BERT with Romanian BERT and multiligual BERT, using the cased version. To counteract the random seed effect, we repeat each experiment five times and simply provide the mean score.",
"### Corpus\n\n\nThe model is trained on the following corpora (stats in the table below are after cleaning):"
] |
null | null |
Pretrained model on Dagaare language using a masked language modeling (MLM) objective first introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta)
|
{"datasets": ["Bible"]}
|
drcod/DagaareBERTa
| null |
[
"pytorch",
"tf",
"dataset:Bible",
"arxiv:1907.11692",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[] |
TAGS
#pytorch #tf #dataset-Bible #arxiv-1907.11692 #region-us
|
Pretrained model on Dagaare language using a masked language modeling (MLM) objective first introduced in
this paper and first released in
this repository
|
[] |
[
"TAGS\n#pytorch #tf #dataset-Bible #arxiv-1907.11692 #region-us \n"
] |
text-generation
|
transformers
|
# My Awesome Model
|
{"tags": ["conversational"]}
|
dreamline2/DialoGPT-small-joshua-demo
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model
|
[
"# My Awesome Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] |
text-classification
|
transformers
|
This is just a test
|
{}
|
dreji18/mymodel
| null |
[
"transformers",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
This is just a test
|
[] |
[
"TAGS\n#transformers #tf #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 29797722
- CO2 Emissions (in grams): 2.7516207978192737
## Validation Metrics
- Loss: 0.6113826036453247
- Accuracy: 0.7559139784946236
- Macro F1: 0.4594734612976928
- Micro F1: 0.7559139784946236
- Weighted F1: 0.7195080232106192
- Macro Precision: 0.7175166413412577
- Micro Precision: 0.7559139784946236
- Weighted Precision: 0.7383048259333735
- Macro Recall: 0.4482203645846237
- Micro Recall: 0.7559139784946236
- Weighted Recall: 0.7559139784946236
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/ds198799/autonlp-predict_ROI_1-29797722
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ds198799/autonlp-predict_ROI_1-29797722", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ds198799/autonlp-predict_ROI_1-29797722", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["ds198799/autonlp-data-predict_ROI_1"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 2.7516207978192737}
|
ds198799/autonlp-predict_ROI_1-29797722
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:ds198799/autonlp-data-predict_ROI_1",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-ds198799/autonlp-data-predict_ROI_1 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 29797722
- CO2 Emissions (in grams): 2.7516207978192737
## Validation Metrics
- Loss: 0.6113826036453247
- Accuracy: 0.7559139784946236
- Macro F1: 0.4594734612976928
- Micro F1: 0.7559139784946236
- Weighted F1: 0.7195080232106192
- Macro Precision: 0.7175166413412577
- Micro Precision: 0.7559139784946236
- Weighted Precision: 0.7383048259333735
- Macro Recall: 0.4482203645846237
- Micro Recall: 0.7559139784946236
- Weighted Recall: 0.7559139784946236
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 29797722\n- CO2 Emissions (in grams): 2.7516207978192737",
"## Validation Metrics\n\n- Loss: 0.6113826036453247\n- Accuracy: 0.7559139784946236\n- Macro F1: 0.4594734612976928\n- Micro F1: 0.7559139784946236\n- Weighted F1: 0.7195080232106192\n- Macro Precision: 0.7175166413412577\n- Micro Precision: 0.7559139784946236\n- Weighted Precision: 0.7383048259333735\n- Macro Recall: 0.4482203645846237\n- Micro Recall: 0.7559139784946236\n- Weighted Recall: 0.7559139784946236",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-ds198799/autonlp-data-predict_ROI_1 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 29797722\n- CO2 Emissions (in grams): 2.7516207978192737",
"## Validation Metrics\n\n- Loss: 0.6113826036453247\n- Accuracy: 0.7559139784946236\n- Macro F1: 0.4594734612976928\n- Micro F1: 0.7559139784946236\n- Weighted F1: 0.7195080232106192\n- Macro Precision: 0.7175166413412577\n- Micro Precision: 0.7559139784946236\n- Weighted Precision: 0.7383048259333735\n- Macro Recall: 0.4482203645846237\n- Micro Recall: 0.7559139784946236\n- Weighted Recall: 0.7559139784946236",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 29797730
- CO2 Emissions (in grams): 2.2439127664461718
## Validation Metrics
- Loss: 0.6314184069633484
- Accuracy: 0.7596774193548387
- Macro F1: 0.4740565300039588
- Micro F1: 0.7596774193548386
- Weighted F1: 0.7371623804622154
- Macro Precision: 0.6747804619412134
- Micro Precision: 0.7596774193548387
- Weighted Precision: 0.7496542175358931
- Macro Recall: 0.47743727441146655
- Micro Recall: 0.7596774193548387
- Weighted Recall: 0.7596774193548387
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/ds198799/autonlp-predict_ROI_1-29797730
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ds198799/autonlp-predict_ROI_1-29797730", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ds198799/autonlp-predict_ROI_1-29797730", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["ds198799/autonlp-data-predict_ROI_1"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 2.2439127664461718}
|
ds198799/autonlp-predict_ROI_1-29797730
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:ds198799/autonlp-data-predict_ROI_1",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #text-classification #autonlp #en #dataset-ds198799/autonlp-data-predict_ROI_1 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 29797730
- CO2 Emissions (in grams): 2.2439127664461718
## Validation Metrics
- Loss: 0.6314184069633484
- Accuracy: 0.7596774193548387
- Macro F1: 0.4740565300039588
- Micro F1: 0.7596774193548386
- Weighted F1: 0.7371623804622154
- Macro Precision: 0.6747804619412134
- Micro Precision: 0.7596774193548387
- Weighted Precision: 0.7496542175358931
- Macro Recall: 0.47743727441146655
- Micro Recall: 0.7596774193548387
- Weighted Recall: 0.7596774193548387
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 29797730\n- CO2 Emissions (in grams): 2.2439127664461718",
"## Validation Metrics\n\n- Loss: 0.6314184069633484\n- Accuracy: 0.7596774193548387\n- Macro F1: 0.4740565300039588\n- Micro F1: 0.7596774193548386\n- Weighted F1: 0.7371623804622154\n- Macro Precision: 0.6747804619412134\n- Micro Precision: 0.7596774193548387\n- Weighted Precision: 0.7496542175358931\n- Macro Recall: 0.47743727441146655\n- Micro Recall: 0.7596774193548387\n- Weighted Recall: 0.7596774193548387",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #autonlp #en #dataset-ds198799/autonlp-data-predict_ROI_1 #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 29797730\n- CO2 Emissions (in grams): 2.2439127664461718",
"## Validation Metrics\n\n- Loss: 0.6314184069633484\n- Accuracy: 0.7596774193548387\n- Macro F1: 0.4740565300039588\n- Micro F1: 0.7596774193548386\n- Weighted F1: 0.7371623804622154\n- Macro Precision: 0.6747804619412134\n- Micro Precision: 0.7596774193548387\n- Weighted Precision: 0.7496542175358931\n- Macro Recall: 0.47743727441146655\n- Micro Recall: 0.7596774193548387\n- Weighted Recall: 0.7596774193548387",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2002 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1458
- Precision: 0.7394
- Recall: 0.7884
- F1: 0.7631
- Accuracy: 0.9656
## Model description
More information needed
## Intended uses & limitations
More information needed
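In the absence of further documentation, a minimal usage sketch (assuming the checkpoint is loaded by the Hub id of this repository; the example sentence is hypothetical) could look like:

```python
from transformers import pipeline

# The model was fine-tuned on conll2002 (Spanish), so a Spanish sentence is used here
ner = pipeline("token-classification", model="dshvadskiy/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Juan vive en Madrid y trabaja para la ONU."))
```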
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1047 | 1.0 | 1041 | 0.1516 | 0.7173 | 0.7505 | 0.7335 | 0.9602 |
| 0.068 | 2.0 | 2082 | 0.1280 | 0.7470 | 0.7888 | 0.7673 | 0.9664 |
| 0.0406 | 3.0 | 3123 | 0.1458 | 0.7394 | 0.7884 | 0.7631 | 0.9656 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2002"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2002", "type": "conll2002", "args": "es"}, "metrics": [{"type": "precision", "value": 0.7394396551724138, "name": "Precision"}, {"type": "recall", "value": 0.7883731617647058, "name": "Recall"}, {"type": "f1", "value": 0.7631227758007118, "name": "F1"}, {"type": "accuracy", "value": 0.9655744705631151, "name": "Accuracy"}]}]}]}
|
dshvadskiy/bert-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2002",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2002 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
bert-finetuned-ner
==================
This model is a fine-tuned version of bert-base-cased on the conll2002 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1458
* Precision: 0.7394
* Recall: 0.7884
* F1: 0.7631
* Accuracy: 0.9656
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2002 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
This model can be used to more accurately detokenize output of the Moses tokenizer (it does a better job with certain lossy quotes and similar edge cases).
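The batched example below assumes that `tokenizer`, `model`, and `device` have already been initialized. A minimal setup sketch (assuming the checkpoint loads with the standard token-classification auto classes) might look like:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("dsilin/detok-deberta-xl")
model = AutoModelForTokenClassification.from_pretrained("dsilin/detok-deberta-xl").to(device)
model.eval()
```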
batched usage:
```python
sentences = [
"They 're a young team . they have great players and amazing freshmen coming in , so think they 'll grow into themselves next year ,",
"\" We 'll talk go by now ; \" says Shucksmith ;",
"He 'll enjoy it more now that this he be dead , if put 'll pardon the expression .",
"I think you 'll be amazed at this way it finds ,",
"Michigan voters ^ are so frightened of fallen in permanent economic collapse that they 'll grab onto anything .",
"You 'll finding outs episode 4 .",
"\" Warren Gatland is a professional person and it wasn 't a case of 's I 'll phone my mate Rob up to if he wants a coaching job ' , he would done a fair amount of homework about , \" Howley air said .",
"You can look at the things I 'm saying about my record and about the events of campaign and history and you 'll find if now and and then I miss a words or I get something slightly off , I 'll correct it , acknowledge where it are wrong .",
"Wonder if 'll alive to see .",
"We 'll have to combine and a numbered of people ."
]
def sentences_to_input_tokens(sentences):
    all_tokens = []
    max_length = 0
    sents_tokens = []
    iids = tokenizer(sentences)
    for sent_tokens in iids['input_ids']:
        sents_tokens.append(sent_tokens)
        if len(sent_tokens) > max_length:
            max_length = len(sent_tokens)
        attention_mask = [1] * len(sent_tokens)
        pos_ids = list(range(len(sent_tokens)))
        encoding = {
            "iids": sent_tokens,
            "am": attention_mask,
            "pos": pos_ids
        }
        all_tokens.append(encoding)
    input_ids = []
    attention_masks = []
    position_ids = []
    for i in range(len(all_tokens)):
        encoding = all_tokens[i]
        pad_len = max_length - len(encoding['iids'])
        attention_masks.append(encoding['am'] + [0] * pad_len)
        position_ids.append(encoding['pos'] + [0] * pad_len)
        input_ids.append(encoding['iids'] + [tokenizer.pad_token_id] * pad_len)
    encoding = {
        "input_ids": torch.tensor(input_ids).to(device),
        "attention_mask": torch.tensor(attention_masks).to(device),
        "position_ids": torch.tensor(position_ids).to(device)
    }
    return encoding, sents_tokens

def run_token_predictor_sentences(sentences):
    encoding, at = sentences_to_input_tokens(sentences)
    predictions = model(**encoding)[0].cpu().tolist()
    outstrs = []
    for i in range(len(predictions)):
        outstr = ""
        for p in zip(tokenizer.convert_ids_to_tokens(at[i][1:-1]), predictions[i][1:-1]):
            if not "▁" in p[0]:
                outstr += p[0]
            else:
                if p[1][0] > p[1][1]:
                    outstr += p[0].replace("▁", " ")
                else:
                    outstr += p[0].replace("▁", "")
        outstrs.append(outstr.strip())
    return outstrs

outs = run_token_predictor_sentences(sentences)
for p in zip(outs, sentences):
    print(p[1])
    print(p[0])
    print('\n------\n')
```
|
{"language": "en", "widget": [{"text": "They 're a young team . they have great players and amazing freshmen coming in , so think they 'll grow into themselves next year ,"}, {"text": "\" We 'll talk go by now ; \" says Shucksmith ;"}, {"text": "\" Warren Gatland is a professional person and it wasn 't a case of 's I 'll phone my mate Rob up to if he wants a coaching job ' , he would done a fair amount of homework about , \" Howley air said ."}]}
|
dsilin/detok-deberta-xl
| null |
[
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #deberta-v2 #token-classification #en #autotrain_compatible #endpoints_compatible #region-us
|
This model can be used to more accurately detokenize output of the Moses tokenizer (it does a better job with certain lossy quotes and similar edge cases).
batched usage:
|
[] |
[
"TAGS\n#transformers #pytorch #deberta-v2 #token-classification #en #autotrain_compatible #endpoints_compatible #region-us \n"
] |
token-classification
|
transformers
|
# bert-base-NER
## Model description
**bert-base-NER** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: location (LOC), organizations (ORG), person (PER) and Miscellaneous (MISC).
Specifically, this model is a *bert-base-cased* model that was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.
If you'd like to use a larger BERT-large model fine-tuned on the same dataset, a [**bert-large-NER**](https://huggingface.co/dslim/bert-large-NER/) version is also available.
### Available NER models
| Model Name | Description | Parameters |
|-------------------|-------------|------------------|
| [distilbert-NER](https://huggingface.co/dslim/distilbert-NER) **(NEW!)** | Fine-tuned DistilBERT - a smaller, faster, lighter version of BERT | 66M |
| [bert-large-NER](https://huggingface.co/dslim/bert-large-NER/) | Fine-tuned bert-large-cased - larger model with slightly better performance | 340M |
| [bert-base-NER](https://huggingface.co/dslim/bert-base-NER)-([uncased](https://huggingface.co/dslim/bert-base-NER-uncased)) | Fine-tuned bert-base, available in both cased and uncased versions | 110M |
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Wolfgang and I live in Berlin"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.
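As a possible mitigation (not part of the original card), recent `transformers` versions let the pipeline merge subword pieces into whole entities via the `aggregation_strategy` argument:

```python
from transformers import pipeline

# Group subword tokens into whole entities to reduce the fragmentation noted above
nlp = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")
print(nlp("My name is Wolfgang and I live in Berlin"))
```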
## Training data
This model was fine-tuned on English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-MISC |Beginning of a miscellaneous entity right after another miscellaneous entity
I-MISC | Miscellaneous entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organization right after another organization
I-ORG |organization
B-LOC |Beginning of a location right after another location
I-LOC |Location
### CoNLL-2003 English Dataset Statistics
This dataset was derived from the Reuters corpus which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.
#### # of training examples per entity type
Dataset|LOC|MISC|ORG|PER
-|-|-|-|-
Train|7140|3438|6321|6600
Dev|1837|922|1341|1842
Test|1668|702|1661|1617
#### # of articles/sentences/tokens per dataset
Dataset |Articles |Sentences |Tokens
-|-|-|-
Train |946 |14,987 |203,621
Dev |216 |3,466 |51,362
Test |231 |3,684 |46,435
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original BERT paper](https://arxiv.org/pdf/1810.04805) which trained & evaluated the model on CoNLL-2003 NER task.
## Eval results
metric|dev|test
-|-|-
f1 |95.1 |91.3
precision |95.0 |90.7
recall |95.3 |91.9
The test metrics are a little lower than the official Google BERT results which encoded document context & experimented with CRF. More on replicating the original results [here](https://github.com/google-research/bert/issues/223).
### BibTeX entry and citation info
```
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F. and
De Meulder, Fien",
booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
year = "2003",
url = "https://www.aclweb.org/anthology/W03-0419",
pages = "142--147",
}
```
|
{"language": "en", "license": "mit", "datasets": ["conll2003"], "model-index": [{"name": "dslim/bert-base-NER", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "config": "conll2003", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9118041001560013, "name": "Accuracy", "verified": true}, {"type": "precision", "value": 0.9211550382257732, "name": "Precision", "verified": true}, {"type": "recall", "value": 0.9306415698281261, "name": "Recall", "verified": true}, {"type": "f1", "value": 0.9258740048459675, "name": "F1", "verified": true}, {"type": "loss", "value": 0.48325642943382263, "name": "loss", "verified": true}]}]}]}
|
dslim/bert-base-NER
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"bert",
"token-classification",
"en",
"dataset:conll2003",
"arxiv:1810.04805",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1810.04805"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #onnx #safetensors #bert #token-classification #en #dataset-conll2003 #arxiv-1810.04805 #license-mit #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
|
bert-base-NER
=============
Model description
-----------------
bert-base-NER is a fine-tuned BERT model that is ready to use for Named Entity Recognition and achieves state-of-the-art performance for the NER task. It has been trained to recognize four types of entities: location (LOC), organizations (ORG), person (PER) and Miscellaneous (MISC).
Specifically, this model is a *bert-base-cased* model that was fine-tuned on the English version of the standard CoNLL-2003 Named Entity Recognition dataset.
If you'd like to use a larger BERT-large model fine-tuned on the same dataset, a bert-large-NER version is also available.
### Available NER models
Model Name: distilbert-NER (NEW!), Description: Fine-tuned DistilBERT - a smaller, faster, lighter version of BERT, Parameters: 66M
Model Name: bert-large-NER, Description: Fine-tuned bert-large-cased - larger model with slightly better performance, Parameters: 340M
Model Name: bert-base-NER-(uncased), Description: Fine-tuned bert-base, available in both cased and uncased versions, Parameters: 110M
Intended uses & limitations
---------------------------
#### How to use
You can use this model with Transformers *pipeline* for NER.
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.
Training data
-------------
This model was fine-tuned on English version of the standard CoNLL-2003 Named Entity Recognition dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
### CoNLL-2003 English Dataset Statistics
This dataset was derived from the Reuters corpus which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.
#### # of training examples per entity type
#### # of articles/sentences/tokens per dataset
Training procedure
------------------
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the original BERT paper which trained & evaluated the model on CoNLL-2003 NER task.
Eval results
------------
metric: f1, dev: 95.1, test: 91.3
metric: precision, dev: 95.0, test: 90.7
metric: recall, dev: 95.3, test: 91.9
The test metrics are a little lower than the official Google BERT results which encoded document context & experimented with CRF. More on replicating the original results here.
### BibTeX entry and citation info
|
[
"### Available NER models\n\n\nModel Name: distilbert-NER (NEW!), Description: Fine-tuned DistilBERT - a smaller, faster, lighter version of BERT, Parameters: 66M\nModel Name: bert-large-NER, Description: Fine-tuned bert-large-cased - larger model with slightly better performance, Parameters: 340M\nModel Name: bert-base-NER-(uncased), Description: Fine-tuned bert-base, available in both cased and uncased versions, Parameters: 110M\n\n\nIntended uses & limitations\n---------------------------",
"#### How to use\n\n\nYou can use this model with Transformers *pipeline* for NER.",
"#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occassionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.\n\n\nTraining data\n-------------\n\n\nThis model was fine-tuned on English version of the standard CoNLL-2003 Named Entity Recognition dataset.\n\n\nThe training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:",
"### CoNLL-2003 English Dataset Statistics\n\n\nThis dataset was derived from the Reuters corpus which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.",
"#### # of training examples per entity type",
"#### # of articles/sentences/tokens per dataset\n\n\n\nTraining procedure\n------------------\n\n\nThis model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the original BERT paper which trained & evaluated the model on CoNLL-2003 NER task.\n\n\nEval results\n------------\n\n\nmetric: f1, dev: 95.1, test: 91.3\nmetric: precision, dev: 95.0, test: 90.7\nmetric: recall, dev: 95.3, test: 91.9\n\n\nThe test metrics are a little lower than the official Google BERT results which encoded document context & experimented with CRF. More on replicating the original results here.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #onnx #safetensors #bert #token-classification #en #dataset-conll2003 #arxiv-1810.04805 #license-mit #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Available NER models\n\n\nModel Name: distilbert-NER (NEW!), Description: Fine-tuned DistilBERT - a smaller, faster, lighter version of BERT, Parameters: 66M\nModel Name: bert-large-NER, Description: Fine-tuned bert-large-cased - larger model with slightly better performance, Parameters: 340M\nModel Name: bert-base-NER-(uncased), Description: Fine-tuned bert-base, available in both cased and uncased versions, Parameters: 110M\n\n\nIntended uses & limitations\n---------------------------",
"#### How to use\n\n\nYou can use this model with Transformers *pipeline* for NER.",
"#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occassionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.\n\n\nTraining data\n-------------\n\n\nThis model was fine-tuned on English version of the standard CoNLL-2003 Named Entity Recognition dataset.\n\n\nThe training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:",
"### CoNLL-2003 English Dataset Statistics\n\n\nThis dataset was derived from the Reuters corpus which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.",
"#### # of training examples per entity type",
"#### # of articles/sentences/tokens per dataset\n\n\n\nTraining procedure\n------------------\n\n\nThis model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the original BERT paper which trained & evaluated the model on CoNLL-2003 NER task.\n\n\nEval results\n------------\n\n\nmetric: f1, dev: 95.1, test: 91.3\nmetric: precision, dev: 95.0, test: 90.7\nmetric: recall, dev: 95.3, test: 91.9\n\n\nThe test metrics are a little lower than the official Google BERT results which encoded document context & experimented with CRF. More on replicating the original results here.",
"### BibTeX entry and citation info"
] |
token-classification
|
transformers
|
# bert-large-NER
## Model description
**bert-large-NER** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: location (LOC), organizations (ORG), person (PER) and Miscellaneous (MISC).
Specifically, this model is a *bert-large-cased* model that was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.
If you'd like to use a smaller BERT model fine-tuned on the same dataset, a [**bert-base-NER**](https://huggingface.co/dslim/bert-base-NER/) version is also available.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("dslim/bert-large-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-large-NER")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Wolfgang and I live in Berlin"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.
## Training data
This model was fine-tuned on English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-MISC |Beginning of a miscellaneous entity right after another miscellaneous entity
I-MISC | Miscellaneous entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organization right after another organization
I-ORG |organization
B-LOC |Beginning of a location right after another location
I-LOC |Location
### CoNLL-2003 English Dataset Statistics
This dataset was derived from the Reuters corpus which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.
#### # of training examples per entity type
Dataset|LOC|MISC|ORG|PER
-|-|-|-|-
Train|7140|3438|6321|6600
Dev|1837|922|1341|1842
Test|1668|702|1661|1617
#### # of articles/sentences/tokens per dataset
Dataset |Articles |Sentences |Tokens
-|-|-|-
Train |946 |14,987 |203,621
Dev |216 |3,466 |51,362
Test |231 |3,684 |46,435
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original BERT paper](https://arxiv.org/pdf/1810.04805) which trained & evaluated the model on CoNLL-2003 NER task.
## Eval results
metric|dev|test
-|-|-
f1 |95.7 |91.7
precision |95.3 |91.2
recall |96.1 |92.3
The test metrics are a little lower than the official Google BERT results which encoded document context & experimented with CRF. More on replicating the original results [here](https://github.com/google-research/bert/issues/223).
### BibTeX entry and citation info
```
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F. and
De Meulder, Fien",
booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
year = "2003",
url = "https://www.aclweb.org/anthology/W03-0419",
pages = "142--147",
}
```
|
{"language": "en", "license": "mit", "datasets": ["conll2003"], "model-index": [{"name": "dslim/bert-large-NER", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "config": "conll2003", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9031688753722759, "name": "Accuracy", "verified": true}, {"type": "precision", "value": 0.920025068328604, "name": "Precision", "verified": true}, {"type": "recall", "value": 0.9193688678588825, "name": "Recall", "verified": true}, {"type": "f1", "value": 0.9196968510445761, "name": "F1", "verified": true}, {"type": "loss", "value": 0.5085050463676453, "name": "loss", "verified": true}]}]}]}
|
dslim/bert-large-NER
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"bert",
"token-classification",
"en",
"dataset:conll2003",
"arxiv:1810.04805",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1810.04805"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #onnx #safetensors #bert #token-classification #en #dataset-conll2003 #arxiv-1810.04805 #license-mit #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
|
bert-large-NER
==============
Model description
-----------------
bert-large-NER is a fine-tuned BERT model that is ready to use for Named Entity Recognition and achieves state-of-the-art performance for the NER task. It has been trained to recognize four types of entities: location (LOC), organizations (ORG), person (PER) and Miscellaneous (MISC).
Specifically, this model is a *bert-large-cased* model that was fine-tuned on the English version of the standard CoNLL-2003 Named Entity Recognition dataset.
If you'd like to use a smaller BERT model fine-tuned on the same dataset, a bert-base-NER version is also available.
Intended uses & limitations
---------------------------
#### How to use
You can use this model with Transformers *pipeline* for NER.
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.
Training data
-------------
This model was fine-tuned on English version of the standard CoNLL-2003 Named Entity Recognition dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
### CoNLL-2003 English Dataset Statistics
This dataset was derived from the Reuters corpus which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.
#### # of training examples per entity type
#### # of articles/sentences/tokens per dataset
Training procedure
------------------
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the original BERT paper which trained & evaluated the model on CoNLL-2003 NER task.
Eval results
------------
metric: f1, dev: 95.7, test: 91.7
metric: precision, dev: 95.3, test: 91.2
metric: recall, dev: 96.1, test: 92.3
The test metrics are a little lower than the official Google BERT results which encoded document context & experimented with CRF. More on replicating the original results here.
### BibTeX entry and citation info
|
[
"#### How to use\n\n\nYou can use this model with Transformers *pipeline* for NER.",
"#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occassionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.\n\n\nTraining data\n-------------\n\n\nThis model was fine-tuned on English version of the standard CoNLL-2003 Named Entity Recognition dataset.\n\n\nThe training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:",
"### CoNLL-2003 English Dataset Statistics\n\n\nThis dataset was derived from the Reuters corpus which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.",
"#### # of training examples per entity type",
"#### # of articles/sentences/tokens per dataset\n\n\n\nTraining procedure\n------------------\n\n\nThis model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the original BERT paper which trained & evaluated the model on CoNLL-2003 NER task.\n\n\nEval results\n------------\n\n\nmetric: f1, dev: 95.7, test: 91.7\nmetric: precision, dev: 95.3, test: 91.2\nmetric: recall, dev: 96.1, test: 92.3\n\n\nThe test metrics are a little lower than the official Google BERT results which encoded document context & experimented with CRF. More on replicating the original results here.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #onnx #safetensors #bert #token-classification #en #dataset-conll2003 #arxiv-1810.04805 #license-mit #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"#### How to use\n\n\nYou can use this model with Transformers *pipeline* for NER.",
"#### Limitations and bias\n\n\nThis model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occassionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.\n\n\nTraining data\n-------------\n\n\nThis model was fine-tuned on English version of the standard CoNLL-2003 Named Entity Recognition dataset.\n\n\nThe training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:",
"### CoNLL-2003 English Dataset Statistics\n\n\nThis dataset was derived from the Reuters corpus which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.",
"#### # of training examples per entity type",
"#### # of articles/sentences/tokens per dataset\n\n\n\nTraining procedure\n------------------\n\n\nThis model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the original BERT paper which trained & evaluated the model on CoNLL-2003 NER task.\n\n\nEval results\n------------\n\n\nmetric: f1, dev: 95.7, test: 91.7\nmetric: precision, dev: 95.3, test: 91.2\nmetric: recall, dev: 96.1, test: 92.3\n\n\nThe test metrics are a little lower than the official Google BERT results which encoded document context & experimented with CRF. More on replicating the original results here.",
"### BibTeX entry and citation info"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 36839110
- CO2 Emissions (in grams): 123.79523392848652
## Validation Metrics
- Loss: 0.17188367247581482
- Accuracy: 0.9714953271028037
- Precision: 0.9917948717948718
- Recall: 0.9480392156862745
- AUC: 0.9947452731092438
- F1: 0.9694235588972432
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/dtam/autonlp-covid-fake-news-36839110
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("dtam/autonlp-covid-fake-news-36839110", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dtam/autonlp-covid-fake-news-36839110", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "unk", "tags": "autonlp", "datasets": ["dtam/autonlp-data-covid-fake-news"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 123.79523392848652}
|
dtam/autonlp-covid-fake-news-36839110
| null |
[
"transformers",
"pytorch",
"albert",
"text-classification",
"autonlp",
"unk",
"dataset:dtam/autonlp-data-covid-fake-news",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"unk"
] |
TAGS
#transformers #pytorch #albert #text-classification #autonlp #unk #dataset-dtam/autonlp-data-covid-fake-news #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 36839110
- CO2 Emissions (in grams): 123.79523392848652
## Validation Metrics
- Loss: 0.17188367247581482
- Accuracy: 0.9714953271028037
- Precision: 0.9917948717948718
- Recall: 0.9480392156862745
- AUC: 0.9947452731092438
- F1: 0.9694235588972432
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 36839110\n- CO2 Emissions (in grams): 123.79523392848652",
"## Validation Metrics\n\n- Loss: 0.17188367247581482\n- Accuracy: 0.9714953271028037\n- Precision: 0.9917948717948718\n- Recall: 0.9480392156862745\n- AUC: 0.9947452731092438\n- F1: 0.9694235588972432",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #albert #text-classification #autonlp #unk #dataset-dtam/autonlp-data-covid-fake-news #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 36839110\n- CO2 Emissions (in grams): 123.79523392848652",
"## Validation Metrics\n\n- Loss: 0.17188367247581482\n- Accuracy: 0.9714953271028037\n- Precision: 0.9917948717948718\n- Recall: 0.9480392156862745\n- AUC: 0.9947452731092438\n- F1: 0.9694235588972432",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-classification
|
transformers
|
# RoBERTa base finetuned for Spanish irony detection
## Model description
Model to perform irony detection in Spanish. This is a finetuned version of the [RoBERTa-base-bne model](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the [IroSvA](https://www.autoritas.net/IroSvA2019/) corpus. Only the Spanish from Spain variant was used in the training process. It comprises 2,400 tweets labeled as ironic/non-ironic.
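A minimal usage sketch with the Transformers `pipeline`; the exact label names returned (e.g. `LABEL_0`/`LABEL_1` versus `ironic`/`non-ironic`) depend on the mapping stored in the model config, so treat the interpretation in the comments as an assumption:
```python
from transformers import pipeline

# Load the fine-tuned irony classifier
irony_classifier = pipeline("text-classification", model="dtomas/roberta-base-bne-irony")

print(irony_classifier("¡Cómo disfruto peleándome con los Transformers!"))  # ironic example from the widget
print(irony_classifier("Madrid es la capital de España"))  # non-ironic example from the widget
```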
|
{"language": ["es"], "tags": ["irony", "sarcasm", "spanish"], "widget": [{"text": "\u00a1C\u00f3mo disfruto pele\u00e1ndome con los Transformers!", "example_title": "Ironic"}, {"text": "Madrid es la capital de Espa\u00f1a", "example_title": "Non ironic"}]}
|
dtomas/roberta-base-bne-irony
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"irony",
"sarcasm",
"spanish",
"es",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"es"
] |
TAGS
#transformers #pytorch #roberta #text-classification #irony #sarcasm #spanish #es #autotrain_compatible #endpoints_compatible #region-us
|
# RoBERTa base finetuned for Spanish irony detection
## Model description
Model to perform irony detection in Spanish. This is a finetuned version of the RoBERTa-base-bne model on the IroSvA corpus. Only the Spanish from Spain variant was used in the training process. It comprises 2,400 tweets labeled as ironic/non-ironic.
|
[
"# RoBERTa base finetuned for Spanish irony detection",
"## Model description\n\nModel to perform irony detection in Spanish. This is a finetuned version of the RoBERTa-base-bne model on the IroSvA corpus. Only the Spanish from Spain variant was used in the training process. It comprises 2,400 tweets labeled as ironic/non-ironic."
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #irony #sarcasm #spanish #es #autotrain_compatible #endpoints_compatible #region-us \n",
"# RoBERTa base finetuned for Spanish irony detection",
"## Model description\n\nModel to perform irony detection in Spanish. This is a finetuned version of the RoBERTa-base-bne model on the IroSvA corpus. Only the Spanish from Spain variant was used in the training process. It comprises 2,400 tweets labeled as ironic/non-ironic."
] |
fill-mask
|
transformers
|
<h1>BERT for Vietnamese Law</h1>
Applied to Task 1: Legal Document Retrieval on the <a href="https://www.jaist.ac.jp/is/labs/nguyen-lab/home/alqac-2021/">ALQAC 2021</a> dataset.
The model achieved 0.80 on the leaderboard (the 1st-place score was 0.88).
We use <a href="https://huggingface.co/NlpHUST/vibert4news-base-cased">vibert4news</a> as the base model and fine-tune it on our own Vietnamese law dataset.
We use word SentencePiece, basic BERT tokenization, and the same config as BERT base, with lowercase = False.
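A minimal fill-mask sketch, assuming the checkpoint exposes a masked-language-modeling head; the example sentence is a hypothetical Vietnamese legal phrase, and the tokenizer's own mask token is used rather than hard-coding `[MASK]`:
```python
from transformers import pipeline

# Load the model for masked-token prediction (MLM head assumed)
fill_mask = pipeline("fill-mask", model="ductuan024/AimeLaw")

# Hypothetical sentence: "An employment contract must be made in [MASK] copies."
masked = f"Hợp đồng lao động phải được lập thành {fill_mask.tokenizer.mask_token} bản."
print(fill_mask(masked))
```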
|
{}
|
ductuan024/AimeLaw
| null |
[
"transformers",
"pytorch",
"ibert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #ibert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
<h1>BERT for Vietnamese Law</h1>
Applied to Task 1: Legal Document Retrieval on the ALQAC 2021 dataset.
The model achieved 0.80 on the leaderboard (the 1st-place score was 0.88).
We use vibert4news as the base model and fine-tune it on our own Vietnamese law dataset.
We use word SentencePiece, basic BERT tokenization, and the same config as BERT base, with lowercase = False.
|
[] |
[
"TAGS\n#transformers #pytorch #ibert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# RDBotv1 DialoGPT Model
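A minimal chat sketch following the usual DialoGPT generation pattern; the prompt and generation settings below are illustrative rather than taken from the original training setup:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dukeme/DialoGPT-small-RDBotv1")
model = AutoModelForCausalLM.from_pretrained("dukeme/DialoGPT-small-RDBotv1")

# Encode a user message, append the end-of-sequence token, and generate a reply
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```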
|
{"tags": ["conversational"]}
|
dukeme/DialoGPT-small-RDBotv1
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# RDBotv1 DialoGPT Model
|
[
"# RDBotv1 DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# RDBotv1 DialoGPT Model"
] |
fill-mask
|
transformers
|
# bert-base-romanian-cased-v1
The BERT **base**, **cased** model for Romanian, trained on a 15GB corpus, version 
### How to use
```python
from transformers import AutoTokenizer, AutoModel
import torch
# load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("dumitrescustefan/bert-base-romanian-cased-v1")
model = AutoModel.from_pretrained("dumitrescustefan/bert-base-romanian-cased-v1")
# tokenize a sentence and run through the model
input_ids = torch.tensor(tokenizer.encode("Acesta este un test.", add_special_tokens=True)).unsqueeze(0) # Batch size 1
outputs = model(input_ids)
# get encoding
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
```
Remember to always sanitize your text! Convert ``s`` and ``t`` cedilla-letters to comma-letters with:
```
text = text.replace("ţ", "ț").replace("ş", "ș").replace("Ţ", "Ț").replace("Ş", "Ș")
```
because the model was **NOT** trained on cedilla ``s`` and ``t``s. If you don't, you will have decreased performance due to ``<UNK>``s and increased number of tokens per word.
### Evaluation
Evaluation is performed on Universal Dependencies [Romanian RRT](https://universaldependencies.org/treebanks/ro_rrt/index.html) UPOS, XPOS and LAS, and on a NER task based on [RONEC](https://github.com/dumitrescustefan/ronec). Details, as well as more in-depth tests not shown here, are given in the dedicated [evaluation page](https://github.com/dumitrescustefan/Romanian-Transformers/tree/master/evaluation/README.md).
The baseline is the [Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) model ``bert-base-multilingual-(un)cased``, as at the time of writing it was the only available BERT model that works on Romanian.
| Model | UPOS | XPOS | NER | LAS |
|--------------------------------|:-----:|:------:|:-----:|:-----:|
| bert-base-multilingual-cased | 97.87 | 96.16 | 84.13 | 88.04 |
| bert-base-romanian-cased-v1 | **98.00** | **96.46** | **85.88** | **89.69** |
### Corpus
The model is trained on the following corpora (stats in the table below are after cleaning):
| Corpus | Lines(M) | Words(M) | Chars(B) | Size(GB) |
|-----------|:--------:|:--------:|:--------:|:--------:|
| OPUS | 55.05 | 635.04 | 4.045 | 3.8 |
| OSCAR | 33.56 | 1725.82 | 11.411 | 11 |
| Wikipedia | 1.54 | 60.47 | 0.411 | 0.4 |
| **Total** | **90.15** | **2421.33** | **15.867** | **15.2** |
### Citation
If you use this model in a research paper, I'd kindly ask you to cite the following paper:
```
Stefan Dumitrescu, Andrei-Marius Avram, and Sampo Pyysalo. 2020. The birth of Romanian BERT. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4324–4328, Online. Association for Computational Linguistics.
```
or, in bibtex:
```
@inproceedings{dumitrescu-etal-2020-birth,
title = "The birth of {R}omanian {BERT}",
author = "Dumitrescu, Stefan and
Avram, Andrei-Marius and
Pyysalo, Sampo",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.387",
doi = "10.18653/v1/2020.findings-emnlp.387",
pages = "4324--4328",
}
```
#### Acknowledgements
- We'd like to thank [Sampo Pyysalo](https://github.com/spyysalo) from TurkuNLP for helping us out with the compute needed to pretrain the v1.0 BERT models. He's awesome!
|
{"language": "ro", "license": "mit", "tags": ["bert", "fill-mask"]}
|
dumitrescustefan/bert-base-romanian-cased-v1
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"ro",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ro"
] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #ro #license-mit #endpoints_compatible #has_space #region-us
|
bert-base-romanian-cased-v1
===========================
The BERT base, cased model for Romanian, trained on a 15GB corpus, version v1.0
### How to use
Remember to always sanitize your text! Replace ''s'' and ''t'' cedilla-letters to comma-letters with :
because the model was NOT trained on cedilla ''s'' and ''t''s. If you don't, you will have decreased performance due to ''''s and increased number of tokens per word.
### Evaluation
Evaluation is performed on Universal Dependencies Romanian RRT UPOS, XPOS and LAS, and on a NER task based on RONEC. Details, as well as more in-depth tests not shown here, are given in the dedicated evaluation page.
The baseline is the Multilingual BERT model ''bert-base-multilingual-(un)cased'', as at the time of writing it was the only available BERT model that works on Romanian.
### Corpus
The model is trained on the following corpora (stats in the table below are after cleaning):
If you use this model in a research paper, I'd kindly ask you to cite the following paper:
or, in bibtex:
#### Acknowledgements
* We'd like to thank Sampo Pyysalo from TurkuNLP for helping us out with the compute needed to pretrain the v1.0 BERT models. He's awesome!
|
[
"### How to use\n\n\nRemember to always sanitize your text! Replace ''s'' and ''t'' cedilla-letters to comma-letters with :\n\n\nbecause the model was NOT trained on cedilla ''s'' and ''t''s. If you don't, you will have decreased performance due to ''''s and increased number of tokens per word.",
"### Evaluation\n\n\nEvaluation is performed on Universal Dependencies Romanian RRT UPOS, XPOS and LAS, and on a NER task based on RONEC. Details, as well as more in-depth tests not shown here, are given in the dedicated evaluation page.\n\n\nThe baseline is the Multilingual BERT model ''bert-base-multilingual-(un)cased'', as at the time of writing it was the only available BERT model that works on Romanian.",
"### Corpus\n\n\nThe model is trained on the following corpora (stats in the table below are after cleaning):\n\n\n\nIf you use this model in a research paper, I'd kindly ask you to cite the following paper:\n\n\nor, in bibtex:",
"#### Acknowledgements\n\n\n* We'd like to thank Sampo Pyysalo from TurkuNLP for helping us out with the compute needed to pretrain the v1.0 BERT models. He's awesome!"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #ro #license-mit #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nRemember to always sanitize your text! Replace ''s'' and ''t'' cedilla-letters to comma-letters with :\n\n\nbecause the model was NOT trained on cedilla ''s'' and ''t''s. If you don't, you will have decreased performance due to ''''s and increased number of tokens per word.",
"### Evaluation\n\n\nEvaluation is performed on Universal Dependencies Romanian RRT UPOS, XPOS and LAS, and on a NER task based on RONEC. Details, as well as more in-depth tests not shown here, are given in the dedicated evaluation page.\n\n\nThe baseline is the Multilingual BERT model ''bert-base-multilingual-(un)cased'', as at the time of writing it was the only available BERT model that works on Romanian.",
"### Corpus\n\n\nThe model is trained on the following corpora (stats in the table below are after cleaning):\n\n\n\nIf you use this model in a research paper, I'd kindly ask you to cite the following paper:\n\n\nor, in bibtex:",
"#### Acknowledgements\n\n\n* We'd like to thank Sampo Pyysalo from TurkuNLP for helping us out with the compute needed to pretrain the v1.0 BERT models. He's awesome!"
] |
token-classification
|
transformers
|
# bert-base-romanian-ner
Updated: 21.01.2022
## Model description
**bert-base-romanian-ner** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize **15** types of entities: persons, geo-political entities, locations, organizations, languages, national_religious_political entities, datetime, period, quantity, money, numeric, ordinal, facilities, works of art and events.
Specifically, this model is a [bert-base-romanian-cased-v1](https://huggingface.co/dumitrescustefan/bert-base-romanian-cased-v1) model that was fine-tuned on [RONEC version 2.0](https://github.com/dumitrescustefan/ronec), which holds 12330 sentences with over 0.5M tokens, to a total of 80.283 distinctly annotated entities. RONECv2 is a BIO2 annotated corpus, meaning this model will generate "B-" and "I-" style labels for entities.
The model will generate labels according to the following list: ['O', 'B-PERSON', 'I-PERSON', 'B-ORG', 'I-ORG', 'B-GPE', 'I-GPE', 'B-LOC', 'I-LOC', 'B-NAT_REL_POL', 'I-NAT_REL_POL', 'B-EVENT', 'I-EVENT', 'B-LANGUAGE', 'I-LANGUAGE', 'B-WORK_OF_ART', 'I-WORK_OF_ART', 'B-DATETIME', 'I-DATETIME', 'B-PERIOD', 'I-PERIOD', 'B-MONEY', 'I-MONEY', 'B-QUANTITY', 'I-QUANTITY', 'B-NUMERIC', 'I-NUMERIC', 'B-ORDINAL', 'I-ORDINAL', 'B-FACILITY', 'I-FACILITY']. Label 'O' represents Other.
### How to use
There are 2 ways to use this model:
#### Directly in Transformers:
You can use this model with Transformers *pipeline* for NER; you will have to handle word tokenization in multiple subtokens cases with different labels.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("dumitrescustefan/bert-base-romanian-ner")
model = AutoModelForTokenClassification.from_pretrained("dumitrescustefan/bert-base-romanian-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Alex cumpără un bilet pentru trenul 3118 în direcția Cluj cu plecare la ora 13:00."
ner_results = nlp(example)
print(ner_results)
```
#### Use in a Python package
``pip install roner``
Easy, takes care of word-token alignment, long sequences, etc. See details at [https://github.com/dumitrescustefan/roner](https://github.com/dumitrescustefan/roner)
#### Don't forget!
Remember to always sanitize your text! Convert _s_ and _t_ cedilla-letters to comma-letters **before processing your text** with these models, using:
```
text = text.replace("ţ", "ț").replace("ş", "ș").replace("Ţ", "Ț").replace("Ş", "Ș")
```
## NER evaluation results
```
'test/ent_type': 0.9276865720748901,
'test/exact': 0.9118986129760742,
'test/partial': 0.9356381297111511,
'test/strict': 0.8921924233436584
```
## Corpus details
The corpus has the following classes and distribution in the train/valid/test splits:
| Classes | Total | Train | | Valid | | Test | |
|------------- |:------: |:------: |:-------: |:------: |:-------: |:------: |:-------: |
| | # | # | % | # | % | # | % |
| PERSON | **26130** | 19167 | 73.35 | 2733 | 10.46 | 4230 | 16.19 |
| GPE | **11103** | 8193 | 73.79 | 1182 | 10.65 | 1728 | 15.56 |
| LOC | **2467** | 1824 | 73.94 | 270 | 10.94 | 373 | 15.12 |
| ORG | **7880** | 5688 | 72.18 | 880 | 11.17 | 1312 | 16.65 |
| LANGUAGE | **467** | 342 | 73.23 | 52 | 11.13 | 73 | 15.63 |
| NAT_REL_POL | **4970** | 3673 | 73.90 | 516 | 10.38 | 781 | 15.71 |
| DATETIME | **9614** | 6960 | 72.39 | 1029 | 10.7 | 1625 | 16.9 |
| PERIOD | **1188** | 862 | 72.56 | 129 | 10.86 | 197 | 16.58 |
| QUANTITY | **1588** | 1161 | 73.11 | 181 | 11.4 | 246 | 15.49 |
| MONEY | **1424** | 1041 | 73.10 | 159 | 11.17 | 224 | 15.73 |
| NUMERIC | **7735** | 5734 | 74.13 | 814 | 10.52 | 1187 | 15.35 |
| ORDINAL | **1893** | 1377 | 72.74 | 212 | 11.2 | 304 | 16.06 |
| FACILITY | **1126** | 840 | 74.6 | 113 | 10.04 | 173 | 15.36 |
| WORK_OF_ART | **1596** | 1157 | 72.49 | 176 | 11.03 | 263 | 16.48 |
| EVENT | **1102** | 826 | 74.95 | 107 | 9.71 | 169 | 15.34 |
### BibTeX entry and citation info
Please consider citing the following [paper](https://arxiv.org/abs/1909.01247) as a thank you to the authors of the RONEC, even if it describes v1 of the corpus and you are using a model trained on v2:
```
Dumitrescu, Stefan Daniel, and Andrei-Marius Avram. "Introducing RONEC--the Romanian Named Entity Corpus." arXiv preprint arXiv:1909.01247 (2019).
```
or in .bibtex format:
```
@article{dumitrescu2019introducing,
title={Introducing RONEC--the Romanian Named Entity Corpus},
author={Dumitrescu, Stefan Daniel and Avram, Andrei-Marius},
journal={arXiv preprint arXiv:1909.01247},
year={2019}
}
```
|
{"language": "ro", "license": "mit", "datasets": ["ronec"]}
|
dumitrescustefan/bert-base-romanian-ner
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"ro",
"dataset:ronec",
"arxiv:1909.01247",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1909.01247"
] |
[
"ro"
] |
TAGS
#transformers #pytorch #bert #token-classification #ro #dataset-ronec #arxiv-1909.01247 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
bert-base-romanian-ner
======================
Updated: 21.01.2022
Model description
-----------------
bert-base-romanian-ner is a fine-tuned BERT model that is ready to use for Named Entity Recognition and achieves state-of-the-art performance for the NER task. It has been trained to recognize 15 types of entities: persons, geo-political entities, locations, organizations, languages, national\_religious\_political entities, datetime, period, quantity, money, numeric, ordinal, facilities, works of art and events.
Specifically, this model is a bert-base-romanian-cased-v1 model that was fine-tuned on RONEC version 2.0, which holds 12330 sentences with over 0.5M tokens, to a total of 80.283 distinctly annotated entities. RONECv2 is a BIO2 annotated corpus, meaning this model will generate "B-" and "I-" style labels for entities.
The model will generate labels according to the following list: ['O', 'B-PERSON', 'I-PERSON', 'B-ORG', 'I-ORG', 'B-GPE', 'I-GPE', 'B-LOC', 'I-LOC', 'B-NAT\_REL\_POL', 'I-NAT\_REL\_POL', 'B-EVENT', 'I-EVENT', 'B-LANGUAGE', 'I-LANGUAGE', 'B-WORK\_OF\_ART', 'I-WORK\_OF\_ART', 'B-DATETIME', 'I-DATETIME', 'B-PERIOD', 'I-PERIOD', 'B-MONEY', 'I-MONEY', 'B-QUANTITY', 'I-QUANTITY', 'B-NUMERIC', 'I-NUMERIC', 'B-ORDINAL', 'I-ORDINAL', 'B-FACILITY', 'I-FACILITY']. Label 'O' represents Other.
### How to use
There are 2 ways to use this model:
#### Directly in Transformers:
You can use this model with Transformers *pipeline* for NER; you will have to handle word tokenization in multiple subtokens cases with different labels.
#### Use in a Python package
''pip install roner''
Easy, takes care of word-token alignment, long sequences, etc. See details at URL
#### Don't forget!
Remember to always sanitize your text! Replace *s* and *t* cedilla-letters to comma-letters before processing your text with these models, with :
NER evaluation results
----------------------
Corpus details
--------------
The corpus has the following classes and distribution in the train/valid/test splits:
### BibTeX entry and citation info
Please consider citing the following paper as a thank you to the authors of the RONEC, even if it describes v1 of the corpus and you are using a model trained on v2:
or in .bibtex format:
|
[
"### How to use\n\n\nThere are 2 ways to use this model:",
"#### Directly in Transformers:\n\n\nYou can use this model with Transformers *pipeline* for NER; you will have to handle word tokenization in multiple subtokens cases with different labels.",
"#### Use in a Python package\n\n\n''pip install roner''\n\n\nEasy, takes care of word-token alignment, long sequences, etc. See details at URL",
"#### Don't forget!\n\n\nRemember to always sanitize your text! Replace *s* and *t* cedilla-letters to comma-letters before processing your text with these models, with :\n\n\nNER evaluation results\n----------------------\n\n\nCorpus details\n--------------\n\n\nThe corpus has the following classes and distribution in the train/valid/test splits:",
"### BibTeX entry and citation info\n\n\nPlease consider citing the following paper as a thank you to the authors of the RONEC, even if it describes v1 of the corpus and you are using a model trained on v2:\n\n\nor in .bibtex format:"
] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #ro #dataset-ronec #arxiv-1909.01247 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nThere are 2 ways to use this model:",
"#### Directly in Transformers:\n\n\nYou can use this model with Transformers *pipeline* for NER; you will have to handle word tokenization in multiple subtokens cases with different labels.",
"#### Use in a Python package\n\n\n''pip install roner''\n\n\nEasy, takes care of word-token alignment, long sequences, etc. See details at URL",
"#### Don't forget!\n\n\nRemember to always sanitize your text! Replace *s* and *t* cedilla-letters to comma-letters before processing your text with these models, with :\n\n\nNER evaluation results\n----------------------\n\n\nCorpus details\n--------------\n\n\nThe corpus has the following classes and distribution in the train/valid/test splits:",
"### BibTeX entry and citation info\n\n\nPlease consider citing the following paper as a thank you to the authors of the RONEC, even if it describes v1 of the corpus and you are using a model trained on v2:\n\n\nor in .bibtex format:"
] |
fill-mask
|
transformers
|
# bert-base-romanian-uncased-v1
The BERT **base**, **uncased** model for Romanian, trained on a 15GB corpus, version 
### How to use
```python
from transformers import AutoTokenizer, AutoModel
import torch
# load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("dumitrescustefan/bert-base-romanian-uncased-v1", do_lower_case=True)
model = AutoModel.from_pretrained("dumitrescustefan/bert-base-romanian-uncased-v1")
# tokenize a sentence and run through the model
input_ids = torch.tensor(tokenizer.encode("Acesta este un test.", add_special_tokens=True)).unsqueeze(0) # Batch size 1
outputs = model(input_ids)
# get encoding
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
```
Remember to always sanitize your text! Convert ``s`` and ``t`` cedilla-letters to comma-letters with:
```
text = text.replace("ţ", "ț").replace("ş", "ș").replace("Ţ", "Ț").replace("Ş", "Ș")
```
because the model was **NOT** trained on cedilla ``s`` and ``t``s. If you don't, you will have decreased performance due to ``<UNK>``s and increased number of tokens per word.
### Evaluation
Evaluation is performed on Universal Dependencies [Romanian RRT](https://universaldependencies.org/treebanks/ro_rrt/index.html) UPOS, XPOS and LAS, and on a NER task based on [RONEC](https://github.com/dumitrescustefan/ronec). Details, as well as more in-depth tests not shown here, are given in the dedicated [evaluation page](https://github.com/dumitrescustefan/Romanian-Transformers/tree/master/evaluation/README.md).
The baseline is the [Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) model ``bert-base-multilingual-(un)cased``, as at the time of writing it was the only available BERT model that works on Romanian.
| Model | UPOS | XPOS | NER | LAS |
|--------------------------------|:-----:|:------:|:-----:|:-----:|
| bert-base-multilingual-uncased | 97.65 | 95.72 | 83.91 | 87.65 |
| bert-base-romanian-uncased-v1 | **98.18** | **96.84** | **85.26** | **89.61** |
### Corpus
The model is trained on the following corpora (stats in the table below are after cleaning):
| Corpus | Lines(M) | Words(M) | Chars(B) | Size(GB) |
|-----------|:--------:|:--------:|:--------:|:--------:|
| OPUS | 55.05 | 635.04 | 4.045 | 3.8 |
| OSCAR | 33.56 | 1725.82 | 11.411 | 11 |
| Wikipedia | 1.54 | 60.47 | 0.411 | 0.4 |
| **Total** | **90.15** | **2421.33** | **15.867** | **15.2** |
### Citation
If you use this model in a research paper, I'd kindly ask you to cite the following paper:
```
Stefan Dumitrescu, Andrei-Marius Avram, and Sampo Pyysalo. 2020. The birth of Romanian BERT. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4324–4328, Online. Association for Computational Linguistics.
```
or, in bibtex:
```
@inproceedings{dumitrescu-etal-2020-birth,
title = "The birth of {R}omanian {BERT}",
author = "Dumitrescu, Stefan and
Avram, Andrei-Marius and
Pyysalo, Sampo",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.387",
doi = "10.18653/v1/2020.findings-emnlp.387",
pages = "4324--4328",
}
```
#### Acknowledgements
- We'd like to thank [Sampo Pyysalo](https://github.com/spyysalo) from TurkuNLP for helping us out with the compute needed to pretrain the v1.0 BERT models. He's awesome!
|
{"language": "ro", "license": "mit", "tags": ["bert", "fill-mask"]}
|
dumitrescustefan/bert-base-romanian-uncased-v1
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"ro",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ro"
] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #ro #license-mit #endpoints_compatible #region-us
|
bert-base-romanian-uncased-v1
=============================
The BERT base, uncased model for Romanian, trained on a 15GB corpus, version v1.0
### How to use
Remember to always sanitize your text! Replace ''s'' and ''t'' cedilla-letters to comma-letters with :
because the model was NOT trained on cedilla ''s'' and ''t''s. If you don't, you will have decreased performance due to ''''s and increased number of tokens per word.
### Evaluation
Evaluation is performed on Universal Dependencies Romanian RRT UPOS, XPOS and LAS, and on a NER task based on RONEC. Details, as well as more in-depth tests not shown here, are given in the dedicated evaluation page.
The baseline is the Multilingual BERT model ''bert-base-multilingual-(un)cased'', as at the time of writing it was the only available BERT model that works on Romanian.
### Corpus
The model is trained on the following corpora (stats in the table below are after cleaning):
If you use this model in a research paper, I'd kindly ask you to cite the following paper:
or, in bibtex:
#### Acknowledgements
* We'd like to thank Sampo Pyysalo from TurkuNLP for helping us out with the compute needed to pretrain the v1.0 BERT models. He's awesome!
|
[
"### How to use\n\n\nRemember to always sanitize your text! Replace ''s'' and ''t'' cedilla-letters to comma-letters with :\n\n\nbecause the model was NOT trained on cedilla ''s'' and ''t''s. If you don't, you will have decreased performance due to ''''s and increased number of tokens per word.",
"### Evaluation\n\n\nEvaluation is performed on Universal Dependencies Romanian RRT UPOS, XPOS and LAS, and on a NER task based on RONEC. Details, as well as more in-depth tests not shown here, are given in the dedicated evaluation page.\n\n\nThe baseline is the Multilingual BERT model ''bert-base-multilingual-(un)cased'', as at the time of writing it was the only available BERT model that works on Romanian.",
"### Corpus\n\n\nThe model is trained on the following corpora (stats in the table below are after cleaning):\n\n\n\nIf you use this model in a research paper, I'd kindly ask you to cite the following paper:\n\n\nor, in bibtex:",
"#### Acknowledgements\n\n\n* We'd like to thank Sampo Pyysalo from TurkuNLP for helping us out with the compute needed to pretrain the v1.0 BERT models. He's awesome!"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #ro #license-mit #endpoints_compatible #region-us \n",
"### How to use\n\n\nRemember to always sanitize your text! Replace ''s'' and ''t'' cedilla-letters to comma-letters with :\n\n\nbecause the model was NOT trained on cedilla ''s'' and ''t''s. If you don't, you will have decreased performance due to ''''s and increased number of tokens per word.",
"### Evaluation\n\n\nEvaluation is performed on Universal Dependencies Romanian RRT UPOS, XPOS and LAS, and on a NER task based on RONEC. Details, as well as more in-depth tests not shown here, are given in the dedicated evaluation page.\n\n\nThe baseline is the Multilingual BERT model ''bert-base-multilingual-(un)cased'', as at the time of writing it was the only available BERT model that works on Romanian.",
"### Corpus\n\n\nThe model is trained on the following corpora (stats in the table below are after cleaning):\n\n\n\nIf you use this model in a research paper, I'd kindly ask you to cite the following paper:\n\n\nor, in bibtex:",
"#### Acknowledgements\n\n\n* We'd like to thank Sampo Pyysalo from TurkuNLP for helping us out with the compute needed to pretrain the v1.0 BERT models. He's awesome!"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Lithuanian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Lithuanian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("dundar/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("dundar/wav2vec2-large-xlsr-53-lithuanian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Lithuanian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lt", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("dundar/wav2vec2-large-xlsr-53-lithuanian")
model = Wav2Vec2ForCTC.from_pretrained("dundar/wav2vec2-large-xlsr-53-lithuanian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set and collect predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 35.87 %
## Training
The Common Voice datasets, except for the `test` set, were used for training.
The script used for training can be found [here](https://github.com/ebdundar/)
|
{"language": "lt", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Lithuanian by Enes Burak Dundar", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice lt", "type": "common_voice", "args": "lt"}, "metrics": [{"type": "wer", "value": 35.87, "name": "Test WER"}]}]}]}
|
dundar/wav2vec2-large-xlsr-53-lithuanian
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"lt",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"lt"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #lt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Lithuanian
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Lithuanian using the Common Voice
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Lithuanian test data of Common Voice.
Test Result: 35.87 %
## Training
The Common Voice datasets 'except the test' set were used for training.
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-53-Lithuanian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Lithuanian using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Lithuanian test data of Common Voice.\n\n\n\n\nTest Result: 35.87 %",
"## Training\n\nThe Common Voice datasets 'except the test' set were used for training.\n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #lt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Lithuanian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Lithuanian using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Lithuanian test data of Common Voice.\n\n\n\n\nTest Result: 35.87 %",
"## Training\n\nThe Common Voice datasets 'except the test' set were used for training.\n\nThe script used for training can be found here"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Turkish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("dundar/wav2vec2-large-xlsr-53-turkish")
model = Wav2Vec2ForCTC.from_pretrained("dundar/wav2vec2-large-xlsr-53-turkish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "tr", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("dundar/wav2vec2-large-xlsr-53-turkish")
model = Wav2Vec2ForCTC.from_pretrained("dundar/wav2vec2-large-xlsr-53-turkish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\'\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the test set and collect predictions
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 24.86 %
## Training
The Common Voice datasets, except for the `test` set, were used for training.
The script used for training can be found [here](https://github.com/ebdundar/)
|
{"language": "tr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Turkish by Enes Burak Dundar", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice tr", "type": "common_voice", "args": "tr"}, "metrics": [{"type": "wer", "value": 24.86, "name": "Test WER"}]}]}]}
|
dundar/wav2vec2-large-xlsr-53-turkish
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Turkish
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Turkish using the Common Voice
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Turkish test data of Common Voice.
Test Result: 24.86 %
## Training
The Common Voice datasets 'except the test' set were used for training.
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-53-Turkish\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Turkish using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\n\nTest Result: 24.86 %",
"## Training\n\nThe Common Voice datasets 'except the test' set were used for training.\n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Turkish\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Turkish using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\n\n\n\nTest Result: 24.86 %",
"## Training\n\nThe Common Voice datasets 'except the test' set were used for training.\n\nThe script used for training can be found here"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indic-transformers-te-distilbert
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2940
- Precision: 0.5657
- Recall: 0.6486
- F1: 0.6043
- Accuracy: 0.9049
## Model description
More information needed
## Intended uses & limitations
More information needed
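A minimal usage sketch, assuming the fine-tuned checkpoint loads with the standard token-classification pipeline and that its config carries the wikiann-style entity labels; the Telugu sentence is only an illustration:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint for Telugu NER (wikiann labels assumed)
ner = pipeline(
    "token-classification",
    model="durgaamma2005/indic-transformers-te-distilbert",
    aggregation_strategy="simple",  # merge word pieces into whole-entity spans
)
print(ner("హైదరాబాద్ తెలంగాణ రాజధాని"))  # "Hyderabad is the capital of Telangana"
```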
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 125 | 0.3629 | 0.4855 | 0.5287 | 0.5062 | 0.8826 |
| No log | 2.0 | 250 | 0.3032 | 0.5446 | 0.6303 | 0.5843 | 0.9002 |
| No log | 3.0 | 375 | 0.2940 | 0.5657 | 0.6486 | 0.6043 | 0.9049 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "indic-transformers-te-distilbert", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "te"}, "metrics": [{"type": "precision", "value": 0.5657225853304285, "name": "Precision"}, {"type": "recall", "value": 0.6486261448792673, "name": "Recall"}, {"type": "f1", "value": 0.604344453064391, "name": "F1"}, {"type": "accuracy", "value": 0.9049186160277506, "name": "Accuracy"}]}]}]}
|
durgaamma2005/indic-transformers-te-distilbert
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-wikiann #model-index #autotrain_compatible #endpoints_compatible #region-us
|
indic-transformers-te-distilbert
================================
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2940
* Precision: 0.5657
* Recall: 0.6486
* F1: 0.6043
* Accuracy: 0.9049
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-wikiann #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
# Bertinho-gl-base-cased
A pre-trained BERT model for Galician (12 layers, cased). Trained on Wikipedia.
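A minimal fill-mask sketch with the Transformers `pipeline`, using the example sentence from the card's widget; the top predictions depend on the checkpoint:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dvilares/bertinho-gl-base-cased")
print(fill_mask("As filloas son un [MASK] típico do entroido en Galicia"))
```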
|
{"language": "gl", "widget": [{"text": "As filloas son un [MASK] t\u00edpico do entroido en Galicia "}]}
|
dvilares/bertinho-gl-base-cased
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"gl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"gl"
] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #gl #autotrain_compatible #endpoints_compatible #region-us
|
# Bertinho-gl-base-cased
A pre-trained BERT model for Galician (12 layers, cased). Trained on Wikipedia.
|
[
"# Bertinho-gl-base-cased\n\nA pre-trained BERT model for Galician (12layers, cased). Trained on Wikipedia"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #gl #autotrain_compatible #endpoints_compatible #region-us \n",
"# Bertinho-gl-base-cased\n\nA pre-trained BERT model for Galician (12layers, cased). Trained on Wikipedia"
] |