pipeline_tag
stringclasses 48
values | library_name
stringclasses 198
values | text
stringlengths 1
900k
| metadata
stringlengths 2
438k
| id
stringlengths 5
122
| last_modified
null | tags
listlengths 1
1.84k
| sha
null | created_at
stringlengths 25
25
| arxiv
listlengths 0
201
| languages
listlengths 0
1.83k
| tags_str
stringlengths 17
9.34k
| text_str
stringlengths 0
389k
| text_lists
listlengths 0
722
| processed_texts
listlengths 1
723
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-generation
|
transformers
|
# German GPT-2 model
In this repository we release (yet another) GPT-2 model that was trained on various German texts.
The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model 😉
**Note**: The model was initially released under an anonymous alias (`anonymous-german-nlp/german-gpt2`) so we now "de-anonymize" it.
More details about GPT-2 can be found in the great [Hugging Face](https://huggingface.co/transformers/model_doc/gpt2.html) documentation.
## German GPT-2 fine-tuned on Faust I and II
We fine-tuned our German GPT-2 model on "Faust I and II" by Johann Wolfgang von Goethe. These texts can be obtained from [Deutsches Textarchiv (DTA)](http://www.deutschestextarchiv.de/book/show/goethe_faust01_1808). We used the "normalized" version of both texts (to avoid out-of-vocabulary problems with e.g. "ſ").
Fine-tuning was done for 100 epochs, using a batch size of 4 with half precision on an RTX 3090. Total time was around 12 minutes (it is really fast!).
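We did not ship the exact training script with this card, but a run with the settings above could look roughly like the following Trainer-based sketch (`faust.txt` and the output directory are placeholders, not the actual files we used):
```python
# Rough sketch only: fine-tune dbmdz/german-gpt2 on a plain-text file with
# the settings described above (100 epochs, batch size 4, fp16).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("dbmdz/german-gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("dbmdz/german-gpt2")

# "faust.txt" is a placeholder for the normalized DTA download
dataset = load_dataset("text", data_files={"train": "faust.txt"})
tokenized = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True),
                        batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="german-gpt2-faust",
                           num_train_epochs=100,
                           per_device_train_batch_size=4,
                           fp16=True),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```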
We also open source this fine-tuned model. Text can be generated with:
```python
from transformers import pipeline

# Load the fine-tuned model and its tokenizer into a text-generation pipeline
pipe = pipeline('text-generation', model="dbmdz/german-gpt2-faust",
                tokenizer="dbmdz/german-gpt2-faust")

# Generate a continuation of up to 100 tokens (prompt included)
text = pipe("Schon um die Liebe", max_length=100)[0]["generated_text"]
print(text)
```
and could output:
```
Schon um die Liebe bitte ich, Herr! Wer mag sich die dreifach Ermächtigen?
Sei mir ein Held!
Und daß die Stunde kommt spreche ich nicht aus.
Faust (schaudernd).
Den schönen Boten finde' ich verwirrend;
```
# License
All models are licensed under [MIT](LICENSE).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our GPT-2 models, just open an issue
[here](https://github.com/stefan-it/german-gpt/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "de", "license": "mit", "widget": [{"text": "Schon um die Liebe"}]}
|
dbmdz/german-gpt2-faust
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #jax #safetensors #gpt2 #text-generation #de #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# German GPT-2 model
In this repository we release (yet another) GPT-2 model that was trained on various German texts.
The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model
Note: The model was initially released under an anonymous alias ('anonymous-german-nlp/german-gpt2') so we now "de-anonymize" it.
More details about GPT-2 can be found in the great Hugging Face documentation.
## German GPT-2 fine-tuned on Faust I and II
We fine-tuned our German GPT-2 model on "Faust I and II" by Johann Wolfgang von Goethe. These texts can be obtained from Deutsches Textarchiv (DTA). We used the "normalized" version of both texts (to avoid out-of-vocabulary problems with e.g. "ſ").
Fine-tuning was done for 100 epochs, using a batch size of 4 with half precision on an RTX 3090. Total time was around 12 minutes (it is really fast!).
We also open source this fine-tuned model. Text can be generated with:
and could output:
# License
All models are licensed under MIT.
# Huggingface model hub
All models are available on the Huggingface model hub.
# Contact (Bugs, Feedback, Contribution and more)
For questions about our GPT-2 models, just open an issue
here
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[
"# German GPT-2 model\n\nIn this repository we release (yet another) GPT-2 model, that was trained on various texts for German.\n\nThe model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or \"dangerous\" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model \n\nNote: The model was initially released under an anonymous alias ('anonymous-german-nlp/german-gpt2') so we now \"de-anonymize\" it.\n\nMore details about GPT-2 can be found in the great Hugging Face documentation.",
"## German GPT-2 fine-tuned on Faust I and II\n\nWe fine-tuned our German GPT-2 model on \"Faust I and II\" from Johann Wolfgang Goethe. These texts can be obtained from Deutsches Textarchiv (DTA). We use the \"normalized\" version of both texts (to avoid out-of-vocabulary problems with e.g. \"ſ\")\n\nFine-Tuning was done for 100 epochs, using a batch size of 4 with half precision on a RTX 3090. Total time was around 12 minutes (it is really fast!).\n\nWe also open source this fine-tuned model. Text can be generated with:\n\n\n\nand could output:",
"# License\n\nAll models are licensed under MIT.",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our BERT models just open an issue\nhere",
"# Acknowledgments\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #gpt2 #text-generation #de #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# German GPT-2 model\n\nIn this repository we release (yet another) GPT-2 model, that was trained on various texts for German.\n\nThe model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or \"dangerous\" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model \n\nNote: The model was initially released under an anonymous alias ('anonymous-german-nlp/german-gpt2') so we now \"de-anonymize\" it.\n\nMore details about GPT-2 can be found in the great Hugging Face documentation.",
"## German GPT-2 fine-tuned on Faust I and II\n\nWe fine-tuned our German GPT-2 model on \"Faust I and II\" from Johann Wolfgang Goethe. These texts can be obtained from Deutsches Textarchiv (DTA). We use the \"normalized\" version of both texts (to avoid out-of-vocabulary problems with e.g. \"ſ\")\n\nFine-Tuning was done for 100 epochs, using a batch size of 4 with half precision on a RTX 3090. Total time was around 12 minutes (it is really fast!).\n\nWe also open source this fine-tuned model. Text can be generated with:\n\n\n\nand could output:",
"# License\n\nAll models are licensed under MIT.",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our BERT models just open an issue\nhere",
"# Acknowledgments\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
text-generation
|
transformers
|
# German GPT-2 model
In this repository we release (yet another) GPT-2 model that was trained on various German texts.
The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model 😉
**Note**: The model was initially released under an anonymous alias (`anonymous-german-nlp/german-gpt2`) so we now "de-anonymize" it.
More details about GPT-2 can be found in the great [Hugging Face](https://huggingface.co/transformers/model_doc/gpt2.html) documentation.
# Changelog
16.08.2021: Public release of a re-trained version of our German GPT-2 model with better results.
15.11.2020: Initial release. Please use the tag `v1.0` for [this older version](https://huggingface.co/dbmdz/german-gpt2/tree/v1.0).
# Training corpora
We use pretty much the same corpora as used for training the DBMDZ BERT model, which can be found in [this repository](https://github.com/dbmdz/berts).
Thanks to the awesome Hugging Face team, it is possible to create byte-level BPE vocabularies with their [Tokenizers](https://github.com/huggingface/tokenizers) library.
With this library we created a 50K byte-level BPE vocab based on the training corpora.
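As an illustration, creating such a vocab with the Tokenizers library could look like the following sketch (the corpus file names are placeholders, not our actual corpus files):
```python
# Sketch: train a 50K byte-level BPE vocabulary with the Tokenizers library.
# The corpus file list is a placeholder for the actual training corpora.
import os

from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=["german_corpus_part1.txt", "german_corpus_part2.txt"],
                vocab_size=50000,
                special_tokens=["<|endoftext|>"])

os.makedirs("german-gpt2-vocab", exist_ok=True)
tokenizer.save_model("german-gpt2-vocab")
```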
After creating the vocab, we trained the German GPT-2 model on a v3-8 TPU over the complete training corpus for 20 epochs. All hyperparameters
can be found in the official JAX/FLAX language-modeling example [here](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/README.md)
in the Transformers repository.
# Using the model
The model itself can be used in this way:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

# AutoModelWithLMHead is deprecated in recent Transformers releases;
# AutoModelForCausalLM is the drop-in replacement for GPT-2 checkpoints
tokenizer = AutoTokenizer.from_pretrained("dbmdz/german-gpt2")
model = AutoModelWithLMHead.from_pretrained("dbmdz/german-gpt2")
```
However, text generation is a bit more interesting, so here's an example that shows how to use the great Transformers *Pipelines* for generating text:
```python
from transformers import pipeline

# Build a text-generation pipeline backed by the German GPT-2 model
pipe = pipeline('text-generation', model="dbmdz/german-gpt2",
                tokenizer="dbmdz/german-gpt2")

text = pipe("Der Sinn des Lebens ist es", max_length=100)[0]["generated_text"]
print(text)
```
This could output this beautiful text:
```
Der Sinn des Lebens ist es, im Geist zu verweilen, aber nicht in der Welt zu sein, sondern ganz im Geist zu leben.
Die Menschen beginnen, sich nicht nach der Natur und nach der Welt zu richten, sondern nach der Seele,'
```
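If you need more control over decoding than the pipeline exposes, you can also call `generate()` directly; the sampling parameters below are illustrative, not the settings used for the sample above:
```python
# Sketch: manual generation with explicit sampling parameters.
# top_k/top_p values are illustrative defaults, not tuned settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/german-gpt2")
model = AutoModelForCausalLM.from_pretrained("dbmdz/german-gpt2")

inputs = tokenizer("Der Sinn des Lebens ist es", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True,
                         top_k=50, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```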
# License
All models are licensed under [MIT](LICENSE).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our GPT-2 models, just open an issue
[here](https://github.com/stefan-it/german-gpt/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "de", "license": "mit", "widget": [{"text": "Heute ist sehr sch\u00f6nes Wetter in"}]}
|
dbmdz/german-gpt2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"onnx",
"safetensors",
"gpt2",
"text-generation",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #jax #onnx #safetensors #gpt2 #text-generation #de #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# German GPT-2 model
In this repository we release (yet another) GPT-2 model that was trained on various German texts.
The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model
Note: The model was initially released under an anonymous alias ('anonymous-german-nlp/german-gpt2') so we now "de-anonymize" it.
More details about GPT-2 can be found in the great Hugging Face documentation.
# Changelog
16.08.2021: Public release of a re-trained version of our German GPT-2 model with better results.
15.11.2020: Initial release. Please use the tag 'v1.0' for this older version.
# Training corpora
We use pretty much the same corpora as used for training the DBMDZ BERT model, which can be found in this repository.
Thanks to the awesome Hugging Face team, it is possible to create byte-level BPE vocabularies with their Tokenizers library.
With this library we created a 50K byte-level BPE vocab based on the training corpora.
After creating the vocab, we trained the German GPT-2 model on a v3-8 TPU over the complete training corpus for 20 epochs. All hyperparameters
can be found in the official JAX/FLAX language-modeling example here
in the Transformers repository.
# Using the model
The model itself can be used in this way:
However, text generation is a bit more interesting, so here's an example that shows how to use the great Transformers *Pipelines* for generating text:
This could output this beautiful text:
# License
All models are licensed under MIT.
# Huggingface model hub
All models are available on the Huggingface model hub.
# Contact (Bugs, Feedback, Contribution and more)
For questions about our GPT-2 models, just open an issue
here
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[
"# German GPT-2 model\n\nIn this repository we release (yet another) GPT-2 model, that was trained on various texts for German.\n\nThe model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or \"dangerous\" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model \n\nNote: The model was initially released under an anonymous alias ('anonymous-german-nlp/german-gpt2') so we now \"de-anonymize\" it.\n\nMore details about GPT-2 can be found in the great Hugging Face documentation.",
"# Changelog\n\n16.08.2021: Public release of re-trained version of our German GPT-2 model with better results.\n\n15.11.2020: Initial release. Please use the tag 'v1.0' for this older version.",
"# Training corpora\n\nWe use pretty much the same corpora as used for training the DBMDZ BERT model, that can be found in this repository.\n\nThanks to the awesome Hugging Face team, it is possible to create byte-level BPE with their awesome Tokenizers library.\n\nWith the previously mentioned awesome Tokenizers library we created a 50K byte-level BPE vocab based on the training corpora.\n\nAfter creating the vocab, we could train the GPT-2 for German on a v3-8 TPU over the complete training corpus for 20 epochs. All hyperparameters\ncan be found in the official JAX/FLAX documentation here\nfrom Transformers.",
"# Using the model\n\nThe model itself can be used in this way:\n\n\n\nHowever, text generation is a bit more interesting, so here's an example that shows how to use the great Transformers *Pipelines* for generating text:\n\n\n\nThis could output this beautiful text:",
"# License\n\nAll models are licensed under MIT.",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our BERT models just open an issue\nhere",
"# Acknowledgments\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #onnx #safetensors #gpt2 #text-generation #de #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# German GPT-2 model\n\nIn this repository we release (yet another) GPT-2 model, that was trained on various texts for German.\n\nThe model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or \"dangerous\" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model \n\nNote: The model was initially released under an anonymous alias ('anonymous-german-nlp/german-gpt2') so we now \"de-anonymize\" it.\n\nMore details about GPT-2 can be found in the great Hugging Face documentation.",
"# Changelog\n\n16.08.2021: Public release of re-trained version of our German GPT-2 model with better results.\n\n15.11.2020: Initial release. Please use the tag 'v1.0' for this older version.",
"# Training corpora\n\nWe use pretty much the same corpora as used for training the DBMDZ BERT model, that can be found in this repository.\n\nThanks to the awesome Hugging Face team, it is possible to create byte-level BPE with their awesome Tokenizers library.\n\nWith the previously mentioned awesome Tokenizers library we created a 50K byte-level BPE vocab based on the training corpora.\n\nAfter creating the vocab, we could train the GPT-2 for German on a v3-8 TPU over the complete training corpus for 20 epochs. All hyperparameters\ncan be found in the official JAX/FLAX documentation here\nfrom Transformers.",
"# Using the model\n\nThe model itself can be used in this way:\n\n\n\nHowever, text generation is a bit more interesting, so here's an example that shows how to use the great Transformers *Pipelines* for generating text:\n\n\n\nThis could output this beautiful text:",
"# License\n\nAll models are licensed under MIT.",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our BERT models just open an issue\nhere",
"# Acknowledgments\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
text2text-generation
|
transformers
|
# T5 Base Model for Named Entity Recognition (NER, CoNLL-2003)
In this repository, we open source a T5 Base model that was fine-tuned on the official CoNLL-2003 NER dataset.
We use the great [TANL library](https://github.com/amazon-research/tanl) from Amazon for fine-tuning the model.
The exact fine-tuning approach is presented in the "TANL: Structured Prediction as Translation between Augmented Natural Languages"
paper by Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang and Stefano Soatto.
# Fine-Tuning
We use the same hyper-parameter settings as in the official implementation, with one minor change: instead of using 8 V100 GPUs, we trained the model
on one V100 GPU and used gradient accumulation (a per-device batch size of 4 with 8 accumulation steps keeps the effective batch size at 32). The slightly modified configuration file (`config.ini`) then looks like:
```ini
[conll03]
datasets = conll03
model_name_or_path = t5-base
num_train_epochs = 10
max_seq_length = 256
max_seq_length_eval = 512
per_device_train_batch_size = 4
per_device_eval_batch_size = 4
do_train = True
do_eval = True
do_predict = True
gradient_accumulation_steps = 8
```
It took around 2 hours to fine-tune that model on the 14,041 training sentences of the CoNLL-2003 dataset.
# Evaluation
On the development set, the following evaluation results could be achieved:
```json
{
"entity_precision": 0.9536446086664427,
"entity_recall": 0.9555705149781218,
"entity_f1": 0.9546065904505716,
"entity_precision_no_type": 0.9773261672824992,
"entity_recall_no_type": 0.9792998990238977,
"entity_f1_no_type": 0.9783120376597176
}
```
The evaluation results on the test set look like:
```json
{
"entity_precision": 0.912182296231376,
"entity_recall": 0.9213881019830028,
"entity_f1": 0.9167620893155995,
"entity_precision_no_type": 0.953900087642419,
"entity_recall_no_type": 0.9635269121813032,
"entity_f1_no_type": 0.9586893332158901
}
```
To summarize: this model achieves an F1-score of 95.46% on the development set and 91.68% on the test set. The paper reported an F1-score of 91.7%.
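For inference, the fine-tuned model can be queried like any other text2text model; below is a minimal sketch using the widget example from this card. Note that the raw output is in TANL's augmented natural language format, and the TANL library provides the utilities for parsing it back into entity spans:
```python
# Minimal sketch: raw text2text inference with this checkpoint. The generated
# string uses TANL's augmented format and still needs to be parsed into spans.
from transformers import pipeline

ner = pipeline("text2text-generation", model="dbmdz/t5-base-conll03-english")
result = ner("My name is Clara Clever and I live in Berkeley , California .")
print(result[0]["generated_text"])
```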
# License
The model is licensed under [MIT](https://choosealicense.com/licenses/mit/).
# Acknowledgments
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "en", "license": "mit", "datasets": ["conll2003"], "widget": [{"text": "My name is Clara Clever and I live in Berkeley , California ."}]}
|
dbmdz/t5-base-conll03-english
| null |
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"en",
"dataset:conll2003",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #t5 #text2text-generation #en #dataset-conll2003 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# T5 Base Model for Named Entity Recognition (NER, CoNLL-2003)
In this repository, we open source a T5 Base model that was fine-tuned on the official CoNLL-2003 NER dataset.
We use the great TANL library from Amazon for fine-tuning the model.
The exact fine-tuning approach is presented in the "TANL: Structured Prediction as Translation between Augmented Natural Languages"
paper by Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang and Stefano Soatto.
# Fine-Tuning
We use the same hyper-parameter settings as in the official implementation, with one minor change: instead of using 8 V100 GPUs, we trained the model
on one V100 GPU and used gradient accumulation (a per-device batch size of 4 with 8 accumulation steps keeps the effective batch size at 32). The slightly modified configuration file ('URL') then looks like:
It took around 2 hours to fine-tune that model on the 14,041 training sentences of the CoNLL-2003 dataset.
# Evaluation
On the development set, the following evaluation results could be achieved:
The evaluation results on the test set look like:
To summarize: this model achieves an F1-score of 95.46% on the development set and 91.68% on the test set. The paper reported an F1-score of 91.7%.
# License
The model is licensed under MIT.
# Acknowledgments
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[
"# T5 Base Model for Named Entity Recognition (NER, CoNLL-2003)\n\nIn this repository, we open source a T5 Base model, that was fine-tuned on the official CoNLL-2003 NER dataset.\n\nWe use the great TANL library from Amazon for fine-tuning the model.\n\nThe exact approach of fine-tuning is presented in the \"TANL: Structured Prediction as Translation between Augmented Natural Languages\"\npaper from Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang and Stefano Soatto.",
"# Fine-Tuning\n\nWe use the same hyper-parameter settings as used in the official implementation with one minor change. Instead of using 8 V100 GPUs, we train the model\non one V100 GPU and used gradient accumulation. The slighly modified configuration file ('URL') then looks like:\n\n\n\nIt took around 2 hours to fine-tune that model on the 14,041 training sentences of CoNLL-2003 dataset.",
"# Evaluation\n\nOn the development set, the following evaluation results could be achieved:\n\n\n\nThe evaluation results on the test set looks like:\n\n\n\nTo summarize: On the development set, 95.46% F1-Score and 91.68% on test set were achieved with this model. The paper reported a F1-Score of 91.7%.",
"# License\n\nThe models is licensed under MIT.",
"# Acknowledgments\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
[
"TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #en #dataset-conll2003 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# T5 Base Model for Named Entity Recognition (NER, CoNLL-2003)\n\nIn this repository, we open source a T5 Base model, that was fine-tuned on the official CoNLL-2003 NER dataset.\n\nWe use the great TANL library from Amazon for fine-tuning the model.\n\nThe exact approach of fine-tuning is presented in the \"TANL: Structured Prediction as Translation between Augmented Natural Languages\"\npaper from Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, Rishita Anubhai, Cicero Nogueira dos Santos, Bing Xiang and Stefano Soatto.",
"# Fine-Tuning\n\nWe use the same hyper-parameter settings as used in the official implementation with one minor change. Instead of using 8 V100 GPUs, we train the model\non one V100 GPU and used gradient accumulation. The slighly modified configuration file ('URL') then looks like:\n\n\n\nIt took around 2 hours to fine-tune that model on the 14,041 training sentences of CoNLL-2003 dataset.",
"# Evaluation\n\nOn the development set, the following evaluation results could be achieved:\n\n\n\nThe evaluation results on the test set looks like:\n\n\n\nTo summarize: On the development set, 95.46% F1-Score and 91.68% on test set were achieved with this model. The paper reported a F1-Score of 91.7%.",
"# License\n\nThe models is licensed under MIT.",
"# Acknowledgments\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
fill-mask
|
transformers
|
Masked Language Model trained on the articles and talks of Noam Chomsky.
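A minimal usage sketch (the checkpoint is a RoBERTa-style masked LM, so the mask token is `<mask>`; the example sentence is illustrative):
```python
# Minimal sketch: query the masked LM through the fill-mask pipeline.
# The example sentence is illustrative.
from transformers import pipeline

fill = pipeline("fill-mask", model="dbragdon/noam-masked-lm")
for prediction in fill("Language is a <mask> of the human mind."):
    print(prediction["token_str"], prediction["score"])
```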
|
{}
|
dbragdon/noam-masked-lm
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
Masked Language Model trained on the articles and talks of Noam Chomsky.
|
[] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
Language model fine-tuned on the articles and speeches of Noam Chomsky.
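A minimal usage sketch (the prompt is illustrative):
```python
# Minimal sketch: generate text with the fine-tuned GPT-2 checkpoint.
# The prompt is illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="dbragdon/noamlm")
print(generator("The role of the media is", max_length=50)[0]["generated_text"])
```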
|
{}
|
dbragdon/noamlm
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Language model fine-tuned on the articles and speeches of Noam Chomsky.
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2781
- Precision: 0.8121
- Recall: 0.8302
- F1: 0.8210
- Accuracy: 0.9204
## Model description
More information needed
## Intended uses & limitations
More information needed
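As a starting point, the checkpoint can be used through the token-classification pipeline; a minimal sketch (the example sentence is illustrative, and `aggregation_strategy` is a suggested convenience, not part of the training setup):
```python
# Minimal sketch: run NER with the fine-tuned checkpoint. The example
# sentence is illustrative; aggregation_strategy="simple" merges word
# pieces into whole entity spans.
from transformers import pipeline

ner = pipeline("token-classification",
               model="dbsamu/distilbert-base-uncased-finetuned-ner",
               aggregation_strategy="simple")
print(ner("George Washington lived in Virginia."))
```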
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3504 | 1.0 | 1250 | 0.2922 | 0.7930 | 0.8075 | 0.8002 | 0.9115 |
| 0.2353 | 2.0 | 2500 | 0.2711 | 0.8127 | 0.8264 | 0.8195 | 0.9196 |
| 0.1745 | 3.0 | 3750 | 0.2781 | 0.8121 | 0.8302 | 0.8210 | 0.9204 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "en"}, "metrics": [{"type": "precision", "value": 0.8120642485217545, "name": "Precision"}, {"type": "recall", "value": 0.830235495804385, "name": "Recall"}, {"type": "f1", "value": 0.8210493441599, "name": "F1"}, {"type": "accuracy", "value": 0.9203828724683252, "name": "Accuracy"}]}]}]}
|
dbsamu/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-wikiann #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-ner
=====================================
This model is a fine-tuned version of distilbert-base-uncased on the wikiann dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2781
* Precision: 0.8121
* Recall: 0.8302
* F1: 0.8210
* Accuracy: 0.9204
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-wikiann #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-discriminator-finetuned-ner
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3685
- Precision: 0.7331
- Recall: 0.7543
- F1: 0.7435
- Accuracy: 0.8883
## Model description
More information needed
## Intended uses & limitations
More information needed
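As a starting point, the checkpoint can be loaded directly; a minimal sketch:
```python
# Minimal sketch: load the fine-tuned checkpoint for token classification.
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "dbsamu/electra-small-discriminator-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)
```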
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5465 | 1.0 | 1250 | 0.4158 | 0.6932 | 0.7201 | 0.7064 | 0.8735 |
| 0.4037 | 2.0 | 2500 | 0.3817 | 0.7191 | 0.7470 | 0.7328 | 0.8828 |
| 0.3606 | 3.0 | 3750 | 0.3685 | 0.7331 | 0.7543 | 0.7435 | 0.8883 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "electra-small-discriminator-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "en"}, "metrics": [{"type": "precision", "value": 0.7330965535385425, "name": "Precision"}, {"type": "recall", "value": 0.7542632861138681, "name": "Recall"}, {"type": "f1", "value": 0.7435293071244329, "name": "F1"}, {"type": "accuracy", "value": 0.8883011190233978, "name": "Accuracy"}]}]}]}
|
dbsamu/electra-small-discriminator-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #electra #token-classification #generated_from_trainer #dataset-wikiann #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
electra-small-discriminator-finetuned-ner
=========================================
This model is a fine-tuned version of google/electra-small-discriminator on the wikiann dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3685
* Precision: 0.7331
* Recall: 0.7543
* F1: 0.7435
* Accuracy: 0.8883
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #electra #token-classification #generated_from_trainer #dataset-wikiann #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
# BETO: Spanish BERT
BETO is a [BERT model](https://github.com/google-research/bert) trained on a [big Spanish corpus](https://github.com/josecannete/spanish-corpora). BETO is similar in size to BERT-Base and was trained with the Whole Word Masking technique. Below you will find TensorFlow and PyTorch checkpoints for the uncased and cased versions, as well as some results for Spanish benchmarks comparing BETO with [Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) as well as other (not BERT-based) models.
## Download
| Model | TensorFlow | PyTorch | Vocab & config |
|-|:--------:|:-----:|:----:|
|BETO uncased|[tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/pytorch_weights.tar.gz) | [vocab](./config/uncased_2M/vocab.txt), [config](./config/uncased_2M/config.json) |
|BETO cased| [tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/pytorch_weights.tar.gz) | [vocab](./config/cased_2M/vocab.txt), [config](./config/cased_2M/config.json) |
All models use a vocabulary of about 31k BPE subwords constructed using SentencePiece and were trained for 2M steps.
## Benchmarks
The following table shows some BETO results in the Spanish version of every task.
We compare BETO (cased and uncased) with the Best Multilingual BERT results that
we found in the literature (as of October 2019).
The table also shows some alternative methods for the same tasks (not necessarily BERT-based methods).
References for all methods can be found [here](#references).
|Task | BETO-cased | BETO-uncased | Best Multilingual BERT | Other results |
|-------|--------------:|--------------:|--------------------------:|-------------------------------:|
|[POS](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-1827) | **98.97** | 98.44 | 97.10 [2] | 98.91 [6], 96.71 [3] |
|[NER-C](https://www.kaggle.com/nltkdata/conll-corpora) | [**88.43**](https://github.com/gchaperon/beto-benchmarks/blob/master/conll2002/dev_results_beto-cased_conll2002.txt) | 82.67 | 87.38 [2] | 87.18 [3] |
|[MLDoc](https://github.com/facebookresearch/MLDoc) | [95.60](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-cased_mldoc.txt) | [**96.12**](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-uncased_mldoc.txt) | 95.70 [2] | 88.75 [4] |
|[PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx) | 89.05 | 89.55 | 90.70 [8] | |
|[XNLI](https://github.com/facebookresearch/XNLI) | **82.01** | 80.15 | 78.50 [2] | 80.80 [5], 77.80 [1], 73.15 [4]|
## Example of use
For further details on how to use BETO you can visit the [🤗Huggingface Transformers library](https://github.com/huggingface/transformers), starting with the [Quickstart section](https://huggingface.co/transformers/quickstart.html).
BETO models can be accessed simply as [`'dccuchile/bert-base-spanish-wwm-cased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) and [`'dccuchile/bert-base-spanish-wwm-uncased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) by using the Transformers library.
An example of how to download and use the models on this page can be found in [this colab notebook](https://colab.research.google.com/drive/1pYOYsCU59GBOwztkWCw5PTsqBiJbRy4S?usp=sharing).
(We will soon add a more detailed step-by-step tutorial in Spanish for newcomers 😉)
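For a quick smoke test, the cased model can also be queried through the fill-mask pipeline (the example sentence is illustrative):
```python
# Minimal sketch: fill-mask with BETO cased. The example sentence is
# illustrative; BERT-style models use the [MASK] token.
from transformers import pipeline

fill = pipeline("fill-mask", model="dccuchile/bert-base-spanish-wwm-cased")
for prediction in fill("Todos los caminos llevan a [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```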
## Acknowledgments
We thank [Adereso](https://www.adere.so/) for kindly providing support for training BETO-uncased, and the [Millennium Institute for Foundational Research on Data](https://imfd.cl/en/)
that provided support for training BETO-cased. Also thanks to Google for helping us with the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) program.
## Citation
[Spanish Pre-Trained BERT Model and Evaluation Data](https://users.dcc.uchile.cl/~jperez/papers/pml4dc2020.pdf)
To cite this resource in a publication please use the following:
```
@inproceedings{CaneteCFP2020,
title={Spanish Pre-Trained BERT Model and Evaluation Data},
author={Cañete, José and Chaperon, Gabriel and Fuentes, Rodrigo and Ho, Jou-Hui and Kang, Hojin and Pérez, Jorge},
booktitle={PML4DC at ICLR 2020},
year={2020}
}
```
## License Disclaimer
The license CC BY 4.0 best describes our intentions for our work. However, we are not sure that all the datasets used to train BETO have licenses compatible with CC BY 4.0 (especially for commercial use). Please use at your own discretion and verify that the licenses of the original text resources match your needs.
## References
* [1] [Original Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md)
* [2] [Multilingual BERT on "Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT"](https://arxiv.org/pdf/1904.09077.pdf)
* [3] [Multilingual BERT on "How Multilingual is Multilingual BERT?"](https://arxiv.org/pdf/1906.01502.pdf)
* [4] [LASER](https://arxiv.org/abs/1812.10464)
* [5] [XLM (MLM+TLM)](https://arxiv.org/pdf/1901.07291.pdf)
* [6] [UDPipe on "75 Languages, 1 Model: Parsing Universal Dependencies Universally"](https://arxiv.org/pdf/1904.02099.pdf)
* [7] [Multilingual BERT on "Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation"](https://arxiv.org/pdf/1906.01569.pdf)
* [8] [Multilingual BERT on "PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification"](https://arxiv.org/abs/1908.11828)
|
{"language": ["es"], "tags": ["masked-lm"]}
|
dccuchile/bert-base-spanish-wwm-cased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"masked-lm",
"es",
"arxiv:1904.09077",
"arxiv:1906.01502",
"arxiv:1812.10464",
"arxiv:1901.07291",
"arxiv:1904.02099",
"arxiv:1906.01569",
"arxiv:1908.11828",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.09077",
"1906.01502",
"1812.10464",
"1901.07291",
"1904.02099",
"1906.01569",
"1908.11828"
] |
[
"es"
] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #masked-lm #es #arxiv-1904.09077 #arxiv-1906.01502 #arxiv-1812.10464 #arxiv-1901.07291 #arxiv-1904.02099 #arxiv-1906.01569 #arxiv-1908.11828 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
BETO: Spanish BERT
==================
BETO is a BERT model trained on a big Spanish corpus. BETO is similar in size to BERT-Base and was trained with the Whole Word Masking technique. Below you will find TensorFlow and PyTorch checkpoints for the uncased and cased versions, as well as some results for Spanish benchmarks comparing BETO with Multilingual BERT as well as other (not BERT-based) models.
Download
--------
All models use a vocabulary of about 31k BPE subwords constructed using SentencePiece and were trained for 2M steps.
Benchmarks
----------
The following table shows some BETO results in the Spanish version of every task.
We compare BETO (cased and uncased) with the Best Multilingual BERT results that
we found in the literature (as of October 2019).
The table also shows some alternative methods for the same tasks (not necessarily BERT-based methods).
References for all methods can be found here.
Example of use
--------------
For further details on how to use BETO you can visit the Huggingface Transformers library, starting with the Quickstart section.
BETO models can be accessed simply as 'dccuchile/bert-base-spanish-wwm-cased' and 'dccuchile/bert-base-spanish-wwm-uncased' by using the Transformers library.
An example of how to download and use the models on this page can be found in this colab notebook.
(We will soon add a more detailed step-by-step tutorial in Spanish for newcomers)
Acknowledgments
---------------
We thank Adereso for kindly providing support for training BETO-uncased, and the Millennium Institute for Foundational Research on Data
that provided support for training BETO-cased. Also thanks to Google for helping us with the TensorFlow Research Cloud program.
Spanish Pre-Trained BERT Model and Evaluation Data
To cite this resource in a publication please use the following:
License Disclaimer
------------------
The license CC BY 4.0 best describes our intentions for our work. However, we are not sure that all the datasets used to train BETO have licenses compatible with CC BY 4.0 (especially for commercial use). Please use at your own discretion and verify that the licenses of the original text resources match your needs.
References
----------
* [1] Original Multilingual BERT
* [2] Multilingual BERT on "Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT"
* [3] Multilingual BERT on "How Multilingual is Multilingual BERT?"
* [4] LASER
* [5] XLM (MLM+TLM)
* [6] UDPipe on "75 Languages, 1 Model: Parsing Universal Dependencies Universally"
* [7] Multilingual BERT on "Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation"
* [8] Multilingual BERT on "PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification"
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #masked-lm #es #arxiv-1904.09077 #arxiv-1906.01502 #arxiv-1812.10464 #arxiv-1901.07291 #arxiv-1904.02099 #arxiv-1906.01569 #arxiv-1908.11828 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
fill-mask
|
transformers
|
# BETO: Spanish BERT
BETO is a [BERT model](https://github.com/google-research/bert) trained on a [big Spanish corpus](https://github.com/josecannete/spanish-corpora). BETO is similar in size to BERT-Base and was trained with the Whole Word Masking technique. Below you will find TensorFlow and PyTorch checkpoints for the uncased and cased versions, as well as some results for Spanish benchmarks comparing BETO with [Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) as well as other (not BERT-based) models.
## Download
| Model | TensorFlow | PyTorch | Vocab & config |
|-|:--------:|:-----:|:----:|
|BETO uncased|[tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/uncased_2M/pytorch_weights.tar.gz) | [vocab](./config/uncased_2M/vocab.txt), [config](./config/uncased_2M/config.json) |
|BETO cased| [tensorflow_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/tensorflow_weights.tar.gz) | [pytorch_weights](https://users.dcc.uchile.cl/~jperez/beto/cased_2M/pytorch_weights.tar.gz) | [vocab](./config/cased_2M/vocab.txt), [config](./config/cased_2M/config.json) |
All models use a vocabulary of about 31k BPE subwords constructed using SentencePiece and were trained for 2M steps.
## Benchmarks
The following table shows some BETO results in the Spanish version of every task.
We compare BETO (cased and uncased) with the Best Multilingual BERT results that
we found in the literature (as of October 2019).
The table also shows some alternative methods for the same tasks (not necessarily BERT-based methods).
References for all methods can be found [here](#references).
|Task | BETO-cased | BETO-uncased | Best Multilingual BERT | Other results |
|-------|--------------:|--------------:|--------------------------:|-------------------------------:|
|[POS](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-1827) | **98.97** | 98.44 | 97.10 [2] | 98.91 [6], 96.71 [3] |
|[NER-C](https://www.kaggle.com/nltkdata/conll-corpora) | [**88.43**](https://github.com/gchaperon/beto-benchmarks/blob/master/conll2002/dev_results_beto-cased_conll2002.txt) | 82.67 | 87.38 [2] | 87.18 [3] |
|[MLDoc](https://github.com/facebookresearch/MLDoc) | [95.60](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-cased_mldoc.txt) | [**96.12**](https://github.com/gchaperon/beto-benchmarks/blob/master/MLDoc/dev_results_beto-uncased_mldoc.txt) | 95.70 [2] | 88.75 [4] |
|[PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx) | 89.05 | 89.55 | 90.70 [8] | |
|[XNLI](https://github.com/facebookresearch/XNLI) | **82.01** | 80.15 | 78.50 [2] | 80.80 [5], 77.80 [1], 73.15 [4]|
## Example of use
For further details on how to use BETO you can visit the [🤗Huggingface Transformers library](https://github.com/huggingface/transformers), starting with the [Quickstart section](https://huggingface.co/transformers/quickstart.html).
BETO models can be accessed simply as [`'dccuchile/bert-base-spanish-wwm-cased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) and [`'dccuchile/bert-base-spanish-wwm-uncased'`](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) by using the Transformers library.
An example of how to download and use the models on this page can be found in [this colab notebook](https://colab.research.google.com/drive/1pYOYsCU59GBOwztkWCw5PTsqBiJbRy4S?usp=sharing).
(We will soon add a more detailed step-by-step tutorial in Spanish for newcomers 😉)
## Acknowledgments
We thank [Adereso](https://www.adere.so/) for kindly providing support for training BETO-uncased, and the [Millennium Institute for Foundational Research on Data](https://imfd.cl/en/)
that provided support for training BETO-cased. Also thanks to Google for helping us with the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) program.
## Citation
[Spanish Pre-Trained BERT Model and Evaluation Data](https://users.dcc.uchile.cl/~jperez/papers/pml4dc2020.pdf)
To cite this resource in a publication please use the following:
```
@inproceedings{CaneteCFP2020,
title={Spanish Pre-Trained BERT Model and Evaluation Data},
author={Cañete, José and Chaperon, Gabriel and Fuentes, Rodrigo and Ho, Jou-Hui and Kang, Hojin and Pérez, Jorge},
booktitle={PML4DC at ICLR 2020},
year={2020}
}
```
## License Disclaimer
The license CC BY 4.0 best describes our intentions for our work. However, we are not sure that all the datasets used to train BETO have licenses compatible with CC BY 4.0 (especially for commercial use). Please use at your own discretion and verify that the licenses of the original text resources match your needs.
## References
* [1] [Original Multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md)
* [2] [Multilingual BERT on "Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT"](https://arxiv.org/pdf/1904.09077.pdf)
* [3] [Multilingual BERT on "How Multilingual is Multilingual BERT?"](https://arxiv.org/pdf/1906.01502.pdf)
* [4] [LASER](https://arxiv.org/abs/1812.10464)
* [5] [XLM (MLM+TLM)](https://arxiv.org/pdf/1901.07291.pdf)
* [6] [UDPipe on "75 Languages, 1 Model: Parsing Universal Dependencies Universally"](https://arxiv.org/pdf/1904.02099.pdf)
* [7] [Multilingual BERT on "Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation"](https://arxiv.org/pdf/1906.01569.pdf)
* [8] [Multilingual BERT on "PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification"](https://arxiv.org/abs/1908.11828)
|
{"language": ["es"], "tags": ["masked-lm"]}
|
dccuchile/bert-base-spanish-wwm-uncased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"masked-lm",
"es",
"arxiv:1904.09077",
"arxiv:1906.01502",
"arxiv:1812.10464",
"arxiv:1901.07291",
"arxiv:1904.02099",
"arxiv:1906.01569",
"arxiv:1908.11828",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.09077",
"1906.01502",
"1812.10464",
"1901.07291",
"1904.02099",
"1906.01569",
"1908.11828"
] |
[
"es"
] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #masked-lm #es #arxiv-1904.09077 #arxiv-1906.01502 #arxiv-1812.10464 #arxiv-1901.07291 #arxiv-1904.02099 #arxiv-1906.01569 #arxiv-1908.11828 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
BETO: Spanish BERT
==================
BETO is a BERT model trained on a big Spanish corpus. BETO is similar in size to BERT-Base and was trained with the Whole Word Masking technique. Below you will find TensorFlow and PyTorch checkpoints for the uncased and cased versions, as well as some results for Spanish benchmarks comparing BETO with Multilingual BERT as well as other (not BERT-based) models.
Download
--------
All models use a vocabulary of about 31k BPE subwords constructed using SentencePiece and were trained for 2M steps.
Benchmarks
----------
The following table shows some BETO results in the Spanish version of every task.
We compare BETO (cased and uncased) with the Best Multilingual BERT results that
we found in the literature (as of October 2019).
The table also shows some alternative methods for the same tasks (not necessarily BERT-based methods).
References for all methods can be found here.
Example of use
--------------
For further details on how to use BETO you can visit the Huggingface Transformers library, starting with the Quickstart section.
BETO models can be accessed simply as 'dccuchile/bert-base-spanish-wwm-cased' and 'dccuchile/bert-base-spanish-wwm-uncased' by using the Transformers library.
An example of how to download and use the models on this page can be found in this colab notebook.
(We will soon add a more detailed step-by-step tutorial in Spanish for newcomers)
Acknowledgments
---------------
We thank Adereso for kindly providing support for training BETO-uncased, and the Millennium Institute for Foundational Research on Data
that provided support for training BETO-cased. Also thanks to Google for helping us with the TensorFlow Research Cloud program.
Spanish Pre-Trained BERT Model and Evaluation Data
To cite this resource in a publication please use the following:
License Disclaimer
------------------
The license CC BY 4.0 best describes our intentions for our work. However, we are not sure that all the datasets used to train BETO have licenses compatible with CC BY 4.0 (especially for commercial use). Please use at your own discretion and verify that the licenses of the original text resources match your needs.
References
----------
* [1] Original Multilingual BERT
* [2] Multilingual BERT on "Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT"
* [3] Multilingual BERT on "How Multilingual is Multilingual BERT?"
* [4] LASER
* [5] XLM (MLM+TLM)
* [6] UDPipe on "75 Languages, 1 Model: Parsing Universal Dependencies Universally"
* [7] Multilingual BERT on "Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation"
* [8] Multilingual BERT on "PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification"
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #masked-lm #es #arxiv-1904.09077 #arxiv-1906.01502 #arxiv-1812.10464 #arxiv-1901.07291 #arxiv-1904.02099 #arxiv-1906.01569 #arxiv-1908.11828 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
null | null |
https://teespring.com/dashboard/listings/113925135/edit
|
{}
|
ddddd/EDCLasVegas
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
URL
|
[] |
[
"TAGS\n#region-us \n"
] |
sentence-similarity
|
sentence-transformers
|
# ddobokki/electra-small-nli-sts
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 256-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ddobokki/electra-small-nli-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ddobokki/electra-small-nli-sts')
model = AutoModel.from_pretrained('ddobokki/electra-small-nli-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ddobokki/electra-small-nli-sts)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 9039 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 903,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 904,
"weight_decay": 0.01
}
```
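For readers unfamiliar with these settings, here is a rough, hypothetical sketch of how the reported DataLoader, loss, and fit() parameters map onto a sentence-transformers training loop. The `train_examples` list is a placeholder (real training used NLI-style pairs), and the tiny batch size is only so the sketch runs; the original run used 64:
```python
from sentence_transformers import SentenceTransformer, InputExample, losses, datasets, util

model = SentenceTransformer('ddobokki/electra-small-nli-sts')

# Placeholder data: the actual run used NLI-style sentence pairs
train_examples = [
    InputExample(texts=["A man is eating food.", "Someone is eating."]),
    InputExample(texts=["A dog runs outside.", "An animal is outdoors."]),
]

# NoDuplicatesDataLoader keeps each batch free of duplicate sentences, which
# matters for the in-batch negatives used by MultipleNegativesRankingLoss
train_dataloader = datasets.NoDuplicatesDataLoader(train_examples, batch_size=2)  # 64 in the original run
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=904,
    weight_decay=0.01,
    optimizer_params={'lr': 2e-5},
)
```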
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: ElectraModel
(1): Pooling({'word_embedding_dimension': 256, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "ko"], "pipeline_tag": "sentence-similarity"}
|
ddobokki/electra-small-nli-sts
| null |
[
"sentence-transformers",
"pytorch",
"electra",
"feature-extraction",
"sentence-similarity",
"transformers",
"ko",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #electra #feature-extraction #sentence-similarity #transformers #ko #endpoints_compatible #region-us
|
# ddobokki/electra-small-nli-sts
This is a sentence-transformers model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 9039 with parameters:
Loss:
'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# ddobokki/electra-small-nli-sts\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 9039 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #electra #feature-extraction #sentence-similarity #transformers #ko #endpoints_compatible #region-us \n",
"# ddobokki/electra-small-nli-sts\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader' of length 9039 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity
|
sentence-transformers
|
# ddobokki/klue-roberta-small-nli-sts
This is a Korean Sentence Transformer model.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
You can use this model with the [sentence-transformers](https://www.SBERT.net) library.
```
pip install -U sentence-transformers
```
Usage:
```python
from sentence_transformers import SentenceTransformer
sentences = ["흐르는 강물을 거꾸로 거슬러 오르는", "세월이 가면 가슴이 터질 듯한"]
model = SentenceTransformer('ddobokki/klue-roberta-small-nli-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
If you are using only the transformers library:
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["흐르는 강물을 거꾸로 거슬러 오르는", "세월이 가면 가슴이 터질 듯한"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ddobokki/klue-roberta-small-nli-sts')
model = AutoModel.from_pretrained('ddobokki/klue-roberta-small-nli-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Performance
- Semantic Textual Similarity test set results <br>
| Model | Cosine Pearson | Cosine Spearman | Euclidean Pearson | Euclidean Spearman | Manhattan Pearson | Manhattan Spearman | Dot Pearson | Dot Spearman |
|------------------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| KoSRoBERTa<sup>small</sup> | 84.27 | 84.17 | 83.33 | 83.65 | 83.34 | 83.65 | 82.10 | 81.38 |
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "ko"], "pipeline_tag": "sentence-similarity"}
|
ddobokki/klue-roberta-small-nli-sts
| null |
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"ko",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #transformers #ko #endpoints_compatible #region-us
|
ddobokki/klue-roberta-small-nli-sts
===================================
This is a Korean Sentence Transformer model.
Usage (Sentence-Transformers)
-----------------------------
You can use this model with the sentence-transformers library.
Usage
Usage (HuggingFace Transformers)
--------------------------------
If you are using only the transformers library
Performance
-----------
* Semantic Textual Similarity test set results
Full Model Architecture
-----------------------
Citing & Authors
----------------
|
[] |
[
"TAGS\n#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #transformers #ko #endpoints_compatible #region-us \n"
] |
null |
transformers
|
## EXAMPLE
```python
import requests
import torch
from PIL import Image
from transformers import (
VisionEncoderDecoderModel,
ViTFeatureExtractor,
PreTrainedTokenizerFast,
)
# device setting
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# load feature extractor and tokenizer
encoder_model_name_or_path = "ddobokki/vision-encoder-decoder-vit-gpt2-coco-ko"
feature_extractor = ViTFeatureExtractor.from_pretrained(encoder_model_name_or_path)
tokenizer = PreTrainedTokenizerFast.from_pretrained(encoder_model_name_or_path)
# load model
model = VisionEncoderDecoderModel.from_pretrained(encoder_model_name_or_path)
model.to(device)
# inference
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
with Image.open(requests.get(url, stream=True).raw) as img:
    pixel_values = feature_extractor(images=img, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values.to(device), num_beams=5)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_text)
# >> ['고양이 두마리가 담요 위에 누워 있다.'] ("Two cats are lying on a blanket.")
```
|
{}
|
ddobokki/vision-encoder-decoder-vit-gpt2-coco-ko
| null |
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #vision-encoder-decoder #endpoints_compatible #region-us
|
## EXAMPLE
|
[
"## EXAMPLE"
] |
[
"TAGS\n#transformers #pytorch #vision-encoder-decoder #endpoints_compatible #region-us \n",
"## EXAMPLE"
] |
null |
speechbrain
|
# Conformer for KsponSpeech (with Transformer LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on KsponSpeech (Kr) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | eval clean CER | eval other CER | GPUs |
| :------: | :------------: | :------------: | :---------: |
| 01-23-23 | 7.33% | 7.99% | 6xA100 80GB |
## Pipeline description
This ASR system is composed of 3 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions of KsponSpeech.
- Neural language model (Transformer LM) trained on the train transcriptions of KsponSpeech
- Acoustic model made of a conformer encoder and a joint decoder with CTC +
transformer. Hence, the decoding also incorporates the CTC probabilities.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
!pip install git+https://github.com/speechbrain/speechbrain.git
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in Korean)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="ddwkim/asr-conformer-transformerlm-ksponspeech", savedir="pretrained_models/asr-conformer-transformerlm-ksponspeech", run_opts={"device":"cuda"})
asr_model.transcribe_file("ddwkim/asr-conformer-transformerlm-ksponspeech/record_0_16k.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
## Parallel Inference on a Batch
Please see [this Colab notebook](https://colab.research.google.com/drive/1finp9pfmGRzWHCAPNkqAH2yGH6k_BbPA?usp=sharing) on using the pretrained model.
### Training
The model was trained with SpeechBrain (Commit hash: '4b3bf60').
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install .
```
3. Run Training:
```bash
cd recipes/KsponSpeech/ASR/transformer
python train.py hparams/conformer_medium.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) at the subdirectories.
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
# Citing the model
```bibtex
@misc{returnzero,
title = {ReturnZero Conformer Korean ASR model},
author = {Dongwon Kim and Dongwoo Kim and Jeongkyu Roh},
year = {2021},
howpublished = {\url{https://huggingface.co/ddwkim/asr-conformer-transformerlm-ksponspeech}},
}
```
# Citing KsponSpeech dataset
```bibtex
@Article{app10196936,
AUTHOR = {Bang, Jeong-Uk and Yun, Seung and Kim, Seung-Hi and Choi, Mu-Yeol and Lee, Min-Kyu and Kim, Yeo-Jeong and Kim, Dong-Hyun and Park, Jun and Lee, Young-Jik and Kim, Sang-Hun},
TITLE = {KsponSpeech: Korean Spontaneous Speech Corpus for Automatic Speech Recognition},
JOURNAL = {Applied Sciences},
VOLUME = {10},
YEAR = {2020},
NUMBER = {19},
ARTICLE-NUMBER = {6936},
URL = {https://www.mdpi.com/2076-3417/10/19/6936},
ISSN = {2076-3417},
DOI = {10.3390/app10196936}
}
```
|
{"language": "kr", "license": "apache-2.0", "tags": ["ASR", "CTC", "Attention", "Conformer", "pytorch", "speechbrain"], "datasets": ["ksponspeech"], "metrics": ["wer", "cer"]}
|
ddwkim/asr-conformer-transformerlm-ksponspeech
| null |
[
"speechbrain",
"ASR",
"CTC",
"Attention",
"Conformer",
"pytorch",
"kr",
"dataset:ksponspeech",
"arxiv:2106.04624",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.04624"
] |
[
"kr"
] |
TAGS
#speechbrain #ASR #CTC #Attention #Conformer #pytorch #kr #dataset-ksponspeech #arxiv-2106.04624 #license-apache-2.0 #region-us
|
Conformer for KsponSpeech (with Transformer LM)
===============================================
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on KsponSpeech (Kr) within
SpeechBrain. For a better experience, we encourage you to learn more about
SpeechBrain.
The performance of the model is the following:
Pipeline description
--------------------
This ASR system is composed of 3 different but linked blocks:
* Tokenizer (unigram) that transforms words into subword units and trained with
the train transcriptions of KsponSpeech.
* Neural language model (Transformer LM) trained on the train transcriptions of KsponSpeech
* Acoustic model made of a conformer encoder and a joint decoder with CTC +
transformer. Hence, the decoding also incorporates the CTC probabilities.
Install SpeechBrain
-------------------
First of all, please install SpeechBrain with the following command:
Please note that we encourage you to read our tutorials and learn more about
SpeechBrain.
### Transcribing your own audio files (in Korean)
### Inference on GPU
To perform inference on the GPU, add 'run\_opts={"device":"cuda"}' when calling the 'from\_hparams' method.
Parallel Inference on a Batch
-----------------------------
Please see this Colab notebook on using the pretrained model.
### Training
The model was trained with SpeechBrain (Commit hash: '4b3bf60').
To train it from scratch follow these steps:
1. Clone SpeechBrain:
2. Install it:
3. Run Training:
You can find our training results (models, logs, etc) at the subdirectories.
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
About SpeechBrain
=================
* Website: URL
* Code: URL
* HuggingFace: URL
Citing SpeechBrain
==================
Please, cite SpeechBrain if you use it for your research or business.
Citing the model
================
Citing KsponSpeech dataset
==========================
|
[
"### Transcribing your own audio files (in Korean)",
"### Inference on GPU\n\n\nTo perform inference on the GPU, add 'run\\_opts={\"device\":\"cuda\"}' when calling the 'from\\_hparams' method.\n\n\nParallel Inference on a Batch\n-----------------------------\n\n\nPlease, see this Colab notebook on using the pretrained model",
"### Training\n\n\nThe model was trained with SpeechBrain (Commit hash: '4b3bf60').\nTo train it from scratch follow these steps:\n\n\n1. Clone SpeechBrain:\n2. Install it:\n3. Run Training:\n\n\nYou can find our training results (models, logs, etc) at the subdirectories.",
"### Limitations\n\n\nThe SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.\n\n\nAbout SpeechBrain\n=================\n\n\n* Website: URL\n* Code: URL\n* HuggingFace: URL\n\n\nCiting SpeechBrain\n==================\n\n\nPlease, cite SpeechBrain if you use it for your research or business.\n\n\nCiting the model\n================\n\n\nCiting KsponSpeech dataset\n=========================="
] |
[
"TAGS\n#speechbrain #ASR #CTC #Attention #Conformer #pytorch #kr #dataset-ksponspeech #arxiv-2106.04624 #license-apache-2.0 #region-us \n",
"### Transcribing your own audio files (in Korean)",
"### Inference on GPU\n\n\nTo perform inference on the GPU, add 'run\\_opts={\"device\":\"cuda\"}' when calling the 'from\\_hparams' method.\n\n\nParallel Inference on a Batch\n-----------------------------\n\n\nPlease, see this Colab notebook on using the pretrained model",
"### Training\n\n\nThe model was trained with SpeechBrain (Commit hash: '4b3bf60').\nTo train it from scratch follow these steps:\n\n\n1. Clone SpeechBrain:\n2. Install it:\n3. Run Training:\n\n\nYou can find our training results (models, logs, etc) at the subdirectories.",
"### Limitations\n\n\nThe SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.\n\n\nAbout SpeechBrain\n=================\n\n\n* Website: URL\n* Code: URL\n* HuggingFace: URL\n\n\nCiting SpeechBrain\n==================\n\n\nPlease, cite SpeechBrain if you use it for your research or business.\n\n\nCiting the model\n================\n\n\nCiting KsponSpeech dataset\n=========================="
] |
text-generation
|
transformers
|
# DialoGPT Trained on the Speech of a Game Character
Chat with the model:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch

tokenizer = AutoTokenizer.from_pretrained("dead69/GPT-small-yoda")
model = AutoModelWithLMHead.from_pretrained("dead69/GPT-small-yoda")

# Let's chat for 10 lines
for step in range(10):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response, limiting its total length to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("Master YODA: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"}
|
dead69/GPT-small-yoda
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# DialoGPT Trained on the Speech of a Game Character
Chat with the model:
|
[
"# DialoGPT Trained on the Speech of a Game Character\n\n\nChat with the model:"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# DialoGPT Trained on the Speech of a Game Character\n\n\nChat with the model:"
] |
text2text-generation
|
transformers
|
Pretraining Dataset: [AAAC01](https://huggingface.co/datasets/debatelab/aaac)
Demo: [DeepA2 Demo](https://huggingface.co/spaces/debatelab/deepa2-demo)
Paper: [DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models](https://arxiv.org/abs/2110.01509)
Authors: *Gregor Betz, Kyle Richardson*
## Abstract
In this paper, we present and implement a multi-dimensional, modular framework for performing deep argument analysis (DeepA2) using current pre-trained language models (PTLMs). ArgumentAnalyst -- a T5 model (Raffel et al. 2020) set up and trained within DeepA2 -- reconstructs argumentative texts, which advance an informal argumentation, as valid arguments: It inserts, e.g., missing premises and conclusions, formalizes inferences, and coherently links the logical reconstruction to the source text. We create a synthetic corpus for deep argument analysis, and evaluate ArgumentAnalyst on this new dataset as well as on existing data, specifically EntailmentBank (Dalvi et al. 2021). Our empirical findings vindicate the overall framework and highlight the advantages of a modular design, in particular its ability to emulate established heuristics (such as hermeneutic cycles), to explore the model's uncertainty, to cope with the plurality of correct solutions (underdetermination), and to exploit higher-order evidence.
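The card itself ships no usage snippet; the sketch below shows one plausible way to query the model, reusing the "mode prefix + argument_source:" input format and the max_length of 80 from the widget examples in the metadata below. Nothing here is guaranteed by the authors:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("DebateLabKIT/argument-analyst")
model = AutoModelForSeq2SeqLM.from_pretrained("DebateLabKIT/argument-analyst")

# Mode prefix + source text, mirroring the widget examples in the metadata
prompt = ("argdown_reconstruction: argument_source: If Peter likes fish, "
          "Peter has been to New York. So, Peter has been to New York.")

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=80)  # max_length taken from the inference config
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```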
|
{"language": ["en"], "license": "cc-by-sa-4.0", "datasets": ["debatelab/aaac"], "widget": [{"text": "reason_statements: argument_source: If Peter likes fish, Peter has been to New York. So, Peter has been to New York.", "example_title": "Premise identification"}, {"text": "argdown_reconstruction: argument_source: If Peter likes fish, Peter has been to New York. So, Peter has been to New York.", "example_title": "Argdown reconstruction"}, {"text": "premises_formalized: reason_statements: If Peter likes fish, Peter has been to New York. (ref: (1))", "example_title": "Formalization"}], "inference": {"parameters": {"max_length": 80}}}
|
DebateLabKIT/argument-analyst
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:debatelab/aaac",
"arxiv:2110.01509",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2110.01509"
] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #dataset-debatelab/aaac #arxiv-2110.01509 #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Pretraining Dataset: AAAC01
Demo: DeepA2 Demo
Paper: DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models
Authors: *Gregor Betz, Kyle Richardson*
## Abstract
In this paper, we present and implement a multi-dimensional, modular framework for performing deep argument analysis (DeepA2) using current pre-trained language models (PTLMs). ArgumentAnalyst -- a T5 model (Raffel et al. 2020) set up and trained within DeepA2 -- reconstructs argumentative texts, which advance an informal argumentation, as valid arguments: It inserts, e.g., missing premises and conclusions, formalizes inferences, and coherently links the logical reconstruction to the source text. We create a synthetic corpus for deep argument analysis, and evaluate ArgumentAnalyst on this new dataset as well as on existing data, specifically EntailmentBank (Dalvi et al. 2021). Our empirical findings vindicate the overall framework and highlight the advantages of a modular design, in particular its ability to emulate established heuristics (such as hermeneutic cycles), to explore the model's uncertainty, to cope with the plurality of correct solutions (underdetermination), and to exploit higher-order evidence.
|
[
"## Abstract\n\nIn this paper, we present and implement a multi-dimensional, modular framework for performing deep argument analysis (DeepA2) using current pre-trained language models (PTLMs). ArgumentAnalyst -- a T5 model (Raffel et al. 2020) set up and trained within DeepA2 -- reconstructs argumentative texts, which advance an informal argumentation, as valid arguments: It inserts, e.g., missing premises and conclusions, formalizes inferences, and coherently links the logical reconstruction to the source text. We create a synthetic corpus for deep argument analysis, and evaluate ArgumentAnalyst on this new dataset as well as on existing data, specifically EntailmentBank (Dalvi et al. 2021). Our empirical findings vindicate the overall framework and highlight the advantages of a modular design, in particular its ability to emulate established heuristics (such as hermeneutic cycles), to explore the model's uncertainty, to cope with the plurality of correct solutions (underdetermination), and to exploit higher-order evidence."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #dataset-debatelab/aaac #arxiv-2110.01509 #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Abstract\n\nIn this paper, we present and implement a multi-dimensional, modular framework for performing deep argument analysis (DeepA2) using current pre-trained language models (PTLMs). ArgumentAnalyst -- a T5 model (Raffel et al. 2020) set up and trained within DeepA2 -- reconstructs argumentative texts, which advance an informal argumentation, as valid arguments: It inserts, e.g., missing premises and conclusions, formalizes inferences, and coherently links the logical reconstruction to the source text. We create a synthetic corpus for deep argument analysis, and evaluate ArgumentAnalyst on this new dataset as well as on existing data, specifically EntailmentBank (Dalvi et al. 2021). Our empirical findings vindicate the overall framework and highlight the advantages of a modular design, in particular its ability to emulate established heuristics (such as hermeneutic cycles), to explore the model's uncertainty, to cope with the plurality of correct solutions (underdetermination), and to exploit higher-order evidence."
] |
text-generation
|
transformers
|
# CRiPT Model Large (Critical Thinking Intermediarily Pretrained Transformer)
Large version of the trained model (`SYL01-2020-10-24-72K/gpt2-large-train03-72K`) presented in the paper "Critical Thinking for Language Models" (Betz, Voigt and Richardson 2020). See also:
* [blog entry](https://debatelab.github.io/journal/critical-thinking-language-models.html)
* [GitHub repo](https://github.com/debatelab/aacorpus)
* [paper](https://arxiv.org/pdf/2009.07185)
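
A minimal, hypothetical generation sketch — the card shows no code, and the prompt is purely illustrative:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="DebateLabKIT/cript-large")
out = generator("If it rains, the street gets wet. It rains. Therefore,", max_length=40)
print(out[0]["generated_text"])
```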
|
{"language": "en", "tags": ["gpt2"]}
|
DebateLabKIT/cript-large
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"arxiv:2009.07185",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2009.07185"
] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #en #arxiv-2009.07185 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# CRiPT Model Large (Critical Thinking Intermediarily Pretrained Transformer)
Large version of the trained model ('SYL01-2020-10-24-72K/gpt2-large-train03-72K') presented in the paper "Critical Thinking for Language Models" (Betz, Voigt and Richardson 2020). See also:
* blog entry
* GitHub repo
* paper
|
[
"# CRiPT Model Large (Critical Thinking Intermediarily Pretrained Transformer)\nLarge version of the trained model ('SYL01-2020-10-24-72K/gpt2-large-train03-72K') presented in the paper \"Critical Thinking for Language Models\" (Betz, Voigt and Richardson 2020). See also:\n * blog entry\n * GitHub repo\n * paper"
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #en #arxiv-2009.07185 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# CRiPT Model Large (Critical Thinking Intermediarily Pretrained Transformer)\nLarge version of the trained model ('SYL01-2020-10-24-72K/gpt2-large-train03-72K') presented in the paper \"Critical Thinking for Language Models\" (Betz, Voigt and Richardson 2020). See also:\n * blog entry\n * GitHub repo\n * paper"
] |
text-generation
|
transformers
|
# CRiPT Model Medium (Critical Thinking Intermediarily Pretrained Transformer)
Medium version of the trained model (`SYL01-2020-10-24-72K/gpt2-medium-train03-72K`) presented in the paper "Critical Thinking for Language Models" (Betz, Voigt and Richardson 2020). See also:
* [blog entry](https://debatelab.github.io/journal/critical-thinking-language-models.html)
* [GitHub repo](https://github.com/debatelab/aacorpus)
* [paper](https://arxiv.org/pdf/2009.07185)
|
{"language": "en", "tags": ["gpt2"]}
|
DebateLabKIT/cript-medium
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"arxiv:2009.07185",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2009.07185"
] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #en #arxiv-2009.07185 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# CRiPT Model Medium (Critical Thinking Intermediarily Pretrained Transformer)
Medium version of the trained model ('SYL01-2020-10-24-72K/gpt2-medium-train03-72K') presented in the paper "Critical Thinking for Language Models" (Betz, Voigt and Richardson 2020). See also:
* blog entry
* GitHub repo
* paper
|
[
"# CRiPT Model Medium (Critical Thinking Intermediarily Pretrained Transformer)\nMedium version of the trained model ('SYL01-2020-10-24-72K/gpt2-medium-train03-72K') presented in the paper \"Critical Thinking for Language Models\" (Betz, Voigt and Richardson 2020). See also:\n * blog entry\n * GitHub repo\n * paper"
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #en #arxiv-2009.07185 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# CRiPT Model Medium (Critical Thinking Intermediarily Pretrained Transformer)\nMedium version of the trained model ('SYL01-2020-10-24-72K/gpt2-medium-train03-72K') presented in the paper \"Critical Thinking for Language Models\" (Betz, Voigt and Richardson 2020). See also:\n * blog entry\n * GitHub repo\n * paper"
] |
text-generation
|
transformers
|
# CRiPT Model (Critical Thinking Intermediarily Pretrained Transformer)
Small version of the trained model (`SYL01-2020-10-24-72K/gpt2-small-train03-72K`) presented in the paper "Critical Thinking for Language Models" (Betz, Voigt and Richardson 2020). See also:
* [blog entry](https://debatelab.github.io/journal/critical-thinking-language-models.html)
* [GitHub repo](https://github.com/debatelab/aacorpus)
* [paper](https://arxiv.org/pdf/2009.07185)
|
{"language": "en", "tags": ["gpt2"]}
|
DebateLabKIT/cript
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"arxiv:2009.07185",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2009.07185"
] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #en #arxiv-2009.07185 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# CRiPT Model (Critical Thinking Intermediarily Pretrained Transformer)
Small version of the trained model ('SYL01-2020-10-24-72K/gpt2-small-train03-72K') presented in the paper "Critical Thinking for Language Models" (Betz, Voigt and Richardson 2020). See also:
* blog entry
* GitHub repo
* paper
|
[
"# CRiPT Model (Critical Thinking Intermediarily Pretrained Transformer)\n\nSmall version of the trained model ('SYL01-2020-10-24-72K/gpt2-small-train03-72K') presented in the paper \"Critical Thinking for Language Models\" (Betz, Voigt and Richardson 2020). See also:\n\n * blog entry\n * GitHub repo\n * paper"
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #en #arxiv-2009.07185 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# CRiPT Model (Critical Thinking Intermediarily Pretrained Transformer)\n\nSmall version of the trained model ('SYL01-2020-10-24-72K/gpt2-small-train03-72K') presented in the paper \"Critical Thinking for Language Models\" (Betz, Voigt and Richardson 2020). See also:\n\n * blog entry\n * GitHub repo\n * paper"
] |
text-classification
|
transformers
|
This model has been trained to classify text from different domains. It is currently trained on a fairly small amount of data covering 3 domains: "sports", "healthcare" and "financial". Label_0 represents "financial", Label_1 represents "healthcare" and Label_2 represents "sports". I plan to train it on more domains with more data soon, which will further improve its accuracy.
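A minimal usage sketch (not part of the original card), assuming the model exposes the default LABEL_0/LABEL_1/LABEL_2 output names mapped as described above; the input sentence is illustrative:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="debjyoti007/new_doc_classifier")

# Map the default label names to the domains described above (assumed mapping)
label_map = {"LABEL_0": "financial", "LABEL_1": "healthcare", "LABEL_2": "sports"}

pred = classifier("The central bank raised interest rates by 25 basis points.")[0]
print(label_map.get(pred["label"], pred["label"]), round(pred["score"], 3))
```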
|
{}
|
debjyoti007/new_doc_classifier
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
This model has been trained to classify text from different domains. It is currently trained on a fairly small amount of data covering 3 domains: "sports", "healthcare" and "financial". Label_0 represents "financial", Label_1 represents "healthcare" and Label_2 represents "sports". I plan to train it on more domains with more data soon, which will further improve its accuracy.
|
[] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 38639804
- CO2 Emissions (in grams): 11.98841452241473
## Validation Metrics
- Loss: 0.421400249004364
- Accuracy: 0.86783988957902
- Macro F1: 0.8669477050676501
- Micro F1: 0.86783988957902
- Weighted F1: 0.86694770506765
- Macro Precision: 0.867606300132228
- Micro Precision: 0.86783988957902
- Weighted Precision: 0.8676063001322278
- Macro Recall: 0.86783988957902
- Micro Recall: 0.86783988957902
- Weighted Recall: 0.86783988957902
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/dee4hf/autonlp-shajBERT-38639804
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("dee4hf/autonlp-shajBERT-38639804", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("dee4hf/autonlp-shajBERT-38639804", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "unk", "tags": "autonlp", "datasets": ["dee4hf/autonlp-data-shajBERT"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 11.98841452241473}
|
dee4hf/autonlp-shajBERT-38639804
| null |
[
"transformers",
"pytorch",
"albert",
"text-classification",
"autonlp",
"unk",
"dataset:dee4hf/autonlp-data-shajBERT",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"unk"
] |
TAGS
#transformers #pytorch #albert #text-classification #autonlp #unk #dataset-dee4hf/autonlp-data-shajBERT #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 38639804
- CO2 Emissions (in grams): 11.98841452241473
## Validation Metrics
- Loss: 0.421400249004364
- Accuracy: 0.86783988957902
- Macro F1: 0.8669477050676501
- Micro F1: 0.86783988957902
- Weighted F1: 0.86694770506765
- Macro Precision: 0.867606300132228
- Micro Precision: 0.86783988957902
- Weighted Precision: 0.8676063001322278
- Macro Recall: 0.86783988957902
- Micro Recall: 0.86783988957902
- Weighted Recall: 0.86783988957902
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 38639804\n- CO2 Emissions (in grams): 11.98841452241473",
"## Validation Metrics\n\n- Loss: 0.421400249004364\n- Accuracy: 0.86783988957902\n- Macro F1: 0.8669477050676501\n- Micro F1: 0.86783988957902\n- Weighted F1: 0.86694770506765\n- Macro Precision: 0.867606300132228\n- Micro Precision: 0.86783988957902\n- Weighted Precision: 0.8676063001322278\n- Macro Recall: 0.86783988957902\n- Micro Recall: 0.86783988957902\n- Weighted Recall: 0.86783988957902",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #albert #text-classification #autonlp #unk #dataset-dee4hf/autonlp-data-shajBERT #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 38639804\n- CO2 Emissions (in grams): 11.98841452241473",
"## Validation Metrics\n\n- Loss: 0.421400249004364\n- Accuracy: 0.86783988957902\n- Macro F1: 0.8669477050676501\n- Micro F1: 0.86783988957902\n- Weighted F1: 0.86694770506765\n- Macro Precision: 0.867606300132228\n- Micro Precision: 0.86783988957902\n- Weighted Precision: 0.8676063001322278\n- Macro Recall: 0.86783988957902\n- Micro Recall: 0.86783988957902\n- Weighted Recall: 0.86783988957902",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
null | null |
Trying to create my first BERT model.
|
{}
|
dee4hf/deeBERT
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
Trying to create my first BERT model.
|
[] |
[
"TAGS\n#region-us \n"
] |
text2text-generation
|
transformers
|
## Model description
T5 model trained for Grammar Correction. This model corrects grammatical mistakes in input sentences.
### Dataset Description
The T5-base model has been trained on C4_200M dataset.
### Model in Action 🚀
```
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'deep-learning-analytics/GrammarCorrector'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).to(torch_device)
def correct_grammar(input_text, num_return_sequences, num_beams=5):
    batch = tokenizer([input_text], truncation=True, padding='max_length', max_length=64, return_tensors="pt").to(torch_device)
    translated = model.generate(**batch, max_length=64, num_beams=num_beams, num_return_sequences=num_return_sequences, temperature=1.5)
    tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
    return tgt_text
```
### Example Usage
```
text = 'He are moving here.'
print(correct_grammar(text, num_return_sequences=2))
['He is moving here.', 'He is moving here now.']
```
Another example
```
text = 'Cat drinked milk'
print(correct_grammar(text, num_return_sequences=2))
['Cat drank milk.', 'Cat drink milk.']
```
Model Developed by [Priya-Dwivedi](https://www.linkedin.com/in/priyanka-dwivedi-6864362)
|
{}
|
deep-learning-analytics/GrammarCorrector
| null |
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
## Model description
T5 model trained for Grammar Correction. This model corrects grammatical mistakes in input sentences.
### Dataset Description
The T5-base model has been trained on C4_200M dataset.
### Model in Action
### Example Usage
Another example
Model Developed by Priya-Dwivedi
|
[
"## Model description\nT5 model trained for Grammar Correction. This model corrects grammatical mistakes in input sentences",
"### Dataset Description\nThe T5-base model has been trained on C4_200M dataset.",
"### Model in Action",
"### Example Usage\n\n\nAnother example\n\n\nModel Developed by Priya-Dwivedi"
] |
[
"TAGS\n#transformers #pytorch #tf #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"## Model description\nT5 model trained for Grammar Correction. This model corrects grammatical mistakes in input sentences",
"### Dataset Description\nThe T5-base model has been trained on C4_200M dataset.",
"### Model in Action",
"### Example Usage\n\n\nAnother example\n\n\nModel Developed by Priya-Dwivedi"
] |
question-answering
|
transformers
|
# Model name
Closed Book Trivia-QA T5 base
## Model description
This is a T5-base model trained on the No Context Trivia QA data set. The input to the model is a trivia-type question. The model is tuned to search its memory for the answer and return it. The pretrained model used here was trained on the Common Crawl (C4) data set. The model was trained for 135 epochs using a batch size of 32 and learning rate of 1e-3. Max_input_length is set to 25 and max_output_length is 10. The model attained an EM score of 17 and a Subset Match score of 24.5.
We have written a blog post that covers the training procedure. Please find it [here](https://medium.com/@priya.dwivedi/build-a-trivia-bot-using-t5-transformer-345ff83205b6).
Test the model on Trivia Questions from the websites below:
https://www.triviaquestionss.com/easy-trivia-questions/
https://laffgaff.com/easy-trivia-questions-and-answers/
## Usage
```
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("deep-learning-analytics/triviaqa-t5-base")
model = AutoModelWithLMHead.from_pretrained("deep-learning-analytics/triviaqa-t5-base")
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
text = "Who directed the movie Jaws?"
preprocess_text = text.strip().replace("\n","")
tokenized_text = tokenizer.encode(preprocess_text, return_tensors="pt").to(device)
outs = model.generate(
    tokenized_text,
    max_length=10,
    num_beams=2,
    early_stopping=True
)
dec = [tokenizer.decode(ids) for ids in outs]
print("Predicted Answer: ", dec)
```
|
{"language": "eng", "tags": ["triviaqa", "t5-base", "pytorch", "lm-head", "question-answering", "closed-book", "t5", "pipeline:question-answering"], "datasets": ["triviaqa"], "metrics": [{"EM": 17}, {"Subset match": 24.5}], "widget": [{"text": ["Mount Everest is found in which mountain range?", "None"]}]}
|
deep-learning-analytics/triviaqa-t5-base
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"triviaqa",
"t5-base",
"lm-head",
"question-answering",
"closed-book",
"pipeline:question-answering",
"eng",
"dataset:triviaqa",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"eng"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #triviaqa #t5-base #lm-head #question-answering #closed-book #pipeline-question-answering #eng #dataset-triviaqa #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model name
Closed Book Trivia-QA T5 base
## Model description
This is a T5-base model trained on the No Context Trivia QA data set. The input to the model is a trivia-type question. The model is tuned to search its memory for the answer and return it. The pretrained model used here was trained on the Common Crawl (C4) data set. The model was trained for 135 epochs using a batch size of 32 and learning rate of 1e-3. Max_input_length is set to 25 and max_output_length is 10. The model attained an EM score of 17 and a Subset Match score of 24.5.
We have written a blog post that covers the training procedure. Please find it here.
Test the model on Trivia Questions from the websites below:
URL
URL
## Usage
|
[
"# Model name\nClosed Book Trivia-QA T5 base",
"## Model description\n\nThis is a T5-base model trained on No Context Trivia QA data set. The input to the model is a Trivia type question. The model is tuned to search for the answer in its memory to return it. The pretrained model used here was trained on Common Crawl (C4) data set. The model was trained for 135 epochs using a batch size of 32 and learning rate of 1e-3. Max_input_lngth is set as 25 and max_output_length is 10. Model attained an EM score of 17 and a Subset Match score of 24.5\nWe have written a blog post that covers the training procedure. Please find it here. \n\nTest the model on Trivia Questions from the websites below:\nURL\nURL",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #triviaqa #t5-base #lm-head #question-answering #closed-book #pipeline-question-answering #eng #dataset-triviaqa #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model name\nClosed Book Trivia-QA T5 base",
"## Model description\n\nThis is a T5-base model trained on No Context Trivia QA data set. The input to the model is a Trivia type question. The model is tuned to search for the answer in its memory to return it. The pretrained model used here was trained on Common Crawl (C4) data set. The model was trained for 135 epochs using a batch size of 32 and learning rate of 1e-3. Max_input_lngth is set as 25 and max_output_length is 10. Model attained an EM score of 17 and a Subset Match score of 24.5\nWe have written a blog post that covers the training procedure. Please find it here. \n\nTest the model on Trivia Questions from the websites below:\nURL\nURL",
"## Usage"
] |
summarization
|
transformers
|
# Model name
Wikihow T5-small
## Model description
This is a T5-small model trained on the Wikihow All data set. The model was trained for 3 epochs using a batch size of 16 and learning rate of 3e-4. Max_input_length is set to 512 and max_output_length is 150. The model attained a Rouge1 score of 31.2 and a RougeL score of 24.5.
We have written a blog post that covers the training procedure. Please find it [here](https://medium.com/@priya.dwivedi/fine-tuning-a-t5-transformer-for-any-summarization-task-82334c64c81).
## Usage
```
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("deep-learning-analytics/wikihow-t5-small")
model = AutoModelWithLMHead.from_pretrained("deep-learning-analytics/wikihow-t5-small")
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
text = """
Lack of fluids can lead to dry mouth, which is a leading cause of bad breath. Water
can also dilute any chemicals in your mouth or gut that are causing bad breath., Studies show that
eating 6 ounces of yogurt a day reduces the level of odor-causing compounds in the mouth. In
particular, look for yogurt containing the active bacteria Streptococcus thermophilus or
Lactobacillus bulgaricus., The abrasive nature of fibrous fruits and vegetables helps to clean
teeth, while the vitamins, antioxidants, and acids they contain improve dental health.Foods that can
be particularly helpful include:Apples — Apples contain vitamin C, which is necessary for health
gums, as well as malic acid, which helps to whiten teeth.Carrots — Carrots are rich in vitamin A,
which strengthens tooth enamel.Celery — Chewing celery produces a lot of saliva, which helps to
neutralize bacteria that cause bad breath.Pineapples — Pineapples contain bromelain, an enzyme that
cleans the mouth., These teas have been shown to kill the bacteria that cause bad breath and
plaque., An upset stomach can lead to burping, which contributes to bad breath. Don’t eat foods that
upset your stomach, or if you do, use antacids. If you are lactose intolerant, try lactase tablets.,
They can all cause bad breath. If you do eat them, bring sugar-free gum or a toothbrush and
toothpaste to freshen your mouth afterwards., Diets low in carbohydrates lead to ketosis — a state
in which the body burns primarily fat instead of carbohydrates for energy. This may be good for your
waistline, but it also produces chemicals called ketones, which contribute to bad breath.To stop the
problem, you must change your diet. Or, you can combat the smell in one of these ways:Drink lots of
water to dilute the ketones.Chew sugarless gum or suck on sugarless mints.Chew mint leaves.
"""
preprocess_text = text.strip().replace("\n","")
tokenized_text = tokenizer.encode(preprocess_text, return_tensors="pt").to(device)
summary_ids = model.generate(
    tokenized_text,
    max_length=150,
    num_beams=2,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print ("\n\nSummarized text: \n",output)
```
|
{"language": "eng", "tags": ["wikihow", "t5-small", "pytorch", "lm-head", "seq2seq", "t5", "pipeline:summarization", "summarization"], "datasets": ["Wikihow"], "metrics": [{"Rouge1": 31.2}, {"RougeL": 24.5}], "widget": [{"text": "Lack of fluids can lead to dry mouth, which is a leading cause of bad breath. Water can also dilute any chemicals in your mouth or gut that are causing bad breath., Studies show that eating 6 ounces of yogurt a day reduces the level of odor-causing compounds in the mouth. In particular, look for yogurt containing the active bacteria Streptococcus thermophilus or Lactobacillus bulgaricus., The abrasive nature of fibrous fruits and vegetables helps to clean teeth, while the vitamins, antioxidants, and acids they contain improve dental health.Foods that can be particularly helpful include:Apples \u2014 Apples contain vitamin C, which is necessary for health gums, as well as malic acid, which helps to whiten teeth.Carrots \u2014 Carrots are rich in vitamin A, which strengthens tooth enamel.Celery \u2014 Chewing celery produces a lot of saliva, which helps to neutralize bacteria that cause bad breath.Pineapples \u2014 Pineapples contain bromelain, an enzyme that cleans the mouth., These teas have been shown to kill the bacteria that cause bad breath and plaque., An upset stomach can lead to burping, which contributes to bad breath. Don\u2019t eat foods that upset your stomach, or if you do, use antacids. If you are lactose intolerant, try lactase tablets., They can all cause bad breath. If you do eat them, bring sugar-free gum or a toothbrush and toothpaste to freshen your mouth afterwards., Diets low in carbohydrates lead to ketosis \u2014 a state in which the body burns primarily fat instead of carbohydrates for energy. This may be good for your waistline, but it also produces chemicals called ketones, which contribute to bad breath.To stop the problem, you must change your diet. Or, you can combat the smell in one of these ways:Drink lots of water to dilute the ketones.Chew sugarless gum or suck on sugarless mints.Chew mint leaves."}, {"text": " Bring 1/2 cup water to the boil.Add the fresh or dried rosemary to the water.Remove from the heat. Set aside for 1/2 an hour to infuse. Added flavour can be released by pressing down on the rosemary leaves with a spoon. Add the pieces to the blender or food processor with the elderflower cordial. Blend or process to a pur\u00e9e.,, Add the lemon or lime juice and stir to combine., Add a cover and place in the freezer.After 2 hours, remove from the freezer and break up with a fork. This helps the ice crystals to form properly.Continue doing this every hour until the granita freezes properly. Scoop the granita into dessert bowls and serve. Garnish with a cucumber curl or a small sprig of rosemary."}]}
|
deep-learning-analytics/wikihow-t5-small
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"wikihow",
"t5-small",
"lm-head",
"seq2seq",
"pipeline:summarization",
"summarization",
"eng",
"dataset:Wikihow",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"eng"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #wikihow #t5-small #lm-head #seq2seq #pipeline-summarization #summarization #eng #dataset-Wikihow #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Model name
Wikihow T5-small
## Model description
This is a T5-small model trained on the Wikihow All data set. The model was trained for 3 epochs using a batch size of 16 and learning rate of 3e-4. Max_input_length is set to 512 and max_output_length is 150. The model attained a Rouge1 score of 31.2 and a RougeL score of 24.5.
We have written a blog post that covers the training procedure. Please find it here.
## Usage
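A minimal summarization sketch, assuming the standard transformers API (generation settings such as `num_beams` are illustrative, and the card does not state whether a task prefix is required):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deep-learning-analytics/wikihow-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("deep-learning-analytics/wikihow-t5-small")

text = "Lack of fluids can lead to dry mouth, which is a leading cause of bad breath. ..."

# the card states max_input_length=512 and max_output_length=150
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], max_length=150, num_beams=4, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```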
|
[
"# Model name\nWikihow T5-small",
"## Model description\n\nThis is a T5-small model trained on Wikihow All data set. The model was trained for 3 epochs using a batch size of 16 and learning rate of 3e-4. Max_input_lngth is set as 512 and max_output_length is 150. Model attained a Rouge1 score of 31.2 and RougeL score of 24.5. \nWe have written a blog post that covers the training procedure. Please find it here.",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #wikihow #t5-small #lm-head #seq2seq #pipeline-summarization #summarization #eng #dataset-Wikihow #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Model name\nWikihow T5-small",
"## Model description\n\nThis is a T5-small model trained on Wikihow All data set. The model was trained for 3 epochs using a batch size of 16 and learning rate of 3e-4. Max_input_lngth is set as 512 and max_output_length is 150. Model attained a Rouge1 score of 31.2 and RougeL score of 24.5. \nWe have written a blog post that covers the training procedure. Please find it here.",
"## Usage"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-squad-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased-distilled-squad](https://huggingface.co/distilbert-base-uncased-distilled-squad) on the squad_v2 dataset.
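A minimal question-answering sketch, assuming the standard transformers pipeline API (the question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="deepakvk/distilbert-base-uncased-distilled-squad-finetuned-squad",
)

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased-distilled-squad on the squad_v2 dataset.",
)
print(result["answer"], result["score"])
```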
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
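For reference, a sketch of how these hyperparameters map onto transformers `TrainingArguments` (`output_dir` is a placeholder; options not listed above are assumptions):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-distilled-squad-finetuned-squad",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=0.1,  # a fractional epoch, as listed above
)
```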
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2"], "model-index": [{"name": "distilbert-base-uncased-distilled-squad-finetuned-squad", "results": []}]}
|
deepakvk/distilbert-base-uncased-distilled-squad-finetuned-squad
| null |
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us
|
# distilbert-base-uncased-distilled-squad-finetuned-squad
This model is a fine-tuned version of distilbert-base-uncased-distilled-squad on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.1
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# distilbert-base-uncased-distilled-squad-finetuned-squad\n\nThis model is a fine-tuned version of distilbert-base-uncased-distilled-squad on the squad_v2 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 0.1",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #distilbert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# distilbert-base-uncased-distilled-squad-finetuned-squad\n\nThis model is a fine-tuned version of distilbert-base-uncased-distilled-squad on the squad_v2 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 0.1",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
fill-mask
|
transformers
|
# Welcome to Roberta-Marathi-MLM
## Model Description
> This is a small language model for the [Marathi](https://en.wikipedia.org/wiki/Marathi) language, trained on 1M data samples taken from the
[OSCAR page](https://oscar-public.huma-num.fr/shuffled/mr_dedup.txt.gz)
## Training params
- **Dataset** - 1M data samples from the [OSCAR corpus](https://oscar-corpus.com/) were used to train this model. Even though the full dataset is 2.7 GB, due to resource constraints for training
I have picked only 1M samples from the total 2.7 GB. If you are interested in collaborating and have the computational resources to train on more, you are most welcome to do so.
- **Preprocessing** - ByteLevelBPETokenizer is used to tokenize the sentences at the character level, and the vocabulary size is set to 52k as per the standard values given by 🤗 (a tokenizer-training sketch follows below)
<!-- - **Hyperparameters** - __ByteLevelBPETokenizer__ : vocabulary size = 52_000 and min_frequency = 2
__Trainer__ : num_train_epochs=12 - trained for 12 epochs
per_gpu_train_batch_size=64 - batch size for the datasamples is 64
save_steps=10_000 - save model for every 10k steps
save_total_limit=2 - save limit is set for 2 -->
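A minimal sketch of this preprocessing step with the 🤗 tokenizers library, using the 52k vocabulary and min_frequency of 2 noted above (the corpus file path and special tokens are assumptions):

```python
from tokenizers import ByteLevelBPETokenizer

# train a byte-level BPE tokenizer on the raw Marathi corpus
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["mr_dedup.txt"],  # placeholder path to the OSCAR Marathi dump
    vocab_size=52_000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],  # standard RoBERTa tokens
)
tokenizer.save_model("marathi-tokenizer")  # writes vocab.json and merges.txt
```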
**Intended uses & limitations**
This is for anyone who wants to make use of Marathi language models for various tasks like language generation, translation, and many more use cases.
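A minimal fill-mask usage sketch (the Marathi example sentence is illustrative; `<mask>` is the standard RoBERTa mask token):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="deepampatel/roberta-mlm-marathi")

# predict the masked token in a Marathi sentence (illustrative input)
for prediction in fill_mask("मी शाळेत <mask>."):
    print(prediction["token_str"], prediction["score"])
```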
**Whatever else is helpful!**
If you are interested in collaborating, feel free to reach me: [Deepam](mailto:deepam8155@gmail.com)
|
{"language": "mr"}
|
deepampatel/roberta-mlm-marathi
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"mr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"mr"
] |
TAGS
#transformers #pytorch #jax #roberta #fill-mask #mr #autotrain_compatible #endpoints_compatible #region-us
|
# Welcome to Roberta-Marathi-MLM
## Model Description
> This is a small language model for the Marathi language, trained on 1M data samples taken from the
OSCAR page
## Training params
- Dataset - 1M data samples from the OSCAR corpus (URL) were used to train this model. Even though the full dataset is 2.7 GB, due to resource constraints for training
I have picked only 1M samples from the total 2.7 GB. If you are interested in collaborating and have the computational resources to train on more, you are most welcome to do so.
- Preprocessing - ByteLevelBPETokenizer is used to tokenize the sentences at the character level, and the vocabulary size is set to 52k as per the standard values given by 🤗
Intended uses & limitations
This is for anyone who wants to make use of Marathi language models for various tasks like language generation, translation, and many more use cases.
Whatever else is helpful!
If you are interested in collaborating, feel free to reach me Deepam
|
[
"# Welcome to Roberta-Marathi-MLM",
"## Model Description\n \n> This is a small language model for Marathi language with 1M data samples taken from\n OSCAR page",
"## Training params \n\n- Dataset - 1M data samples are used to train this model from OSCAR page(URL eventhough data set is of 2.7 GB due to resource constraint to train \nI have picked only 1M data from the total 2.7GB data set. If you are interested in collaboration and have computational resources to train on you are most welcome to do so.\n\n- Preprocessing - ByteLevelBPETokenizer is used to tokenize the sentences at character level and vocabulary size is set to 52k as per standard values given by 🤗 \n\n\nIntended uses & limitations\n this is for anyone who wants to make use of marathi language models for various tasks like language generation, translation and many more use cases.\n\nWhatever else is helpful!\n If you are intersted in collaboration feel free to reach me Deepam"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #fill-mask #mr #autotrain_compatible #endpoints_compatible #region-us \n",
"# Welcome to Roberta-Marathi-MLM",
"## Model Description\n \n> This is a small language model for Marathi language with 1M data samples taken from\n OSCAR page",
"## Training params \n\n- Dataset - 1M data samples are used to train this model from OSCAR page(URL eventhough data set is of 2.7 GB due to resource constraint to train \nI have picked only 1M data from the total 2.7GB data set. If you are interested in collaboration and have computational resources to train on you are most welcome to do so.\n\n- Preprocessing - ByteLevelBPETokenizer is used to tokenize the sentences at character level and vocabulary size is set to 52k as per standard values given by 🤗 \n\n\nIntended uses & limitations\n this is for anyone who wants to make use of marathi language models for various tasks like language generation, translation and many more use cases.\n\nWhatever else is helpful!\n If you are intersted in collaboration feel free to reach me Deepam"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8789
- Wer: 1.3456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
{"language": ["ab"], "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "output", "results": []}]}
|
deepdml/output
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ab"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ab #dataset-common_voice #endpoints_compatible #region-us
|
# output
This model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8789
- Wer: 1.3456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
[
"# output\n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 156.8789\n- Wer: 1.3456",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_7_0 #generated_from_trainer #ab #dataset-common_voice #endpoints_compatible #region-us \n",
"# output\n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 156.8789\n- Wer: 1.3456",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4798
- Wer: 0.3474
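A minimal transcription sketch, assuming the standard transformers ASR pipeline (the audio path is a placeholder; wav2vec2-base expects 16 kHz mono input):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="deepdml/wav2vec2-base-timit-demo-colab",
)

# transcribe a local audio file (placeholder path, 16 kHz mono WAV)
print(asr("sample.wav")["text"])
```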
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5229 | 4.0 | 500 | 1.6557 | 1.0422 |
| 0.6618 | 8.0 | 1000 | 0.4420 | 0.4469 |
| 0.2211 | 12.0 | 1500 | 0.4705 | 0.4002 |
| 0.1281 | 16.0 | 2000 | 0.4347 | 0.3688 |
| 0.0868 | 20.0 | 2500 | 0.4653 | 0.3590 |
| 0.062 | 24.0 | 3000 | 0.4747 | 0.3519 |
| 0.0472 | 28.0 | 3500 | 0.4798 | 0.3474 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
|
deepdml/wav2vec2-base-timit-demo-colab
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-timit-demo-colab
==============================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4798
* Wer: 0.3474
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.9.0+cu102
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.9.0+cu102\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-basque
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4276
- Wer: 0.5962
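A minimal inference sketch with the processor/model API (the audio file path is a placeholder; loading via librosa is an assumption, any 16 kHz float array works):

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("deepdml/wav2vec2-large-xls-r-300m-basque")
model = Wav2Vec2ForCTC.from_pretrained("deepdml/wav2vec2-large-xls-r-300m-basque")

# load and resample a local audio file to 16 kHz (placeholder path)
speech, _ = librosa.load("sample_eu.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```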
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9902 | 1.29 | 400 | 2.1257 | 1.0 |
| 0.9625 | 2.59 | 800 | 0.5695 | 0.7452 |
| 0.4605 | 3.88 | 1200 | 0.4276 | 0.5962 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"language": "eu", "license": "apache-2.0", "tags": ["automatic-speech-recognition", "basque", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-basque", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "eu"}, "metrics": [{"type": "wer", "value": 51.89, "name": "Test WER"}, {"type": "cer", "value": 10.01, "name": "Test CER"}]}]}]}
|
deepdml/wav2vec2-large-xls-r-300m-basque
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"basque",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"eu",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"eu"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #basque #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #eu #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-basque
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4276
* Wer: 0.5962
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 4
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #basque #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #eu #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
null | null |
# Detectron2 Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis
The model has been trained with the Tensorflow training toolkit Tensorpack and then transferred to Pytorch using a conversion script.
The Tensorflow and Pytorch models differ slightly (padding ...); however, validating both models gives a difference of less than 0.03 mAP.
A second model has been added where the Tensorpack model has been used as initial checkpoint and training has been resumed for 20K iterations.
Performance of this model is now superior to the Tensorpack model.
Please check: [Xu Zhong et al. - PubLayNet: largest dataset ever for document layout analysis](https://arxiv.org/abs/1908.07836).
This model is different from the model used in the paper.
The code has been adapted so that it can be used in a **deep**doctection pipeline.
## How this model can be used
This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions following this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.
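A minimal sketch of running this model inside such a pipeline via the analyzer factory, assuming a recent deepdoctection release (the document path is a placeholder, and analyzer options vary by version):

```python
import deepdoctection as dd

# the default analyzer bundles layout detection, table recognition and OCR
analyzer = dd.get_dd_analyzer()

df = analyzer.analyze(path="sample.pdf")  # placeholder document
df.reset_state()  # required before iterating over the dataflow
for page in df:
    print(page.text)  # text assembled from the detected layout segments
```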
## This is an inference model only
To reduce the size of the checkpoint, we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine-tune this model, please use Tensorflow and its training script. More information can be found in [this model card](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_publaynet).
|
{"license": "apache-2.0", "tags": ["Pytorch"], "datasets": ["Publaynet"]}
|
deepdoctection/d2_casc_rcnn_X_32xd4_50_FPN_GN_2FC_publaynet_inference_only
| null |
[
"Pytorch",
"dataset:Publaynet",
"arxiv:1908.07836",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1908.07836"
] |
[] |
TAGS
#Pytorch #dataset-Publaynet #arxiv-1908.07836 #license-apache-2.0 #region-us
|
# Detectron2 Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis
The model has been trained with the Tensorflow training toolkit Tensorpack and then transferred to Pytorch using a conversion script.
The Tensorflow and Pytorch models differ slightly (padding ...); however, validating both models gives a difference of less than 0.03 mAP.
A second model has been added where the Tensorpack model has been used as initial checkpoint and training has been resumed for 20K iterations.
Performance of this model is now superior to the Tensorpack model.
Please check: Xu Zhong et al. - PubLayNet: largest dataset ever for document layout analysis.
This model is different from the model used in the paper.
The code has been adapted so that it can be used in a deepdoctection pipeline.
## How this model can be used
This model can be used with deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instructions following this Get_started tutorial.
## This is an inference model only
To reduce the size of the checkpoint, we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine-tune this model, please use Tensorflow and its training script. More information can be found in this model card.
|
[
"# Detectron2 Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis\n\nThe model and has been trained with the Tensorflow training toolkit Tensorpack and then transferred to Pytorch using a conversion script. \nThe Tensorflow and Pytorch models differ slightly (padding ...), however validating both models give a difference of less than 0.03 mAP.\n\nA second model has been added where the Tensorpack model has been used as initial checkpoint and training has been resumed for 20K iterations.\nPerformance of this model is now superior to the Tensorpack model. \n\nPlease check: Xu Zhong et. all. - PubLayNet: largest dataset ever for document layout analysis. \n\nThis model is different from the model used the paper. \n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## This is an inference model only\n\nTo reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine tune this model please use Tensorflow, as well as its training script. More information can be found in this this model card."
] |
[
"TAGS\n#Pytorch #dataset-Publaynet #arxiv-1908.07836 #license-apache-2.0 #region-us \n",
"# Detectron2 Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis\n\nThe model and has been trained with the Tensorflow training toolkit Tensorpack and then transferred to Pytorch using a conversion script. \nThe Tensorflow and Pytorch models differ slightly (padding ...), however validating both models give a difference of less than 0.03 mAP.\n\nA second model has been added where the Tensorpack model has been used as initial checkpoint and training has been resumed for 20K iterations.\nPerformance of this model is now superior to the Tensorpack model. \n\nPlease check: Xu Zhong et. all. - PubLayNet: largest dataset ever for document layout analysis. \n\nThis model is different from the model used the paper. \n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## This is an inference model only\n\nTo reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine tune this model please use Tensorflow, as well as its training script. More information can be found in this this model card."
] |
null | null |
# Detectron2 Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables.
The model has been trained with the Tensorflow training toolkit Tensorpack and then transferred to Pytorch using a conversion script.
The Tensorflow and Pytorch models differ slightly (padding ...); however, validating both models gives a difference of less than 0.03 mAP.
A second model has been added where the Tensorpack model has been used as initial checkpoint and training has been resumed for 50K iterations.
Performance of this model is now superior to the Tensorpack model.
Regarding the dataset, please check: [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683).
The model has been trained on detecting cells from tables. Note that the dataset contains tables only. Therefore, it is required to perform a table detection task before
detecting cells.
The code has been adapted so that it can be used in a **deep**doctection pipeline.
## How this model can be used
This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions following this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.
## This is an inference model only
To reduce the size of the checkpoint, we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine-tune this model, please use Tensorflow and its training script. More information can be found in [this model card](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c).
|
{"license": "apache-2.0", "tags": ["Pytorch"], "datasets": ["Pubtabnet"]}
|
deepdoctection/d2_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c_inference_only
| null |
[
"Pytorch",
"dataset:Pubtabnet",
"arxiv:1911.10683",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.10683"
] |
[] |
TAGS
#Pytorch #dataset-Pubtabnet #arxiv-1911.10683 #license-apache-2.0 #region-us
|
# Detectron2 Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables.
The model has been trained with the Tensorflow training toolkit Tensorpack and then transferred to Pytorch using a conversion script.
The Tensorflow and Pytorch models differ slightly (padding ...); however, validating both models gives a difference of less than 0.03 mAP.
A second model has been added where the Tensorpack model has been used as initial checkpoint and training has been resumed for 50K iterations.
Performance of this model is now superior to the Tensorpack model.
Regarding the dataset, please check: Xu Zhong et al. - Image-based table recognition: data, model, and evaluation.
The model has been trained on detecting cells from tables. Note that the dataset contains tables only. Therefore, it is required to perform a table detection task before
detecting cells.
The code has been adapted so that it can be used in a deepdoctection pipeline.
## How this model can be used
This model can be used with deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instructions following this Get_started tutorial.
## This is an inference model only
To reduce the size of the checkpoint, we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine-tune this model, please use Tensorflow and its training script. More information can be found in this model card.
|
[
"# Detectron2 Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. \n\nThe model and has been trained with the Tensorflow training toolkit Tensorpack and then transferred to Pytorch using a conversion script. \nThe Tensorflow and Pytorch models differ slightly (padding ...), however validating both models give a difference of less than 0.03 mAP. \n\nA second model has been added where the Tensorpack model has been used as initial checkpoint and training has been resumed for 50K iterations.\nPerformance of this model is now superior to the Tensorpack model. \n\nRegarding the dataset, please check: Xu Zhong et. all. - Image-based table recognition: data, model, and evaluation. \n\nThe model has been trained on detecting cells from tables. Note, that the datasets contains tables only. Therefore, it is required to perform a table detection task before \ndetecting cells.\n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## This is an inference model only\n\nTo reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine tune this model please use Tensorflow, as well as its training script. More information can be found in this this model card."
] |
[
"TAGS\n#Pytorch #dataset-Pubtabnet #arxiv-1911.10683 #license-apache-2.0 #region-us \n",
"# Detectron2 Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. \n\nThe model and has been trained with the Tensorflow training toolkit Tensorpack and then transferred to Pytorch using a conversion script. \nThe Tensorflow and Pytorch models differ slightly (padding ...), however validating both models give a difference of less than 0.03 mAP. \n\nA second model has been added where the Tensorpack model has been used as initial checkpoint and training has been resumed for 50K iterations.\nPerformance of this model is now superior to the Tensorpack model. \n\nRegarding the dataset, please check: Xu Zhong et. all. - Image-based table recognition: data, model, and evaluation. \n\nThe model has been trained on detecting cells from tables. Note, that the datasets contains tables only. Therefore, it is required to perform a table detection task before \ndetecting cells.\n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## This is an inference model only\n\nTo reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine tune this model please use Tensorflow, as well as its training script. More information can be found in this this model card."
] |
null | null |
# Detectron2 Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables.
The model has been trained with the Tensorflow training toolkit Tensorpack and then transferred to Pytorch using a conversion script.
The Tensorflow and Pytorch models differ slightly (padding ...); however, validating both models gives a difference of less than 0.03 mAP.
A second model has been added where the Tensorpack model has been used as initial checkpoint and training has been resumed for 20K iterations. Performance of this model is now superior to the Tensorpack model.
Regarding the dataset, please check: [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683).
The model has been trained on detecting rows and columns for tables. As row and column bounding boxes are not a priori part of the annotations, they are
calculated using the bounding boxes of the cells and the intrinsic structure of the enclosed HTML.
The code has been adapted so that it can be used in a **deep**doctection pipeline.
## How this model can be used
This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions following this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.
## This is an inference model only
To reduce the size of the checkpoint, we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine-tune this model, please use Tensorflow and its training script. More information can be found in [this model card](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc).
|
{"license": "apache-2.0", "tags": ["Pytorch"], "datasets": ["Pubtabnet"]}
|
deepdoctection/d2_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc_inference_only
| null |
[
"Pytorch",
"dataset:Pubtabnet",
"arxiv:1911.10683",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.10683"
] |
[] |
TAGS
#Pytorch #dataset-Pubtabnet #arxiv-1911.10683 #license-apache-2.0 #region-us
|
# Detectron2 Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables.
The model has been trained with the Tensorflow training toolkit Tensorpack and then transferred to Pytorch using a conversion script.
The Tensorflow and Pytorch models differ slightly (padding ...); however, validating both models gives a difference of less than 0.03 mAP.
A second model has been added where the Tensorpack model has been used as initial checkpoint and training has been resumed for 20K iterations. Performance of this model is now superior to the Tensorpack model.
Regarding the dataset, please check: Xu Zhong et al. - Image-based table recognition: data, model, and evaluation.
The model has been trained on detecting rows and columns for tables. As row and column bounding boxes are not a priori part of the annotations, they are
calculated using the bounding boxes of the cells and the intrinsic structure of the enclosed HTML.
The code has been adapted so that it can be used in a deepdoctection pipeline.
## How this model can be used
This model can be used with deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instructions following this Get_started tutorial.
## This is an inference model only
To reduce the size of the checkpoint, we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine-tune this model, please use Tensorflow and its training script. More information can be found in this model card.
|
[
"# Detectron2 Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. \n\nThe model and has been trained with the Tensorflow training toolkit Tensorpack and then transferred to Pytorch using a conversion script. \nThe Tensorflow and Pytorch models differ slightly (padding ...), however validating both models give a difference of less than 0.03 mAP. \n\nA second model has been added where the Tensorpack model has been used as initial checkpoint and training has been resumed for 20K iterations. Performance of this model is now superior to the Tensorpack model.\n\nRegarding the dataset, please check: Xu Zhong et. all. - Image-based table recognition: data, model, and evaluation. \n\nThe model has been trained on detecting rows and columns for tables. As rows and column bounding boxes are not a priori an element of the annotations they are\ncalculated using the bounding boxes of the cells and the intrinsic structure of the enclosed HTML.\n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## This is an inference model only\n\nTo reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine tune this model please use Tensorflow, as well as its training script. More information can be found in this this model card."
] |
[
"TAGS\n#Pytorch #dataset-Pubtabnet #arxiv-1911.10683 #license-apache-2.0 #region-us \n",
"# Detectron2 Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. \n\nThe model and has been trained with the Tensorflow training toolkit Tensorpack and then transferred to Pytorch using a conversion script. \nThe Tensorflow and Pytorch models differ slightly (padding ...), however validating both models give a difference of less than 0.03 mAP. \n\nA second model has been added where the Tensorpack model has been used as initial checkpoint and training has been resumed for 20K iterations. Performance of this model is now superior to the Tensorpack model.\n\nRegarding the dataset, please check: Xu Zhong et. all. - Image-based table recognition: data, model, and evaluation. \n\nThe model has been trained on detecting rows and columns for tables. As rows and column bounding boxes are not a priori an element of the annotations they are\ncalculated using the bounding boxes of the cells and the intrinsic structure of the enclosed HTML.\n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## This is an inference model only\n\nTo reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine tune this model please use Tensorflow, as well as its training script. More information can be found in this this model card."
] |
null | null |
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis
The model and its training code have been mainly taken from: [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN).
Please check: [Xu Zhong et al. - PubLayNet: largest dataset ever for document layout analysis](https://arxiv.org/abs/1908.07836).
This model is different from the model used in the paper.
The code has been adapted so that it can be used in a **deep**doctection pipeline.
## How this model can be used
This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions following this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.
## How this model was trained.
To recreate the model run on the **deep**doctection framework, run:
```python
import os
from deep_doctection.datasets import DatasetRegistry
from deep_doctection.eval import MetricRegistry
from deep_doctection.utils import get_configs_dir_path
from deep_doctection.train import train_faster_rcnn
publaynet = DatasetRegistry.get_dataset("publaynet")
path_config_yaml=os.path.join(get_configs_dir_path(),"tp/layout/conf_frcnn_layout.yaml")
path_weights = ""
dataset_train = publaynet
config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.EVAL_PERIOD=200","TRAIN.STARTING_EPOCH=1",
"PREPROC.TRAIN_SHORT_EDGE_SIZE=[800,1200]","TRAIN.CHECKPOINT_PERIOD=50",
"BACKBONE.FREEZE_AT=0"]
build_train_config=["max_datapoints=335703"]
dataset_val = publaynet
build_val_config = ["max_datapoints=2000"]
coco_metric = MetricRegistry.get_metric("coco")
train_faster_rcnn(path_config_yaml=path_config_yaml,
dataset_train=dataset_train,
path_weights=path_weights,
config_overwrite=config_overwrite,
log_dir="/path/to/dir",
build_train_config=build_train_config,
dataset_val=dataset_val,
build_val_config=build_val_config,
metric=coco_metric,
pipeline_component_name="ImageLayoutService"
)
```
## How to fine-tune this model
To fine tune this model, please check this [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
|
{"license": "apache-2.0", "tags": ["Tensorflow"], "datasets": ["Publaynet"]}
|
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_publaynet
| null |
[
"Tensorflow",
"dataset:Publaynet",
"arxiv:1908.07836",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1908.07836"
] |
[] |
TAGS
#Tensorflow #dataset-Publaynet #arxiv-1908.07836 #license-apache-2.0 #region-us
|
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis
The model and its training code have been mainly taken from: Tensorpack.
Please check: Xu Zhong et al. - PubLayNet: largest dataset ever for document layout analysis.
This model is different from the model used in the paper.
The code has been adapted so that it can be used in a deepdoctection pipeline.
## How this model can be used
This model can be used with deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instructions following this Get_started tutorial.
## How this model was trained.
To recreate the model run on the deepdoctection framework, run:
## How to fine-tune this model
To fine tune this model, please check this Fine-tune tutorial.
|
[
"# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis\n\nThe model and its training code has been mainly taken from: Tensorpack . \n\nPlease check: Xu Zhong et. all. - PubLayNet: largest dataset ever for document layout analysis. \n\nThis model is different from the model used the paper. \n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## How this model was trained. \n\nTo recreate the model run on the deepdoctection framework, run:",
"## How to fine-tune this model\n\nTo fine tune this model, please check this Fine-tune tutorial."
] |
[
"TAGS\n#Tensorflow #dataset-Publaynet #arxiv-1908.07836 #license-apache-2.0 #region-us \n",
"# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis\n\nThe model and its training code has been mainly taken from: Tensorpack . \n\nPlease check: Xu Zhong et. all. - PubLayNet: largest dataset ever for document layout analysis. \n\nThis model is different from the model used the paper. \n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## How this model was trained. \n\nTo recreate the model run on the deepdoctection framework, run:",
"## How to fine-tune this model\n\nTo fine tune this model, please check this Fine-tune tutorial."
] |
null | null |
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis
The model and its training code have been mainly taken from: [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN).
Please check: [Xu Zhong et al. - PubLayNet: largest dataset ever for document layout analysis](https://arxiv.org/abs/1908.07836).
This model is different from the model used in the paper.
The code has been adapted so that it can be used in a **deep**doctection pipeline.
## How this model can be used
This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions following this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.
## This is an inference model only
To reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine tune this model please check [this model](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_publaynet).
## How this model was trained.
To recreate the model run on the **deep**doctection framework, run:
```python
import os
from deep_doctection.datasets import DatasetRegistry
from deep_doctection.eval import MetricRegistry
from deep_doctection.utils import get_configs_dir_path
from deep_doctection.train import train_faster_rcnn
publaynet = DatasetRegistry.get_dataset("publaynet")
path_config_yaml=os.path.join(get_configs_dir_path(),"tp/layout/conf_frcnn_layout.yaml")
path_weights = ""
dataset_train = publaynet
config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.EVAL_PERIOD=200","TRAIN.STARTING_EPOCH=1",
"PREPROC.TRAIN_SHORT_EDGE_SIZE=[800,1200]","TRAIN.CHECKPOINT_PERIOD=50",
"BACKBONE.FREEZE_AT=0"]
build_train_config=["max_datapoints=335703"]
dataset_val = publaynet
build_val_config = ["max_datapoints=2000"]
coco_metric = MetricRegistry.get_metric("coco")
train_faster_rcnn(path_config_yaml=path_config_yaml,
dataset_train=dataset_train,
path_weights=path_weights,
config_overwrite=config_overwrite,
log_dir="/path/to/dir",
build_train_config=build_train_config,
dataset_val=dataset_val,
build_val_config=build_val_config,
metric=coco_metric,
pipeline_component_name="ImageLayoutService"
)
```
|
{"license": "apache-2.0", "tags": ["Tensorflow"], "datasets": ["Publaynet"]}
|
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_publaynet_inference_only
| null |
[
"Tensorflow",
"dataset:Publaynet",
"arxiv:1908.07836",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1908.07836"
] |
[] |
TAGS
#Tensorflow #dataset-Publaynet #arxiv-1908.07836 #license-apache-2.0 #region-us
|
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis
The model and its training code have been mainly taken from: Tensorpack.
Please check: Xu Zhong et al. - PubLayNet: largest dataset ever for document layout analysis.
This model is different from the model used in the paper.
The code has been adapted so that it can be used in a deepdoctection pipeline.
## How this model can be used
This model can be used with deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instructions following this Get_started tutorial.
## This is an inference model only
To reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine tune this model please check this model.
## How this model was trained.
To recreate the model run on the deepdoctection framework, run:
|
[
"# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis\n\nThe model and its training code has been mainly taken from: Tensorpack . \n\nPlease check: Xu Zhong et. all. - PubLayNet: largest dataset ever for document layout analysis. \n\nThis model is different from the model used the paper. \n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## This is an inference model only\n\nTo reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine tune this model please check this model.",
"## How this model was trained. \n\nTo recreate the model run on the deepdoctection framework, run:"
] |
[
"TAGS\n#Tensorflow #dataset-Publaynet #arxiv-1908.07836 #license-apache-2.0 #region-us \n",
"# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Publaynet for Document Layout Analysis\n\nThe model and its training code has been mainly taken from: Tensorpack . \n\nPlease check: Xu Zhong et. all. - PubLayNet: largest dataset ever for document layout analysis. \n\nThis model is different from the model used the paper. \n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## This is an inference model only\n\nTo reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine tune this model please check this model.",
"## How this model was trained. \n\nTo recreate the model run on the deepdoctection framework, run:"
] |
null | null |
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables.
The model and its training code have been mainly taken from: [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN).
Regarding the dataset, please check: [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683).
The model has been trained on detecting cells from tables. Note that the dataset contains tables only. Therefore, it is required to perform a table detection task before
detecting cells.
The code has been adapted so that it can be used in a **deep**doctection pipeline.
## How this model can be used
This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. Check the general instructions following this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.
## How this model was trained.
To recreate the model run on the **deep**doctection framework, run:
```python
import os
from deep_doctection.datasets import DatasetRegistry
from deep_doctection.eval import MetricRegistry
from deep_doctection.utils import get_configs_dir_path
from deep_doctection.train import train_faster_rcnn
pubtabnet = DatasetRegistry.get_dataset("pubtabnet")
pubtabnet.dataflow.categories.filter_categories(categories="CELL")
path_config_yaml=os.path.join(get_configs_dir_path(),"tp/cell/conf_frcnn_cell.yaml")
path_weights = ""
dataset_train = pubtabnet
config_overwrite=["TRAIN.STEPS_PER_EPOCH=500","TRAIN.STARTING_EPOCH=1",
"TRAIN.CHECKPOINT_PERIOD=50","BACKBONE.FREEZE_AT=0", "PREPROC.TRAIN_SHORT_EDGE_SIZE=[200,600]"]
build_train_config=["max_datapoints=500000"]
dataset_val = pubtabnet
build_val_config = ["max_datapoints=4000"]
coco_metric = MetricRegistry.get_metric("coco")
coco_metric.set_params(max_detections=[50,200,600], area_range=[[0,1000000],[0,200],[200,800],[800,1000000]])
train_faster_rcnn(path_config_yaml=path_config_yaml,
dataset_train=dataset_train,
path_weights=path_weights,
config_overwrite=config_overwrite,
log_dir="/path/to/dir",
build_train_config=build_train_config,
dataset_val=dataset_val,
build_val_config=build_val_config,
metric=coco_metric,
pipeline_component_name="ImageLayoutService"
)
```
## How to fine-tune this model
To fine-tune this model, please check this [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
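For orientation, here is a minimal fine-tuning sketch that reuses `train_faster_rcnn` and the variables from the training recipe above. The checkpoint filename and the shortened schedule are assumptions, not the recipe used for this repository; point `path_weights` at the actual weights file shipped here.

```python
# Sketch: fine-tune from this checkpoint instead of training from scratch.
# The weights path and the learning-rate schedule below are placeholders.
path_weights = "/path/to/checkpoint/model-checkpoint.data-00000-of-00001"  # hypothetical filename

train_faster_rcnn(path_config_yaml=path_config_yaml,
                  dataset_train=dataset_train,
                  path_weights=path_weights,                       # start from pre-trained weights
                  config_overwrite=["TRAIN.LR_SCHEDULE=[50000]"],  # shorter schedule (assumption)
                  log_dir="/path/to/finetune_dir",
                  build_train_config=build_train_config,
                  dataset_val=dataset_val,
                  build_val_config=build_val_config,
                  metric=coco_metric,
                  pipeline_component_name="ImageLayoutService")
```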
|
{"license": "apache-2.0", "tags": ["Tensorflow"], "datasets": ["Pubtabnet"]}
|
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c
| null |
[
"Tensorflow",
"dataset:Pubtabnet",
"arxiv:1911.10683",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.10683"
] |
[] |
TAGS
#Tensorflow #dataset-Pubtabnet #arxiv-1911.10683 #license-apache-2.0 #region-us
|
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables.
The model and its training code have been mainly taken from Tensorpack.
Regarding the dataset, please check: Xu Zhong et al. - Image-based table recognition: data, model, and evaluation.
The model has been trained to detect cells in tables. Note that the dataset contains tables only; therefore, a table detection step is required before
detecting cells.
The code has been adapted so that it can be used in a deepdoctection pipeline.
## How this model can be used
This model can be used with deepdoctection in a full pipeline, along with table recognition and OCR. For general instructions, follow this Get_started tutorial.
## How this model was trained.
To recreate the training run with the deepdoctection framework, run:
## How to fine-tune this model
To fine-tune this model, please check this Fine-tune tutorial.
|
[
"# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. \n\nThe model and its training code has been mainly taken from: Tensorpack . \n\nRegarding the dataset, please check: Xu Zhong et. all. - Image-based table recognition: data, model, and evaluation. \n\nThe model has been trained on detecting cells from tables. Note, that the datasets contains tables only. Therefore, it is required to perform a table detection task before \ndetecting cells.\n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## How this model was trained. \n\nTo recreate the model run on the deepdoctection framework, run:",
"## How to fine-tune this model\n\nTo fine tune this model, please check this Fine-tune tutorial."
] |
[
"TAGS\n#Tensorflow #dataset-Pubtabnet #arxiv-1911.10683 #license-apache-2.0 #region-us \n",
"# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. \n\nThe model and its training code has been mainly taken from: Tensorpack . \n\nRegarding the dataset, please check: Xu Zhong et. all. - Image-based table recognition: data, model, and evaluation. \n\nThe model has been trained on detecting cells from tables. Note, that the datasets contains tables only. Therefore, it is required to perform a table detection task before \ndetecting cells.\n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## How this model was trained. \n\nTo recreate the model run on the deepdoctection framework, run:",
"## How to fine-tune this model\n\nTo fine tune this model, please check this Fine-tune tutorial."
] |
null | null |
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables.
The model and its training code have been mainly taken from [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN).
Regarding the dataset, please check: [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683).
The model has been trained to detect cells in tables. Note that the dataset contains tables only; therefore, a table detection step is required before
detecting cells.
The code has been adapted so that it can be used in a **deep**doctection pipeline.
## How this model can be used
This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. For general instructions, follow this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.
## This is an inference model only
To reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore, it cannot be used for fine-tuning. To fine-tune this model, please check this [model](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c).
## How this model was trained.
To recreate the training run with the **deep**doctection framework, run:
```python
import os

from deep_doctection.datasets import DatasetRegistry
from deep_doctection.eval import MetricRegistry
from deep_doctection.utils import get_configs_dir_path
from deep_doctection.train import train_faster_rcnn

pubtabnet = DatasetRegistry.get_dataset("pubtabnet")
pubtabnet.dataflow.categories.filter_categories(categories="CELL")

path_config_yaml = os.path.join(get_configs_dir_path(), "tp/cell/conf_frcnn_cell.yaml")
path_weights = ""

dataset_train = pubtabnet
config_overwrite = ["TRAIN.STEPS_PER_EPOCH=500", "TRAIN.STARTING_EPOCH=1",
                    "TRAIN.CHECKPOINT_PERIOD=50", "BACKBONE.FREEZE_AT=0",
                    "PREPROC.TRAIN_SHORT_EDGE_SIZE=[200,600]"]
build_train_config = ["max_datapoints=500000"]

dataset_val = pubtabnet
build_val_config = ["max_datapoints=4000"]

coco_metric = MetricRegistry.get_metric("coco")
coco_metric.set_params(max_detections=[50, 200, 600],
                       area_range=[[0, 1000000], [0, 200], [200, 800], [800, 1000000]])

train_faster_rcnn(path_config_yaml=path_config_yaml,
                  dataset_train=dataset_train,
                  path_weights=path_weights,
                  config_overwrite=config_overwrite,
                  log_dir="/path/to/dir",
                  build_train_config=build_train_config,
                  dataset_val=dataset_val,
                  build_val_config=build_val_config,
                  metric=coco_metric,
                  pipeline_component_name="ImageLayoutService")
```
## How to fine-tune this model
To fine-tune this model, please check this [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
|
{"license": "apache-2.0", "tags": ["Tensorflow"], "datasets": ["Pubtabnet"]}
|
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_c_inference_only
| null |
[
"Tensorflow",
"dataset:Pubtabnet",
"arxiv:1911.10683",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.10683"
] |
[] |
TAGS
#Tensorflow #dataset-Pubtabnet #arxiv-1911.10683 #license-apache-2.0 #region-us
|
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables.
The model and its training code have been mainly taken from Tensorpack.
Regarding the dataset, please check: Xu Zhong et al. - Image-based table recognition: data, model, and evaluation.
The model has been trained to detect cells in tables. Note that the dataset contains tables only; therefore, a table detection step is required before
detecting cells.
The code has been adapted so that it can be used in a deepdoctection pipeline.
## How this model can be used
This model can be used with deepdoctection in a full pipeline, along with table recognition and OCR. For general instructions, follow this Get_started tutorial.
## This is an inference model only
To reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore, it cannot be used for fine-tuning. To fine-tune this model, please check this model.
## How this model was trained.
To recreate the training run with the deepdoctection framework, run:
## How to fine-tune this model
To fine-tune this model, please check this Fine-tune tutorial.
|
[
"# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. \n\nThe model and its training code has been mainly taken from: Tensorpack . \n\nRegarding the dataset, please check: Xu Zhong et. all. - Image-based table recognition: data, model, and evaluation. \n\nThe model has been trained on detecting cells from tables. Note, that the datasets contains tables only. Therefore, it is required to perform a table detection task before \ndetecting cells.\n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## This is an inference model only\n\nTo reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine tune this model please check this model .",
"## How this model was trained. \n\nTo recreate the model run on the deepdoctection framework, run:",
"## How to fine-tune this model\n\nTo fine tune this model, please check this Fine-tune tutorial."
] |
[
"TAGS\n#Tensorflow #dataset-Pubtabnet #arxiv-1911.10683 #license-apache-2.0 #region-us \n",
"# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. \n\nThe model and its training code has been mainly taken from: Tensorpack . \n\nRegarding the dataset, please check: Xu Zhong et. all. - Image-based table recognition: data, model, and evaluation. \n\nThe model has been trained on detecting cells from tables. Note, that the datasets contains tables only. Therefore, it is required to perform a table detection task before \ndetecting cells.\n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## This is an inference model only\n\nTo reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine tune this model please check this model .",
"## How this model was trained. \n\nTo recreate the model run on the deepdoctection framework, run:",
"## How to fine-tune this model\n\nTo fine tune this model, please check this Fine-tune tutorial."
] |
null | null |
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables.
The model and its training code have been mainly taken from [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN).
Regarding the dataset, please check: [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683).
The model has been trained to detect rows and columns in tables. As row and column bounding boxes are not a priori part of the annotations, they are
derived from the bounding boxes of the cells and the structure of the enclosing HTML, as sketched below.
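To illustrate the derivation, here is a simplified, self-contained sketch: a row box is the union of the boxes of the cells assigned to that row. In the real dataset build, the cell-to-row assignment comes from the table's HTML; here it is taken as given.

```python
# Simplified sketch of deriving row boxes from cell boxes. The mapping from
# cells to rows (here an input) is obtained from the table's HTML structure.
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def union_box(boxes: List[Box]) -> Box:
    xs_min, ys_min, xs_max, ys_max = zip(*boxes)
    return (min(xs_min), min(ys_min), max(xs_max), max(ys_max))

def row_boxes(cells_by_row: Dict[int, List[Box]]) -> Dict[int, Box]:
    return {row: union_box(boxes) for row, boxes in cells_by_row.items()}

# Example: two rows with two cells each
print(row_boxes({0: [(0, 0, 50, 20), (50, 0, 100, 20)],
                 1: [(0, 20, 50, 40), (50, 20, 100, 40)]}))
```

Column boxes are obtained analogously by taking the union over the cells of each column.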
The code has been adapted so that it can be used in a **deep**doctection pipeline.
## How this model can be used
This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. For general instructions, follow this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.
## How this model was trained.
To recreate the training run with the **deep**doctection framework, run:
```python
import os

from deep_doctection.datasets import DatasetRegistry
from deep_doctection.eval import MetricRegistry
from deep_doctection.utils import get_configs_dir_path
from deep_doctection.train import train_faster_rcnn

pubtabnet = DatasetRegistry.get_dataset("pubtabnet")
pubtabnet.dataflow.categories.set_cat_to_sub_cat({"ITEM": "row_col"})
pubtabnet.dataflow.categories.filter_categories(categories=["ROW", "COLUMN"])

path_config_yaml = os.path.join(get_configs_dir_path(), "tp/rows/conf_frcnn_rows.yaml")
path_weights = ""

dataset_train = pubtabnet
config_overwrite = ["TRAIN.STEPS_PER_EPOCH=500", "TRAIN.STARTING_EPOCH=1",
                    "TRAIN.CHECKPOINT_PERIOD=50"]
build_train_config = ["max_datapoints=500000", "rows_and_cols=True"]

dataset_val = pubtabnet
build_val_config = ["max_datapoints=2000", "rows_and_cols=True"]

coco_metric = MetricRegistry.get_metric("coco")
coco_metric.set_params(max_detections=[50, 200, 600],
                       area_range=[[0, 1000000], [0, 200], [200, 800], [800, 1000000]])

train_faster_rcnn(path_config_yaml=path_config_yaml,
                  dataset_train=dataset_train,
                  path_weights=path_weights,
                  config_overwrite=config_overwrite,
                  log_dir="/path/to/dir",
                  build_train_config=build_train_config,
                  dataset_val=dataset_val,
                  build_val_config=build_val_config,
                  metric=coco_metric,
                  pipeline_component_name="ImageLayoutService")
```
## How to fine-tune this model
To fine-tune this model, please check this [Fine-tune](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Fine_Tune.ipynb) tutorial.
|
{"license": "apache-2.0", "tags": ["Tensorflow"], "datasets": ["Pubtabnet"]}
|
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc
| null |
[
"Tensorflow",
"dataset:Pubtabnet",
"arxiv:1911.10683",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.10683"
] |
[] |
TAGS
#Tensorflow #dataset-Pubtabnet #arxiv-1911.10683 #license-apache-2.0 #region-us
|
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables.
The model and its training code have been mainly taken from Tensorpack.
Regarding the dataset, please check: Xu Zhong et al. - Image-based table recognition: data, model, and evaluation.
The model has been trained to detect rows and columns in tables. As row and column bounding boxes are not a priori part of the annotations, they are
derived from the bounding boxes of the cells and the structure of the enclosing HTML.
The code has been adapted so that it can be used in a deepdoctection pipeline.
## How this model can be used
This model can be used with deepdoctection in a full pipeline, along with table recognition and OCR. For general instructions, follow this Get_started tutorial.
## How this model was trained.
To recreate the training run with the deepdoctection framework, run:
## How to fine-tune this model
To fine-tune this model, please check this Fine-tune tutorial.
|
[
"# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. \r\n\r\nThe model and its training code has been mainly taken from: Tensorpack . \r\n\r\nRegarding the dataset, please check: Xu Zhong et. all. - Image-based table recognition: data, model, and evaluation. \r\n\r\nThe model has been trained on detecting rows and columns for tables. As rows and column bounding boxes are not a priori an element of the annotations they are\r\ncalculated using the bounding boxes of the cells and the intrinsic structure of the enclosed HTML.\r\n\r\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\r\n\r\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## How this model was trained. \r\n\r\nTo recreate the model run on the deepdoctection framework, run:",
"## How to fine-tune this model\r\n\r\nTo fine tune this model, please check this Fine-tune tutorial."
] |
[
"TAGS\n#Tensorflow #dataset-Pubtabnet #arxiv-1911.10683 #license-apache-2.0 #region-us \n",
"# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. \r\n\r\nThe model and its training code has been mainly taken from: Tensorpack . \r\n\r\nRegarding the dataset, please check: Xu Zhong et. all. - Image-based table recognition: data, model, and evaluation. \r\n\r\nThe model has been trained on detecting rows and columns for tables. As rows and column bounding boxes are not a priori an element of the annotations they are\r\ncalculated using the bounding boxes of the cells and the intrinsic structure of the enclosed HTML.\r\n\r\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\r\n\r\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## How this model was trained. \r\n\r\nTo recreate the model run on the deepdoctection framework, run:",
"## How to fine-tune this model\r\n\r\nTo fine tune this model, please check this Fine-tune tutorial."
] |
null | null |
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables.
The model and its training code have been mainly taken from [Tensorpack](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN).
Regarding the dataset, please check: [Xu Zhong et al. - Image-based table recognition: data, model, and evaluation](https://arxiv.org/abs/1911.10683).
The model has been trained to detect rows and columns in tables. As row and column bounding boxes are not a priori part of the annotations, they are
derived from the bounding boxes of the cells and the structure of the enclosing HTML.
The code has been adapted so that it can be used in a **deep**doctection pipeline.
## How this model can be used
This model can be used with **deep**doctection in a full pipeline, along with table recognition and OCR. For general instructions, follow this [Get_started](https://github.com/deepdoctection/deepdoctection/blob/master/notebooks/Get_Started.ipynb) tutorial.
## This is an inference model only
To reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore, it cannot be used for fine-tuning. To fine-tune this model, please check this [model](https://huggingface.co/deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc).
## How this model was trained.
To recreate the training run with the **deep**doctection framework, run:
```python
import os

from deep_doctection.datasets import DatasetRegistry
from deep_doctection.eval import MetricRegistry
from deep_doctection.utils import get_configs_dir_path
from deep_doctection.train import train_faster_rcnn

pubtabnet = DatasetRegistry.get_dataset("pubtabnet")
pubtabnet.dataflow.categories.set_cat_to_sub_cat({"ITEM": "row_col"})
pubtabnet.dataflow.categories.filter_categories(categories=["ROW", "COLUMN"])

path_config_yaml = os.path.join(get_configs_dir_path(), "tp/rows/conf_frcnn_rows.yaml")
path_weights = ""

dataset_train = pubtabnet
config_overwrite = ["TRAIN.STEPS_PER_EPOCH=500", "TRAIN.STARTING_EPOCH=1",
                    "TRAIN.CHECKPOINT_PERIOD=50"]
build_train_config = ["max_datapoints=500000", "rows_and_cols=True"]

dataset_val = pubtabnet
build_val_config = ["max_datapoints=2000", "rows_and_cols=True"]

coco_metric = MetricRegistry.get_metric("coco")
coco_metric.set_params(max_detections=[50, 200, 600],
                       area_range=[[0, 1000000], [0, 200], [200, 800], [800, 1000000]])

train_faster_rcnn(path_config_yaml=path_config_yaml,
                  dataset_train=dataset_train,
                  path_weights=path_weights,
                  config_overwrite=config_overwrite,
                  log_dir="/path/to/dir",
                  build_train_config=build_train_config,
                  dataset_val=dataset_val,
                  build_val_config=build_val_config,
                  metric=coco_metric,
                  pipeline_component_name="ImageLayoutService")
```
|
{"license": "apache-2.0", "tags": ["Tensorflow"], "datasets": ["Pubtabnet"]}
|
deepdoctection/tp_casc_rcnn_X_32xd4_50_FPN_GN_2FC_pubtabnet_rc_inference_only
| null |
[
"Tensorflow",
"dataset:Pubtabnet",
"arxiv:1911.10683",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.10683"
] |
[] |
TAGS
#Tensorflow #dataset-Pubtabnet #arxiv-1911.10683 #license-apache-2.0 #region-us
|
# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables.
The model and its training code have been mainly taken from Tensorpack.
Regarding the dataset, please check: Xu Zhong et al. - Image-based table recognition: data, model, and evaluation.
The model has been trained to detect rows and columns in tables. As row and column bounding boxes are not a priori part of the annotations, they are
derived from the bounding boxes of the cells and the structure of the enclosing HTML.
The code has been adapted so that it can be used in a deepdoctection pipeline.
## How this model can be used
This model can be used with deepdoctection in a full pipeline, along with table recognition and OCR. For general instructions, follow this Get_started tutorial.
## This is an inference model only
To reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore, it cannot be used for fine-tuning. To fine-tune this model, please check this model.
## How this model was trained.
To recreate the training run with the deepdoctection framework, run:
|
[
"# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. \n\nThe model and its training code has been mainly taken from: Tensorpack . \n\nRegarding the dataset, please check: Xu Zhong et. all. - Image-based table recognition: data, model, and evaluation. \n\nThe model has been trained on detecting rows and columns for tables. As rows and column bounding boxes are not a priori an element of the annotations they are\ncalculated using the bounding boxes of the cells and the intrinsic structure of the enclosed HTML.\n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## This is an inference model only\n\nTo reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine tune this model please check this model.",
"## How this model was trained. \n\nTo recreate the model run on the deepdoctection framework, run:"
] |
[
"TAGS\n#Tensorflow #dataset-Pubtabnet #arxiv-1911.10683 #license-apache-2.0 #region-us \n",
"# Tensorpacks Cascade-RCNN with FPN and Group Normalization on ResNext32xd4-50 trained on Pubtabnet for Semantic Segmentation of tables. \n\nThe model and its training code has been mainly taken from: Tensorpack . \n\nRegarding the dataset, please check: Xu Zhong et. all. - Image-based table recognition: data, model, and evaluation. \n\nThe model has been trained on detecting rows and columns for tables. As rows and column bounding boxes are not a priori an element of the annotations they are\ncalculated using the bounding boxes of the cells and the intrinsic structure of the enclosed HTML.\n\nThe code has been adapted so that it can be used in a deepdoctection pipeline.",
"## How this model can be used\n\nThis model can be used with the deepdoctection in a full pipeline, along with table recognition and OCR. Check the general instruction following this Get_started tutorial.",
"## This is an inference model only\n\nTo reduce the size of the checkpoint we removed all variables that are not necessary for inference. Therefore it cannot be used for fine-tuning. To fine tune this model please check this model.",
"## How this model was trained. \n\nTo recreate the model run on the deepdoctection framework, run:"
] |
image-classification
|
transformers
|
# Poster2Plot
An image captioning model that generates a movie/TV show plot from its poster. It generates decent plots but is by no means perfect. We are still working on improving the model.
## Live demo on Hugging Face Spaces: https://huggingface.co/spaces/deepklarity/poster2plot
# Model Details
The base model uses a Vision Transformer (ViT) model as an image encoder and GPT-2 as a decoder.
We used the following models (see the composition sketch below):
* Encoder: [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k)
* Decoder: [gpt2](https://huggingface.co/gpt2)
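The sketch below shows how such an encoder-decoder pair can be composed with `transformers`. This is initialization only, not the released checkpoint: the actual model was additionally trained on poster-plot pairs.

```python
from transformers import VisionEncoderDecoderModel

# Compose a fresh ViT encoder with a GPT-2 decoder. The cross-attention
# weights are randomly initialized and must be trained before use.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2"
)
```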
# Datasets
Publicly available IMDb datasets were used to train the model.
# How to use
## In PyTorch
```python
import torch
import re
import requests
from PIL import Image
from transformers import AutoTokenizer, AutoFeatureExtractor, VisionEncoderDecoderModel
# Pattern to ignore all the text after 2 or more full stops
regex_pattern = "[.]{2,}"
def post_process(text):
try:
text = text.strip()
text = re.split(regex_pattern, text)[0]
except Exception as e:
print(e)
pass
return text
def predict(image, max_length=64, num_beams=4):
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
pixel_values = pixel_values.to(device)
with torch.no_grad():
output_ids = model.generate(
pixel_values,
max_length=max_length,
num_beams=num_beams,
return_dict_in_generate=True,
).sequences
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
pred = post_process(preds[0])
return pred
model_name_or_path = "deepklarity/poster2plot"
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Load model.
model = VisionEncoderDecoderModel.from_pretrained(model_name_or_path)
model.to(device)
print("Loaded model")
feature_extractor = AutoFeatureExtractor.from_pretrained(model.encoder.name_or_path)
print("Loaded feature_extractor")
tokenizer = AutoTokenizer.from_pretrained(model.decoder.name_or_path, use_fast=True)
if model.decoder.name_or_path == "gpt2":
tokenizer.pad_token = tokenizer.eos_token
print("Loaded tokenizer")
url = "https://upload.wikimedia.org/wikipedia/en/2/26/Moana_Teaser_Poster.jpg"
with Image.open(requests.get(url, stream=True).raw) as image:
pred = predict(image)
print(pred)
```
|
{"language": "en", "tags": ["image-classification", "image-captioning"]}
|
deepklarity/poster2plot
| null |
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-classification",
"image-captioning",
"en",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #vision-encoder-decoder #image-classification #image-captioning #en #endpoints_compatible #has_space #region-us
|
# Poster2Plot
An image captioning model that generates a movie/TV show plot from its poster. It generates decent plots but is by no means perfect. We are still working on improving the model.
## Live demo on Hugging Face Spaces: URL
# Model Details
The base model uses a Vision Transformer (ViT) model as an image encoder and GPT-2 as a decoder.
We used the following models:
* Encoder: google/vit-base-patch16-224-in21k
* Decoder: gpt2
# Datasets
Publicly available IMDb datasets were used to train the model.
# How to use
## In PyTorch
|
[
"# Poster2Plot\n\nAn image captioning model to generate movie/t.v show plot from poster. It generates decent plots but is no way perfect. We are still working on improving the model.",
"## Live demo on Hugging Face Spaces: URL",
"# Model Details\n\nThe base model uses a Vision Transformer (ViT) model as an image encoder and GPT-2 as a decoder.\n\nWe used the following models:\n\n* Encoder: google/vit-base-patch16-224-in21k\n* Decoder: gpt2",
"# Datasets\n\nPublicly available IMDb datasets were used to train the model.",
"# How to use",
"## In PyTorch"
] |
[
"TAGS\n#transformers #pytorch #vision-encoder-decoder #image-classification #image-captioning #en #endpoints_compatible #has_space #region-us \n",
"# Poster2Plot\n\nAn image captioning model to generate movie/t.v show plot from poster. It generates decent plots but is no way perfect. We are still working on improving the model.",
"## Live demo on Hugging Face Spaces: URL",
"# Model Details\n\nThe base model uses a Vision Transformer (ViT) model as an image encoder and GPT-2 as a decoder.\n\nWe used the following models:\n\n* Encoder: google/vit-base-patch16-224-in21k\n* Decoder: gpt2",
"# Datasets\n\nPublicly available IMDb datasets were used to train the model.",
"# How to use",
"## In PyTorch"
] |
null | null |
RoBERTa-base training attempt on Hindi datasets.
|
{}
|
deepklarity/roberta-base-hindi
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
RoBERTa-base training attempt on Hindi datasets.
|
[] |
[
"TAGS\n#region-us \n"
] |
fill-mask
|
transformers
|
# Perceiver IO for language
Perceiver IO model pre-trained on the Masked Language Modeling (MLM) task proposed in [BERT](https://arxiv.org/abs/1810.04805) using a large text corpus obtained by combining [English Wikipedia](https://huggingface.co/datasets/wikipedia) and [C4](https://huggingface.co/datasets/c4). It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver).
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow the model to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For masked language modeling, the output is a tensor containing the prediction scores of the language modeling head, of shape (batch_size, seq_length, vocab_size).
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors train the model directly on raw UTF-8 bytes, rather than on subwords as is done in models like BERT, RoBERTa and GPT-2. This has many benefits: one doesn't need to train a tokenizer before training the model, one doesn't need to maintain a (fixed) vocabulary file, and this also doesn't hurt model performance as shown by [Bostrom et al., 2020](https://arxiv.org/abs/2004.03720).
By pre-training the model, it learns an inner representation of language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Perceiver model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but the model is intended to be fine-tuned on a labeled dataset. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in PyTorch:
```python
import torch
from transformers import PerceiverTokenizer, PerceiverForMaskedLM

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForMaskedLM.from_pretrained("deepmind/language-perceiver")

# run on GPU if available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)

text = "This is an incomplete sentence where some words are missing."

# prepare input
encoding = tokenizer(text, padding="max_length", return_tensors="pt")

# mask " missing.". Note that the model performs much better if the masked span starts with a space.
encoding.input_ids[0, 52:61] = tokenizer.mask_token_id
inputs, input_mask = encoding.input_ids.to(device), encoding.attention_mask.to(device)

# forward pass
outputs = model(inputs=inputs, attention_mask=input_mask)
logits = outputs.logits
masked_tokens_predictions = logits[0, 51:61].argmax(dim=-1)
print(tokenizer.decode(masked_tokens_predictions))  # should print " missing."
```
## Training data
This model was pretrained on a combination of [English Wikipedia](https://huggingface.co/datasets/wikipedia) and [C4](https://huggingface.co/datasets/c4). 70% of the training tokens were sampled from the C4 dataset and the remaining 30% from Wikipedia. The authors concatenate 10 documents before splitting into crops to reduce wasteful computation on padding tokens.
## Training procedure
### Preprocessing
Text preprocessing is trivial: it only involves encoding text into UTF-8 bytes, and padding them up to the same length (2048).
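For instance, a small sanity check with the byte-level `PerceiverTokenizer` (the exact special-token offsets are an implementation detail and may differ):

```python
from transformers import PerceiverTokenizer

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
enc = tokenizer("hello", padding="max_length")

print(len(enc["input_ids"]))  # 2048 -- every input is padded to the same length
# Apart from special tokens and padding, each id corresponds to one UTF-8 byte.
```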
### Pretraining
Hyperparameter details can be found in table 9 of the [paper](https://arxiv.org/abs/2107.14795).
## Evaluation results
This model is able to achieve an average score of 81.8 on GLUE. For more details, we refer to table 3 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2107-14795,
author = {Andrew Jaegle and
Sebastian Borgeaud and
Jean{-}Baptiste Alayrac and
Carl Doersch and
Catalin Ionescu and
David Ding and
Skanda Koppula and
Daniel Zoran and
Andrew Brock and
Evan Shelhamer and
Olivier J. H{\'{e}}naff and
Matthew M. Botvinick and
Andrew Zisserman and
Oriol Vinyals and
Jo{\~{a}}o Carreira},
title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&}
Outputs},
journal = {CoRR},
volume = {abs/2107.14795},
year = {2021},
url = {https://arxiv.org/abs/2107.14795},
eprinttype = {arXiv},
eprint = {2107.14795},
timestamp = {Tue, 03 Aug 2021 14:53:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"language": ["en"], "license": "apache-2.0", "datasets": ["wikipedia", "c4"], "inference": false}
|
deepmind/language-perceiver
| null |
[
"transformers",
"pytorch",
"perceiver",
"fill-mask",
"en",
"dataset:wikipedia",
"dataset:c4",
"arxiv:1810.04805",
"arxiv:2107.14795",
"arxiv:2004.03720",
"license:apache-2.0",
"autotrain_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1810.04805",
"2107.14795",
"2004.03720"
] |
[
"en"
] |
TAGS
#transformers #pytorch #perceiver #fill-mask #en #dataset-wikipedia #dataset-c4 #arxiv-1810.04805 #arxiv-2107.14795 #arxiv-2004.03720 #license-apache-2.0 #autotrain_compatible #has_space #region-us
|
# Perceiver IO for language
Perceiver IO model pre-trained on the Masked Language Modeling (MLM) task proposed in BERT using a large text corpus obtained by combining English Wikipedia and C4. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository.
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow the model to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For masked language modeling, the output is a tensor containing the prediction scores of the language modeling head, of shape (batch_size, seq_length, vocab_size).
<img src="URL alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors train the model directly on raw UTF-8 bytes, rather than on subwords as is done in models like BERT, RoBERTa and GPT-2. This has many benefits: one doesn't need to train a tokenizer before training the model, one doesn't need to maintain a (fixed) vocabulary file, and this also doesn't hurt model performance as shown by Bostrom et al., 2020.
By pre-training the model, it learns an inner representation of language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the Perceiver model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but the model is intended to be fine-tuned on a labeled dataset. See the model hub to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in PyTorch:
## Training data
This model was pretrained on a combination of English Wikipedia and C4. 70% of the training tokens were sampled from the C4 dataset and the remaining 30% from Wikipedia. The authors concatenate 10 documents before splitting into crops to reduce wasteful computation on padding tokens.
## Training procedure
### Preprocessing
Text preprocessing is trivial: it only involves encoding text into UTF-8 bytes, and padding them up to the same length (2048).
### Pretraining
Hyperparameter details can be found in table 9 of the paper.
## Evaluation results
This model is able to achieve an average score of 81.8 on GLUE. For more details, we refer to table 3 of the original paper.
### BibTeX entry and citation info
|
[
"# Perceiver IO for language\n\nPerceiver IO model pre-trained on the Masked Language Modeling (MLM) task proposed in BERT using a large text corpus obtained by combining English Wikipedia and C4. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository. \n\nDisclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nPerceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. \n\nTo decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For masked language modeling, the output is a tensor containing the prediction scores of the language modeling head, of shape (batch_size, seq_length, vocab_size).\n\n<img src=\"URL alt=\"drawing\" width=\"600\"/>\n\n<small> Perceiver IO architecture.</small>\n\nAs the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors train the model directly on raw UTF-8 bytes, rather than on subwords as is done in models like BERT, RoBERTa and GPT-2. This has many benefits: one doesn't need to train a tokenizer before training the model, one doesn't need to maintain a (fixed) vocabulary file, and this also doesn't hurt model performance as shown by Bostrom et al., 2020.\n\nBy pre-training the model, it learns an inner representation of language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the Perceiver model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for masked language modeling, but the model is intended to be fine-tuned on a labeled dataset. See the model hub to look for fine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model in PyTorch:",
"## Training data\n\nThis model was pretrained on a combination of English Wikipedia and C4. 70% of the training tokens were sampled from the C4 dataset and the remaining 30% from Wikipedia. The authors concatenate 10 documents before splitting into crops to reduce wasteful computation on padding tokens.",
"## Training procedure",
"### Preprocessing\n\nText preprocessing is trivial: it only involves encoding text into UTF-8 bytes, and padding them up to the same length (2048).",
"### Pretraining\n\nHyperparameter details can be found in table 9 of the paper.",
"## Evaluation results\n\nThis model is able to achieve an average score of 81.8 on GLUE. For more details, we refer to table 3 of the original paper.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #perceiver #fill-mask #en #dataset-wikipedia #dataset-c4 #arxiv-1810.04805 #arxiv-2107.14795 #arxiv-2004.03720 #license-apache-2.0 #autotrain_compatible #has_space #region-us \n",
"# Perceiver IO for language\n\nPerceiver IO model pre-trained on the Masked Language Modeling (MLM) task proposed in BERT using a large text corpus obtained by combining English Wikipedia and C4. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository. \n\nDisclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nPerceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. \n\nTo decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For masked language modeling, the output is a tensor containing the prediction scores of the language modeling head, of shape (batch_size, seq_length, vocab_size).\n\n<img src=\"URL alt=\"drawing\" width=\"600\"/>\n\n<small> Perceiver IO architecture.</small>\n\nAs the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors train the model directly on raw UTF-8 bytes, rather than on subwords as is done in models like BERT, RoBERTa and GPT-2. This has many benefits: one doesn't need to train a tokenizer before training the model, one doesn't need to maintain a (fixed) vocabulary file, and this also doesn't hurt model performance as shown by Bostrom et al., 2020.\n\nBy pre-training the model, it learns an inner representation of language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the Perceiver model as inputs.",
"## Intended uses & limitations\n\nYou can use the raw model for masked language modeling, but the model is intended to be fine-tuned on a labeled dataset. See the model hub to look for fine-tuned versions on a task that interests you.",
"### How to use\n\nHere is how to use this model in PyTorch:",
"## Training data\n\nThis model was pretrained on a combination of English Wikipedia and C4. 70% of the training tokens were sampled from the C4 dataset and the remaining 30% from Wikipedia. The authors concatenate 10 documents before splitting into crops to reduce wasteful computation on padding tokens.",
"## Training procedure",
"### Preprocessing\n\nText preprocessing is trivial: it only involves encoding text into UTF-8 bytes, and padding them up to the same length (2048).",
"### Pretraining\n\nHyperparameter details can be found in table 9 of the paper.",
"## Evaluation results\n\nThis model is able to achieve an average score of 81.8 on GLUE. For more details, we refer to table 3 of the original paper.",
"### BibTeX entry and citation info"
] |
null |
transformers
|
# Perceiver IO for multimodal autoencoding
Perceiver IO model trained on [Kinetics-700-2020](https://arxiv.org/abs/2010.10864) for auto-encoding videos that consist of images, audio and a class label. It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver).
The goal of multimodal autoencoding is to learn a model that can accurately reconstruct multimodal inputs in the presence of a bottleneck induced by an architecture.
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow the model to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For multimodal autoencoding, the output contains the reconstructions of the 3 modalities: images, audio and the class label.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model by padding the inputs (images, audio, class label) with modality-specific embeddings and serialize all of them into a 2D input array (i.e. concatenate along the time dimension). Decoding the final hidden states of the latents is done by using queries containing Fourier-based position embeddings (for video and audio) and modality embeddings.
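Conceptually, the serialization can be pictured as in the sketch below, where zero padding stands in for the learned modality-specific embeddings; the sizes are illustrative, not the exact preprocessing.

```python
import torch

# Pad each modality to a common channel width, then concatenate along the
# index dimension to obtain a single 2D input array. In the actual model,
# learned modality embeddings are used instead of zero padding.
def pad_channels(x: torch.Tensor, width: int) -> torch.Tensor:
    return torch.nn.functional.pad(x, (0, width - x.shape[-1]))

image = torch.randn(50176, 243)  # flattened 4x4 image patches (illustrative)
audio = torch.randn(1920, 16)    # patched raw audio (illustrative)
label = torch.randn(1, 700)      # one-hot class label

width = max(t.shape[-1] for t in (image, audio, label))
inputs_2d = torch.cat([pad_channels(t, width) for t in (image, audio, label)], dim=0)
print(inputs_2d.shape)           # torch.Size([52097, 700])
```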
## Intended uses & limitations
You can use the raw model for multimodal autoencoding. Note that by masking the class label during evaluation, the auto-encoding model becomes a video classifier.
See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other versions on a task that may interest you.
### How to use
We refer to the [tutorial notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Perceiver/Perceiver_for_Multimodal_Autoencoding.ipynb) regarding using the Perceiver for multimodal autoencoding.
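As a minimal starting point (loading only; building the `image`/`audio`/`label` input dict and chunked decoding are covered in the notebook):

```python
from transformers import PerceiverForMultimodalAutoencoding

model = PerceiverForMultimodalAutoencoding.from_pretrained("deepmind/multimodal-perceiver")

# Inputs are passed as a dict with "image", "audio" and "label" tensors.
# Because the decoder queries are large, reconstruction is typically done
# in chunks -- see the tutorial notebook for the full procedure.
```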
## Training data
This model was trained on [Kinetics-700-2020](https://arxiv.org/abs/2010.10864), a dataset consisting of videos that belong to one of 700 classes.
## Training procedure
### Preprocessing
The authors train on 16 frames at 224x224 resolution, preprocessed into 50k 4x4 patches as well as 30k raw audio samples, patched into a total of 1920 16-dimensional vectors and one 700-dimensional one-hot representation of the class label.
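As a quick sanity check of these numbers (assuming 4x4 spatial patches over 16 frames of 224x224 pixels, and 16-sample audio patches; the audio sample count is an assumption consistent with the 1920 vectors quoted above):

```python
frames, height, width, patch = 16, 224, 224, 4
print(frames * (height // patch) * (width // patch))  # 50176, i.e. ~50k patches

audio_samples = 30720  # ~30k raw audio samples (assumption)
print(audio_samples // 16)  # 1920 sixteen-dimensional vectors
```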
### Pretraining
Hyperparameter details can be found in Appendix F of the [paper](https://arxiv.org/abs/2107.14795).
## Evaluation results
For evaluation results, we refer to table 5 of the [paper](https://arxiv.org/abs/2107.14795).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2107-14795,
author = {Andrew Jaegle and
Sebastian Borgeaud and
Jean{-}Baptiste Alayrac and
Carl Doersch and
Catalin Ionescu and
David Ding and
Skanda Koppula and
Daniel Zoran and
Andrew Brock and
Evan Shelhamer and
Olivier J. H{\'{e}}naff and
Matthew M. Botvinick and
Andrew Zisserman and
Oriol Vinyals and
Jo{\~{a}}o Carreira},
title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&}
Outputs},
journal = {CoRR},
volume = {abs/2107.14795},
year = {2021},
url = {https://arxiv.org/abs/2107.14795},
eprinttype = {arXiv},
eprint = {2107.14795},
timestamp = {Tue, 03 Aug 2021 14:53:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"license": "apache-2.0", "datasets": ["kinetics-700-2020"]}
|
deepmind/multimodal-perceiver
| null |
[
"transformers",
"pytorch",
"perceiver",
"dataset:kinetics-700-2020",
"arxiv:2010.10864",
"arxiv:2107.14795",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.10864",
"2107.14795"
] |
[] |
TAGS
#transformers #pytorch #perceiver #dataset-kinetics-700-2020 #arxiv-2010.10864 #arxiv-2107.14795 #license-apache-2.0 #endpoints_compatible #region-us
|
# Perceiver IO for multimodal autoencoding
Perceiver IO model trained on Kinetics-700-2020 for auto-encoding videos that consist of images, audio and a class label. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository.
The goal of multimodal autoencoding is to learn a model that can accurately reconstruct multimodal inputs in the presence of a bottleneck induced by an architecture.
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow the model to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For multimodal autoencoding, the output contains the reconstructions of the 3 modalities: images, audio and the class label.
<img src="URL alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model by padding the inputs (images, audio, class label) with modality-specific embeddings and serialize all of them into a 2D input array (i.e. concatenate along the time dimension). Decoding the final hidden states of the latents is done by using queries containing Fourier-based position embeddings (for video and audio) and modality embeddings.
## Intended uses & limitations
You can use the raw model for multimodal autoencoding. Note that by masking the class label during evaluation, the auto-encoding model becomes a video classifier.
See the model hub to look for other versions on a task that may interest you.
### How to use
We refer to the tutorial notebook regarding using the Perceiver for multimodal autoencoding.
## Training data
This model was trained on Kinetics-700-2020, a dataset consisting of videos that belong to one of 700 classes.
## Training procedure
### Preprocessing
The authors train on 16 frames at 224x224 resolution, preprocessed into 50k 4x4 patches as well as 30k raw audio samples, patched into a total of 1920 16-dimensional vectors and one 700-dimensional one-hot representation of the class label.
### Pretraining
Hyperparameter details can be found in Appendix F of the paper.
## Evaluation results
For evaluation results, we refer to table 5 of the paper.
### BibTeX entry and citation info
|
[
"# Perceiver IO for multimodal autoencoding\n\nPerceiver IO model trained on Kinetics-700-2020 for auto-encoding videos that consist of images, audio and a class label. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository. \n\nThe goal of multimodal autoencoding is to learn a model that can accurately reconstruct multimodal inputs in the presence of a bottleneck induced by an architecture.\n\nDisclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nPerceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. \n\nTo decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For multimodal autoencoding, the output contains the reconstructions of the 3 modalities: images, audio and the class label.\n\n<img src=\"URL alt=\"drawing\" width=\"600\"/>\n\n<small> Perceiver IO architecture.</small>\n\nAs the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model by padding the inputs (images, audio, class label) with modality-specific embeddings and serialize all of them into a 2D input array (i.e. concatenate along the time dimension). Decoding the final hidden states of the latents is done by using queries containing Fourier-based position embeddings (for video and audio) and modality embeddings.",
"## Intended uses & limitations\n\nYou can use the raw model for multimodal autoencoding. Note that by masking the class label during evaluation, the auto-encoding model becomes a video classifier.\n\nSee the model hub to look for other versions on a task that may interest you.",
"### How to use\n\nWe refer to the tutorial notebook regarding using the Perceiver for multimodal autoencoding.",
"## Training data\n\nThis model was trained on Kinetics-700-200, a dataset consisting of videos that belong to one of 700 classes.",
"## Training procedure",
"### Preprocessing\n\nThe authors train on 16 frames at 224x224 resolution, preprocessed into 50k 4x4 patches as well as 30k raw audio samples, patched into a total of 1920 16-dimensional vectors and one 700-dimensional one-hot representation of the class label.",
"### Pretraining\n\nHyperparameter details can be found in Appendix F of the paper.",
"## Evaluation results\n\nFor evaluation results, we refer to table 5 of the paper.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #perceiver #dataset-kinetics-700-2020 #arxiv-2010.10864 #arxiv-2107.14795 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Perceiver IO for multimodal autoencoding\n\nPerceiver IO model trained on Kinetics-700-2020 for auto-encoding videos that consist of images, audio and a class label. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository. \n\nThe goal of multimodal autoencoding is to learn a model that can accurately reconstruct multimodal inputs in the presence of a bottleneck induced by an architecture.\n\nDisclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nPerceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. \n\nTo decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For multimodal autoencoding, the output contains the reconstructions of the 3 modalities: images, audio and the class label.\n\n<img src=\"URL alt=\"drawing\" width=\"600\"/>\n\n<small> Perceiver IO architecture.</small>\n\nAs the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model by padding the inputs (images, audio, class label) with modality-specific embeddings and serialize all of them into a 2D input array (i.e. concatenate along the time dimension). Decoding the final hidden states of the latents is done by using queries containing Fourier-based position embeddings (for video and audio) and modality embeddings.",
"## Intended uses & limitations\n\nYou can use the raw model for multimodal autoencoding. Note that by masking the class label during evaluation, the auto-encoding model becomes a video classifier.\n\nSee the model hub to look for other versions on a task that may interest you.",
"### How to use\n\nWe refer to the tutorial notebook regarding using the Perceiver for multimodal autoencoding.",
"## Training data\n\nThis model was trained on Kinetics-700-200, a dataset consisting of videos that belong to one of 700 classes.",
"## Training procedure",
"### Preprocessing\n\nThe authors train on 16 frames at 224x224 resolution, preprocessed into 50k 4x4 patches as well as 30k raw audio samples, patched into a total of 1920 16-dimensional vectors and one 700-dimensional one-hot representation of the class label.",
"### Pretraining\n\nHyperparameter details can be found in Appendix F of the paper.",
"## Evaluation results\n\nFor evaluation results, we refer to table 5 of the paper.",
"### BibTeX entry and citation info"
] |
null |
transformers
|
# Perceiver IO for optical flow
Perceiver IO model trained on [AutoFlow](https://autoflow-google.github.io/). It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver).
Optical flow is a decades-old open problem in computer vision. Given two images of the same scene (e.g. two consecutive frames of a video), the task is to estimate the 2D displacement for each pixel in the first image. This has many broader applications, such as navigation and visual odometry in robots, estimation of 3D geometry, and even to aid transfer of more complex, learned inference such as 3D human pose estimation from synthetic to real images.
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For optical flow, the output is a tensor containing the predicted flow of shape (batch_size, height, width, 2).
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model on raw pixel values, by concatenating a pair of images and extracting a 3x3 patch around each pixel.
The model obtains state-of-the-art results on important optical flow benchmarks, including [Sintel](http://sintel.is.tue.mpg.de/) and [KITTI](http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=flow).
## Intended uses & limitations
You can use the raw model for predicting optical flow between a pair of images. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other versions on a task that may interest you.
### How to use
We refer to the [tutorial notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Perceiver/Perceiver_for_Optical_Flow.ipynb) regarding using the Perceiver for optical flow.
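As a minimal sketch in PyTorch (with random values standing in for real frame patches; the expected input shape follows the preprocessing described below):

```python
from transformers import PerceiverForOpticalFlow
import torch

model = PerceiverForOpticalFlow.from_pretrained("deepmind/optical-flow-perceiver")

# dummy input: a pair of frames at 368x496, with a 3x3 patch of RGB values
# around each pixel (3 x 3 x 3 = 27 values per pixel), shaped as
# (batch_size, num_frames, values_per_pixel, height, width)
patches = torch.randn(1, 2, 27, 368, 496)

outputs = model(inputs=patches)
flow = outputs.logits  # predicted flow of shape (batch_size, height, width, 2)
```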
## Training data
This model was trained on [AutoFlow](https://autoflow-google.github.io/), a synthetic dataset consisting of 400,000 annotated image pairs.
## Training procedure
### Preprocessing
Frames are resized to a resolution of 368x496. The authors concatenate the frames along the channel dimension and extract a 3x3 patch around each pixel (leading to 3x3x3x2 = 54 values for each pixel).
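As an illustrative sketch (not the authors' exact pipeline), the per-frame patch extraction can be written with `torch.nn.functional.unfold`, producing the `(batch, 2, 27, height, width)` layout used in the usage sketch above:

```python
import torch
import torch.nn.functional as F

def extract_patches(frame):
    # frame: (3, H, W) RGB tensor; pad by 1 so every pixel keeps a full 3x3
    # neighborhood, then unfold into (1, 3*3*3, H*W) and reshape to (1, 27, H, W)
    _, height, width = frame.shape
    patches = F.unfold(frame.unsqueeze(0), kernel_size=3, padding=1)
    return patches.view(1, 27, height, width)

frame1 = torch.randn(3, 368, 496)
frame2 = torch.randn(3, 368, 496)

# stack the two frames along a new frame dimension: (1, 2, 27, 368, 496)
inputs = torch.stack([extract_patches(frame1), extract_patches(frame2)], dim=1)
```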
### Pretraining
Hyperparameter details can be found in Appendix E of the [paper](https://arxiv.org/abs/2107.14795).
## Evaluation results
The model achieves an average end-point error (EPE) of 1.81 on Sintel.clean, 2.42 on Sintel.final, and 4.98 on KITTI. For evaluation results, we refer to table 4 of the [paper](https://arxiv.org/abs/2107.14795).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2107-14795,
author = {Andrew Jaegle and
Sebastian Borgeaud and
Jean{-}Baptiste Alayrac and
Carl Doersch and
Catalin Ionescu and
David Ding and
Skanda Koppula and
Daniel Zoran and
Andrew Brock and
Evan Shelhamer and
Olivier J. H{\'{e}}naff and
Matthew M. Botvinick and
Andrew Zisserman and
Oriol Vinyals and
Jo{\~{a}}o Carreira},
title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&}
Outputs},
journal = {CoRR},
volume = {abs/2107.14795},
year = {2021},
url = {https://arxiv.org/abs/2107.14795},
eprinttype = {arXiv},
eprint = {2107.14795},
timestamp = {Tue, 03 Aug 2021 14:53:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"license": "apache-2.0", "datasets": ["autoflow"]}
|
deepmind/optical-flow-perceiver
| null |
[
"transformers",
"pytorch",
"perceiver",
"dataset:autoflow",
"arxiv:2107.14795",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2107.14795"
] |
[] |
TAGS
#transformers #pytorch #perceiver #dataset-autoflow #arxiv-2107.14795 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# Perceiver IO for optical flow
Perceiver IO model trained on AutoFlow. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository.
Optical flow is a decades-old open problem in computer vision. Given two images of the same scene (e.g. two consecutive frames of a video), the task is to estimate the 2D displacement for each pixel in the first image. This has many broader applications, such as navigation and visual odometry in robots, estimation of 3D geometry, and even to aid transfer of more complex, learned inference such as 3D human pose estimation from synthetic to real images.
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For optical flow, the output is a tensor containing the predicted flow of shape (batch_size, height, width, 2).
<img src="URL alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model on raw pixel values, by concatenating a pair of images and extracting a 3x3 patch around each pixel.
The model obtains state-of-the-art results on important optical flow benchmarks, including Sintel and KITTI.
## Intended uses & limitations
You can use the raw model for predicting optical flow between a pair of images. See the model hub to look for other versions on a task that may interest you.
### How to use
We refer to the tutorial notebook regarding using the Perceiver for optical flow.
## Training data
This model was trained on AutoFlow, a synthetic dataset consisting of 400,000 annotated image pairs.
## Training procedure
### Preprocessing
Frames are resized to a resolution of 368x496. The authors concatenate the frames along the channel dimension and extract a 3x3 patch around each pixel (leading to 3x3x3x2 = 54 values for each pixel).
### Pretraining
Hyperparameter details can be found in Appendix E of the paper.
## Evaluation results
The model achieves an average end-point error (EPE) of 1.81 on URL, 2.42 on URL, and 4.98 on KITTI. For evaluation results, we refer to table 4 of the paper.
### BibTeX entry and citation info
|
[
"# Perceiver IO for optical flow\n\nPerceiver IO model trained on AutoFlow. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository. \n\nOptical flow is a decades-old open problem in computer vision. Given two images of the same scene (e.g. two consecutive frames of a video), the task is to estimate the 2D displacement for each pixel in the first image. This has many broader applications, such as navigation and visual odometry in robots, estimation of 3D geometry, and even to aid transfer of more complex, learned inference such as 3D human pose estimation from synthetic to real images.\n\nDisclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nPerceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. \n\nTo decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For optical flow, the output is a tensor containing the predicted flow of shape (batch_size, height, width, 2).\n\n<img src=\"URL alt=\"drawing\" width=\"600\"/>\n\n<small> Perceiver IO architecture.</small>\n\nAs the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model on raw pixel values, by concatenating a pair of images and extracting a 3x3 patch around each pixel. \n\nThe model obtains state-of-the-art results on important optical flow benchmarks, including Sintel and KITTI.",
"## Intended uses & limitations\n\nYou can use the raw model for predicting optical flow between a pair of images. See the model hub to look for other versions on a task that may interest you.",
"### How to use\n\nWe refer to the tutorial notebook regarding using the Perceiver for optical flow.",
"## Training data\n\nThis model was trained on AutoFlow, a synthetic dataset consisting of 400,000 annotated image pairs.",
"## Training procedure",
"### Preprocessing\n\nFrames are resized to a resolution of 368x496. The authors concatenate the frames along the channel dimension and extract a 3x3 patch around each pixel (leading to 3x3x3x2 = 54 values for each pixel).",
"### Pretraining\n\nHyperparameter details can be found in Appendix E of the paper.",
"## Evaluation results\n\nThe model achieves a average end-point error (EPE) of 1.81 on URL, 2.42 on URL and 4.98 on KITTI. For evaluation results, we refer to table 4 of the paper.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #perceiver #dataset-autoflow #arxiv-2107.14795 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# Perceiver IO for optical flow\n\nPerceiver IO model trained on AutoFlow. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository. \n\nOptical flow is a decades-old open problem in computer vision. Given two images of the same scene (e.g. two consecutive frames of a video), the task is to estimate the 2D displacement for each pixel in the first image. This has many broader applications, such as navigation and visual odometry in robots, estimation of 3D geometry, and even to aid transfer of more complex, learned inference such as 3D human pose estimation from synthetic to real images.\n\nDisclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nPerceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. \n\nTo decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For optical flow, the output is a tensor containing the predicted flow of shape (batch_size, height, width, 2).\n\n<img src=\"URL alt=\"drawing\" width=\"600\"/>\n\n<small> Perceiver IO architecture.</small>\n\nAs the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model on raw pixel values, by concatenating a pair of images and extracting a 3x3 patch around each pixel. \n\nThe model obtains state-of-the-art results on important optical flow benchmarks, including Sintel and KITTI.",
"## Intended uses & limitations\n\nYou can use the raw model for predicting optical flow between a pair of images. See the model hub to look for other versions on a task that may interest you.",
"### How to use\n\nWe refer to the tutorial notebook regarding using the Perceiver for optical flow.",
"## Training data\n\nThis model was trained on AutoFlow, a synthetic dataset consisting of 400,000 annotated image pairs.",
"## Training procedure",
"### Preprocessing\n\nFrames are resized to a resolution of 368x496. The authors concatenate the frames along the channel dimension and extract a 3x3 patch around each pixel (leading to 3x3x3x2 = 54 values for each pixel).",
"### Pretraining\n\nHyperparameter details can be found in Appendix E of the paper.",
"## Evaluation results\n\nThe model achieves a average end-point error (EPE) of 1.81 on URL, 2.42 on URL and 4.98 on KITTI. For evaluation results, we refer to table 4 of the paper.",
"### BibTeX entry and citation info"
] |
image-classification
|
transformers
|
# Perceiver IO for vision (convolutional processing)
Perceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver).
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model employs a simple 2D conv+maxpool preprocessing network on the pixel values, before using the inputs for cross-attention with the latents.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.
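As a hedged sketch of that replacement (the 10-class target is a hypothetical downstream dataset): reloading the checkpoint with a different `num_labels` keeps the pre-trained encoder while randomly re-initializing the classification head:

```python
from transformers import PerceiverForImageClassificationConvProcessing

# hypothetical 10-class downstream task: the 1,000-class ImageNet decoder head
# is dropped and a fresh head is randomly initialized in its place
model = PerceiverForImageClassificationConvProcessing.from_pretrained(
    "deepmind/vision-perceiver-conv",
    num_labels=10,
    ignore_mismatched_sizes=True,
)
```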
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other fine-tuned versions on a task that may interest you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationConvProcessing
import requests
from PIL import Image
feature_extractor = PerceiverFeatureExtractor.from_pretrained("deepmind/vision-perceiver-conv")
model = PerceiverForImageClassificationConvProcessing.from_pretrained("deepmind/vision-perceiver-conv")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# prepare input
inputs = feature_extractor(image, return_tensors="pt").pixel_values
# forward pass
outputs = model(inputs)
logits = outputs.logits
print("Predicted class:", model.config.id2label[logits.argmax(-1).item()])
# should print: Predicted class: tabby, tabby cat
```
## Training data
This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million images and 1k classes.
## Training procedure
### Preprocessing
Images are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the [paper](https://arxiv.org/abs/2107.14795).
### Pretraining
Hyperparameter details can be found in Appendix H of the [paper](https://arxiv.org/abs/2107.14795).
## Evaluation results
This model is able to achieve a top-1 accuracy of 82.1 on ImageNet-1k.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2107-14795,
author = {Andrew Jaegle and
Sebastian Borgeaud and
Jean{-}Baptiste Alayrac and
Carl Doersch and
Catalin Ionescu and
David Ding and
Skanda Koppula and
Daniel Zoran and
Andrew Brock and
Evan Shelhamer and
Olivier J. H{\'{e}}naff and
Matthew M. Botvinick and
Andrew Zisserman and
Oriol Vinyals and
Jo{\~{a}}o Carreira},
title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&}
Outputs},
journal = {CoRR},
volume = {abs/2107.14795},
year = {2021},
url = {https://arxiv.org/abs/2107.14795},
eprinttype = {arXiv},
eprint = {2107.14795},
timestamp = {Tue, 03 Aug 2021 14:53:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"license": "apache-2.0", "datasets": ["imagenet"]}
|
deepmind/vision-perceiver-conv
| null |
[
"transformers",
"pytorch",
"perceiver",
"image-classification",
"dataset:imagenet",
"arxiv:2107.14795",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2107.14795"
] |
[] |
TAGS
#transformers #pytorch #perceiver #image-classification #dataset-imagenet #arxiv-2107.14795 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Perceiver IO for vision (convolutional processing)
Perceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository.
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).
<img src="URL alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model employs a simple 2D conv+maxpool preprocessing network on the pixel values, before using the inputs for cross-attention with the latents.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.
## Intended uses & limitations
You can use the raw model for image classification. See the model hub to look for other fine-tuned versions on a task that may interest you.
### How to use
Here is how to use this model in PyTorch:
## Training data
This model was pretrained on ImageNet, a dataset consisting of 14 million images and 1k classes.
## Training procedure
### Preprocessing
Images are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the paper.
### Pretraining
Hyperparameter details can be found in Appendix H of the paper.
## Evaluation results
This model is able to achieve a top-1 accuracy of 82.1 on ImageNet-1k.
### BibTeX entry and citation info
|
[
"# Perceiver IO for vision (convolutional processing)\n\nPerceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository. \n\nDisclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nPerceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. \n\nTo decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).\n\n<img src=\"URL alt=\"drawing\" width=\"600\"/>\n\n<small> Perceiver IO architecture.</small>\n\nAs the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model employs a simple 2D conv+maxpool preprocessing network on the pixel values, before using the inputs for cross-attention with the latents.\n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.",
"## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for other fine-tuned versions on a task that may interest you.",
"### How to use\n\nHere is how to use this model in PyTorch:",
"## Training data\n\nThis model was pretrained on ImageNet, a dataset consisting of 14 million images and 1k classes.",
"## Training procedure",
"### Preprocessing\n\nImages are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the paper.",
"### Pretraining\n\nHyperparameter details can be found in Appendix H of the paper.",
"## Evaluation results\n\nThis model is able to achieve a top-1 accuracy of 82.1 on ImageNet-1k.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #perceiver #image-classification #dataset-imagenet #arxiv-2107.14795 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Perceiver IO for vision (convolutional processing)\n\nPerceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository. \n\nDisclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nPerceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. \n\nTo decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).\n\n<img src=\"URL alt=\"drawing\" width=\"600\"/>\n\n<small> Perceiver IO architecture.</small>\n\nAs the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model employs a simple 2D conv+maxpool preprocessing network on the pixel values, before using the inputs for cross-attention with the latents.\n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.",
"## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for other fine-tuned versions on a task that may interest you.",
"### How to use\n\nHere is how to use this model in PyTorch:",
"## Training data\n\nThis model was pretrained on ImageNet, a dataset consisting of 14 million images and 1k classes.",
"## Training procedure",
"### Preprocessing\n\nImages are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the paper.",
"### Pretraining\n\nHyperparameter details can be found in Appendix H of the paper.",
"## Evaluation results\n\nThis model is able to achieve a top-1 accuracy of 82.1 on ImageNet-1k.",
"### BibTeX entry and citation info"
] |
image-classification
|
transformers
|
# Perceiver IO for vision (fixed Fourier position embeddings)
Perceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver).
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model only adds fixed Fourier 2D position embeddings to the pixel values.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other fine-tuned versions on a task that may interest you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import PerceiverImageProcessor, PerceiverForImageClassificationFourier
import requests
from PIL import Image
processor = PerceiverImageProcessor.from_pretrained("deepmind/vision-perceiver-fourier")
model = PerceiverForImageClassificationFourier.from_pretrained("deepmind/vision-perceiver-fourier")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# prepare input
inputs = processor(image, return_tensors="pt").pixel_values
# forward pass
outputs = model(inputs)
logits = outputs.logits
print("Predicted class:", model.config.id2label[logits.argmax(-1).item()])
# should print: Predicted class: tabby, tabby cat
```
## Training data
This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million images and 1k classes.
## Training procedure
### Preprocessing
Images are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the [paper](https://arxiv.org/abs/2107.14795).
### Pretraining
Hyperparameter details can be found in Appendix H of the [paper](https://arxiv.org/abs/2107.14795).
## Evaluation results
This model is able to achieve a top-1 accuracy of 79.0 on ImageNet-1k, and 84.5 when pre-trained on a large-scale dataset (JFT-300M, an internal dataset of Google).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2107-14795,
author = {Andrew Jaegle and
Sebastian Borgeaud and
Jean{-}Baptiste Alayrac and
Carl Doersch and
Catalin Ionescu and
David Ding and
Skanda Koppula and
Daniel Zoran and
Andrew Brock and
Evan Shelhamer and
Olivier J. H{\'{e}}naff and
Matthew M. Botvinick and
Andrew Zisserman and
Oriol Vinyals and
Jo{\~{a}}o Carreira},
title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&}
Outputs},
journal = {CoRR},
volume = {abs/2107.14795},
year = {2021},
url = {https://arxiv.org/abs/2107.14795},
eprinttype = {arXiv},
eprint = {2107.14795},
timestamp = {Tue, 03 Aug 2021 14:53:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"license": "apache-2.0", "datasets": ["imagenet"]}
|
deepmind/vision-perceiver-fourier
| null |
[
"transformers",
"pytorch",
"perceiver",
"image-classification",
"dataset:imagenet",
"arxiv:2107.14795",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2107.14795"
] |
[] |
TAGS
#transformers #pytorch #perceiver #image-classification #dataset-imagenet #arxiv-2107.14795 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Perceiver IO for vision (fixed Fourier position embeddings)
Perceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository.
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).
<img src="URL alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model only adds fixed Fourier 2D position embeddings to the pixel values.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.
## Intended uses & limitations
You can use the raw model for image classification. See the model hub to look for other fine-tuned versions on a task that may interest you.
### How to use
Here is how to use this model in PyTorch:
## Training data
This model was pretrained on ImageNet, a dataset consisting of 14 million images and 1k classes.
## Training procedure
### Preprocessing
Images are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the paper.
### Pretraining
Hyperparameter details can be found in Appendix H of the paper.
## Evaluation results
This model is able to achieve a top-1 accuracy of 79.0 on ImageNet-1k, and 84.5 when pre-trained on a large-scale dataset (JFT-300M, an internal dataset of Google).
### BibTeX entry and citation info
|
[
"# Perceiver IO for vision (fixed Fourier position embeddings)\n\nPerceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository. \n\nDisclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nPerceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. \n\nTo decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).\n\n<img src=\"URL alt=\"drawing\" width=\"600\"/>\n\n<small> Perceiver IO architecture.</small>\n\nAs the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model only adds fixed Fourier 2D position embeddings to the pixel values.\n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.",
"## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for other fine-tuned versions on a task that may interest you.",
"### How to use\n\nHere is how to use this model in PyTorch:",
"## Training data\n\nThis model was pretrained on ImageNet, a dataset consisting of 14 million images and 1k classes.",
"## Training procedure",
"### Preprocessing\n\nImages are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the paper.",
"### Pretraining\n\nHyperparameter details can be found in Appendix H of the paper.",
"## Evaluation results\n\nThis model is able to achieve a top-1 accuracy of 79.0 on ImageNet-1k, and 84.5 when pre-trained on a large-scale dataset (JFT-300M, an internal dataset of Google).",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #perceiver #image-classification #dataset-imagenet #arxiv-2107.14795 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Perceiver IO for vision (fixed Fourier position embeddings)\n\nPerceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository. \n\nDisclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nPerceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. \n\nTo decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).\n\n<img src=\"URL alt=\"drawing\" width=\"600\"/>\n\n<small> Perceiver IO architecture.</small>\n\nAs the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model only adds fixed Fourier 2D position embeddings to the pixel values.\n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.",
"## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for other fine-tuned versions on a task that may interest you.",
"### How to use\n\nHere is how to use this model in PyTorch:",
"## Training data\n\nThis model was pretrained on ImageNet, a dataset consisting of 14 million images and 1k classes.",
"## Training procedure",
"### Preprocessing\n\nImages are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the paper.",
"### Pretraining\n\nHyperparameter details can be found in Appendix H of the paper.",
"## Evaluation results\n\nThis model is able to achieve a top-1 accuracy of 79.0 on ImageNet-1k, and 84.5 when pre-trained on a large-scale dataset (JFT-300M, an internal dataset of Google).",
"### BibTeX entry and citation info"
] |
image-classification
|
transformers
|
# Perceiver IO for vision (learned position embeddings)
Perceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver).
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model only adds learned 1D position embeddings to the pixel values, hence it is given no privileged information about the 2D structure of images.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other fine-tuned versions on a task that may interest you.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationLearned
import requests
from PIL import Image
feature_extractor = PerceiverFeatureExtractor.from_pretrained("deepmind/vision-perceiver-learned")
model = PerceiverForImageClassificationLearned.from_pretrained("deepmind/vision-perceiver-learned")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# prepare input
encoding = feature_extractor(image, return_tensors="pt")
inputs = encoding.pixel_values
# forward pass
outputs = model(inputs)
logits = outputs.logits
print("Predicted class:", model.config.id2label[logits.argmax(-1).item()])
# should print: Predicted class: tabby, tabby cat
```
## Training data
This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million images and 1k classes.
## Training procedure
### Preprocessing
Images are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the [paper](https://arxiv.org/abs/2107.14795).
### Pretraining
Hyperparameter details can be found in Appendix H of the [paper](https://arxiv.org/abs/2107.14795).
## Evaluation results
This model is able to achieve a top-1 accuracy of 72.7 on ImageNet-1k, despite having no privileged information about the 2D structure of images.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2107-14795,
author = {Andrew Jaegle and
Sebastian Borgeaud and
Jean{-}Baptiste Alayrac and
Carl Doersch and
Catalin Ionescu and
David Ding and
Skanda Koppula and
Daniel Zoran and
Andrew Brock and
Evan Shelhamer and
Olivier J. H{\'{e}}naff and
Matthew M. Botvinick and
Andrew Zisserman and
Oriol Vinyals and
Jo{\~{a}}o Carreira},
title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&}
Outputs},
journal = {CoRR},
volume = {abs/2107.14795},
year = {2021},
url = {https://arxiv.org/abs/2107.14795},
eprinttype = {arXiv},
eprint = {2107.14795},
timestamp = {Tue, 03 Aug 2021 14:53:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
{"license": "apache-2.0", "datasets": ["imagenet"]}
|
deepmind/vision-perceiver-learned
| null |
[
"transformers",
"pytorch",
"perceiver",
"image-classification",
"dataset:imagenet",
"arxiv:2107.14795",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2107.14795"
] |
[] |
TAGS
#transformers #pytorch #perceiver #image-classification #dataset-imagenet #arxiv-2107.14795 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Perceiver IO for vision (learned position embeddings)
Perceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository.
Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.
To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).
<img src="URL alt="drawing" width="600"/>
<small> Perceiver IO architecture.</small>
As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model only adds learned 1D position embeddings to the pixel values, hence it is given no privileged information about the 2D structure of images.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.
## Intended uses & limitations
You can use the raw model for image classification. See the model hub to look for other fine-tuned versions on a task that may interest you.
### How to use
Here is how to use this model in PyTorch:
## Training data
This model was pretrained on ImageNet, a dataset consisting of 14 million images and 1k classes.
## Training procedure
### Preprocessing
Images are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the paper.
### Pretraining
Hyperparameter details can be found in Appendix H of the paper.
## Evaluation results
This model is able to achieve a top-1 accuracy of 72.7 on ImageNet-1k, despite having no privileged information about the 2D structure of images.
### BibTeX entry and citation info
|
[
"# Perceiver IO for vision (learned position embeddings)\n\nPerceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository. \n\nDisclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nPerceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. \n\nTo decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).\n\n<img src=\"URL alt=\"drawing\" width=\"600\"/>\n\n<small> Perceiver IO architecture.</small>\n\nAs the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model only adds learned 1D position embeddings to the pixel values, hence it is given no privileged information about the 2D structure of images.\n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.",
"## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for other fine-tuned versions on a task that may interest you.",
"### How to use\n\nHere is how to use this model in PyTorch:",
"## Training data\n\nThis model was pretrained on ImageNet, a dataset consisting of 14 million images and 1k classes.",
"## Training procedure",
"### Preprocessing\n\nImages are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the paper.",
"### Pretraining\n\nHyperparameter details can be found in Appendix H of the paper.",
"## Evaluation results\n\nThis model is able to achieve a top-1 accuracy of 72.7 on ImageNet-1k, despite having no privileged information about the 2D structure of images.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #perceiver #image-classification #dataset-imagenet #arxiv-2107.14795 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Perceiver IO for vision (learned position embeddings)\n\nPerceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper Perceiver IO: A General Architecture for Structured Inputs & Outputs by Jaegle et al. and first released in this repository. \n\nDisclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.",
"## Model description\n\nPerceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. \n\nTo decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels).\n\n<img src=\"URL alt=\"drawing\" width=\"600\"/>\n\n<small> Perceiver IO architecture.</small>\n\nAs the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model only adds learned 1D position embeddings to the pixel values, hence it is given no privileged information about the 2D structure of images.\n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder.",
"## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for other fine-tuned versions on a task that may interest you.",
"### How to use\n\nHere is how to use this model in PyTorch:",
"## Training data\n\nThis model was pretrained on ImageNet, a dataset consisting of 14 million images and 1k classes.",
"## Training procedure",
"### Preprocessing\n\nImages are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. Note that data augmentation was used during pre-training, as explained in Appendix H of the paper.",
"### Pretraining\n\nHyperparameter details can be found in Appendix H of the paper.",
"## Evaluation results\n\nThis model is able to achieve a top-1 accuracy of 72.7 on ImageNet-1k, despite having no privileged information about the 2D structure of images.",
"### BibTeX entry and citation info"
] |
text-generation
|
transformers
|
# Aeona | Chatbot

A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small).
Recommended to use along with an [AIML Chatbot](https://github.com/deepsarda/Aeona-Aiml) to reduce load, get better replies, and add a name and personality to your bot.
Using an AIML Chatbot will also allow you to hardcode some replies.
# AEONA
Aeona is a chatbot which hopes to be able to talk with humans as if it were a friend!
Its main target platform is Discord.
You can invite the bot [here](https://aeona.xyz).
To learn more about this project and chat with the AI, you can use this [website](https://aeona.xyz/).
Aeona works by using the context of the previous messages, guessing the personality of the human who is talking with it, and adapting its own personality to better talk with the user.
# Participate and Help the AI improve or just hang out at [hugging face discussions](https://huggingface.co/deepparag/Aeona/discussions)
## Goals
The goal is to create an AI which will work with AIML in order to create the most human-like AI.
#### Why not an AI on its own?
For an AI it is not (realistically) possible to learn about the user and store data on them, compared to AIML, which can even execute code!
The goal of the AI is to generate responses where the AIML fails.
Hence the goal becomes to make an AI which has a wide variety of knowledge, yet is as small as possible!
So we use 3 datasets:
1. [Movielines](https://www.kaggle.com/Cornell-University/movie-dialog-corpus) The movie lines promote longer and more thought-out responses, but they can be very random. About 200k lines!
2. [Discord Messages](https://www.kaggle.com/jef1056/discord-data) The messages cover a wide variety of topics and were filtered to remove spam, which makes the AI highly random but gives it a response to everyday questions! About 120 million messages!
3. A custom dataset scraped from my messages. These messages are very narrow; training on this dataset alone and sending it a random reply will make the AI say sorry loads of times!
## Training
The Discord Messages dataset simply dwarfs the other datasets; hence, the smaller datasets are repeated.
This leads to them covering each other's issues!
The AI has a context of 6 messages, which means it will reply considering up to the 4th message from the user.
[Example](https://huggingface.co/deepparag/Aeona-Beta/discussions/1)
## Tips for Hugging Face inference
I recommend sending the user input together with the
previous 3 AI and human responses.
Using more context than this will lead to useless responses; using less is alright, but the responses may be random. A minimal sketch of assembling that context is shown below.
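As a minimal sketch of that recommendation (the `history` turns below are made-up examples; only the `eos_token` separator mirrors the usage example further down):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepparag/Aeona")

# Made-up conversation history, oldest to newest, alternating human/AI turns.
history = [
    "hi there",
    "Hello! How are you?",
    "pretty good, want to play a game?",
]
user_input = "sure, you start!"

# Keep at most the last 6 turns and terminate each turn with eos_token,
# matching the generation loop in the Usage section.
context = "".join(turn + tokenizer.eos_token for turn in history[-6:])
input_ids = tokenizer.encode(context + user_input + tokenizer.eos_token,
                             return_tensors="pt")
```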
## Evaluation
Below is a comparison of Aeona vs. other baselines on the mixed dataset given above using automatic evaluation metrics.
| Model | Perplexity |
|---|---|
| Seq2seq Baseline [3] | 29.8 |
| Wolf et al. [5] | 16.3 |
| GPT-2 baseline | 99.5 |
| DialoGPT baseline | 56.6 |
| DialoGPT finetuned | 11.4 |
| PersonaGPT | 10.2 |
| **Aeona** | **7.9** |
## Usage
Example:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("deepparag/Aeona")
model = AutoModelWithLMHead.from_pretrained("deepparag/Aeona")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=4,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )
    # pretty print last output tokens from bot
    print("Aeona: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
{"license": "mit", "tags": ["conversational"], "datasets": ["blended_skill_talk"], "metrics": ["accuracy", "f1", "perplexity"], "thumbnail": "https://images-ext-2.discordapp.net/external/Wvtx1L98EbA7DR2lpZPbDxDuO4qmKt03nZygATZtXgk/%3Fsize%3D4096/https/cdn.discordapp.com/avatars/931226824753700934/338a9e413bbceaeb9095a29e97d4fac0.png", "pipeline_tag": "conversational"}
|
deepparag/Aeona
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"dataset:blended_skill_talk",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #dataset-blended_skill_talk #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
Aeona | Chatbot
===============
!Aeona Banner
A generative AI made using microsoft/DialoGPT-small.
Recommended to use along with an AIML Chatbot to reduce load, get better replies, and add a name and personality to your bot.
Using an AIML Chatbot will also allow you to hardcode some replies.
AEONA
=====
Aeona is a chatbot which hopes to be able to talk with humans as if it were a friend!
Its main target platform is Discord.
You can invite the bot here.
To learn more about this project and chat with the AI, you can use this website.
Aeona works by using the context of the previous messages, guessing the personality of the human who is talking with it, and adapting its own personality to better talk with the user.
Participate and Help the AI improve or just hang out at hugging face discussions
================================================================================
Goals
-----
The goal is to create an AI which will work with AIML in order to create the most human-like AI.
#### Why not an AI on its own?
For an AI it is not (realistically) possible to learn about the user and store data on them, compared to AIML, which can even execute code!
The goal of the AI is to generate responses where the AIML fails.
Hence the goal becomes to make an AI which has a wide variety of knowledge, yet is as small as possible!
So we use 3 datasets:
1. Movielines The movie lines promote longer and more thought-out responses, but they can be very random. About 200k lines!
2. Discord Messages The messages cover a wide variety of topics and were filtered to remove spam, which makes the AI highly random but gives it a response to everyday questions! About 120 million messages!
3. A custom dataset scraped from my messages. These messages are very narrow; training on this dataset alone and sending it a random reply will make the AI say sorry loads of times!
Training
--------
The Discord Messages dataset simply dwarfs the other datasets; hence, the smaller datasets are repeated.
This leads to them covering each other's issues!
The AI has a context of 6 messages, which means it will reply considering up to the 4th message from the user.
Example
Tips for Hugging Face inference
-------------------------------
```
I recommend sending the user input together with the
previous 3 AI and human responses.
Using more context than this will lead to useless responses; using less is alright, but the responses may be random.
```
Evaluation
----------
Below is a comparison of Aeona vs. other baselines on the mixed dataset given above using automatic evaluation metrics.
Usage
-----
Example:
|
[
"#### Why not an AI on its own?\n\n\nFor AI it is not possible (realistically) to learn about the user and store data on them, when compared to an AIML which can even execute code!\nThe goal of the AI is to generate responses where the AIML fails.\n\n\nHence the goals becomes to make an AI which has a wide variety of knowledge, yet be as small as possible!\nSo we use 3 dataset:-\n\n\n1. Movielines The movie lines promote longer and more thought out responses but it can be very random. About 200k lines!\n2. Discord Messages The messages are on a wide variety of topics filtered and removed spam which makes the AI highly random but gives it a very random response to every days questions! about 120 million messages!\n3. Custom dataset scrapped from my messages, These messages are very narrow teaching this dataset and sending a random reply will make the AI say sorry loads of time!\n\n\nTraining\n--------\n\n\nThe Discord Messages Dataset simply dwarfs the other datasets, Hence the data sets are repeated.\nThis leads to them covering each others issues!\n\n\nThe AI has a context of 6 messages which means it will reply until the 4th message from user.\nExample\n\n\nTips for Hugging Face interference\n----------------------------------\n\n\n\n```\nI recommend send the user input,\nprevious 3 AI and human responses.\n\nUsing more context than this will lead to useless responses but using less is alright but the responses may be random. \n\n```\n\nEvaluation\n----------\n\n\nBelow is a comparison of Aeona vs. other baselines on the mixed dataset given above using automatic evaluation metrics.\n\n\n\nUsage\n-----\n\n\nExample:"
] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #dataset-blended_skill_talk #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"#### Why not an AI on its own?\n\n\nFor AI it is not possible (realistically) to learn about the user and store data on them, when compared to an AIML which can even execute code!\nThe goal of the AI is to generate responses where the AIML fails.\n\n\nHence the goals becomes to make an AI which has a wide variety of knowledge, yet be as small as possible!\nSo we use 3 dataset:-\n\n\n1. Movielines The movie lines promote longer and more thought out responses but it can be very random. About 200k lines!\n2. Discord Messages The messages are on a wide variety of topics filtered and removed spam which makes the AI highly random but gives it a very random response to every days questions! about 120 million messages!\n3. Custom dataset scrapped from my messages, These messages are very narrow teaching this dataset and sending a random reply will make the AI say sorry loads of time!\n\n\nTraining\n--------\n\n\nThe Discord Messages Dataset simply dwarfs the other datasets, Hence the data sets are repeated.\nThis leads to them covering each others issues!\n\n\nThe AI has a context of 6 messages which means it will reply until the 4th message from user.\nExample\n\n\nTips for Hugging Face interference\n----------------------------------\n\n\n\n```\nI recommend send the user input,\nprevious 3 AI and human responses.\n\nUsing more context than this will lead to useless responses but using less is alright but the responses may be random. \n\n```\n\nEvaluation\n----------\n\n\nBelow is a comparison of Aeona vs. other baselines on the mixed dataset given above using automatic evaluation metrics.\n\n\n\nUsage\n-----\n\n\nExample:"
] |
text-generation
|
transformers
|
A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small).
Trained on:
https://www.kaggle.com/Cornell-University/movie-dialog-corpus
https://www.kaggle.com/jef1056/discord-data
Important:
The AI can be a bit weird at times as it is still undergoing training!
At times it sends stuff using `:<random_weird_words>:`, as these are Discord emotes.
It also sends random @RandomName mentions, as it is trying to ping people.
This works well on Discord, but on the web not so much; it is easy enough to remove such stuff using [re.sub](https://docs.python.org/3/library/re.html#re.sub), as in the sketch below.
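As a minimal cleanup sketch (the emote and mention regexes are assumptions inferred from the description above):
```python
import re

def clean_reply(text: str) -> str:
    # Strip ":emote_name:" style Discord emotes and "@Name" pings;
    # both patterns are assumed from the output format described above.
    text = re.sub(r":[A-Za-z0-9_]+:", "", text)   # e.g. ":pepe_laugh:"
    text = re.sub(r"@\w+", "", text)              # e.g. "@RandomName"
    return re.sub(r"\s{2,}", " ", text).strip()   # collapse leftover spaces

print(clean_reply("hello @RandomName :pepe_laugh: how are you?"))
# -> "hello how are you?"
```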
Issues:
The AI, like all conversational AIs, lacks a character; it changes its name way too often. This can be solved using an AIML chatbot to give it a stable character!
[Live Demo](https://dumbot-331213.uc.r.appspot.com/)
Example:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("deepparag/DumBot")
model = AutoModelWithLMHead.from_pretrained("deepparag/DumBot")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=4,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )
    # pretty print last output tokens from bot
    print("DumBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://cdn.discordapp.com/app-icons/870239976690970625/c02cae78ae105f07969cfd8f8ea3d0a0.png"}
|
deepparag/DumBot-Beta
| null |
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt_neo #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
A generative AI made using microsoft/DialoGPT-small.
Trained on:
URL
URL
Important:
The AI can be a bit weird at times as it is still undergoing training!
At times it sends stuff using `:<random_weird_words>:`, as these are Discord emotes.
It also sends random @RandomName mentions, as it is trying to ping people.
This works well on Discord, but on the web not so much; it is easy enough to remove such stuff using URL
Issues:
The AI, like all conversational AIs, lacks a character; it changes its name way too often. This can be solved using an AIML chatbot to give it a stable character!
Live Demo
Example:
|
[] |
[
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# THIS AI IS OUTDATED. See [Aeona](https://huggingface.co/deepparag/Aeona)
A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small).
Trained on:
https://www.kaggle.com/Cornell-University/movie-dialog-corpus
https://www.kaggle.com/jef1056/discord-data
[Live Demo](https://dumbot-331213.uc.r.appspot.com/)
Example:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("deepparag/DumBot")
model = AutoModelWithLMHead.from_pretrained("deepparag/DumBot")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=4,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )
    # pretty print last output tokens from bot
    print("DumBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://cdn.discordapp.com/app-icons/870239976690970625/c02cae78ae105f07969cfd8f8ea3d0a0.png"}
|
deepparag/DumBot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# THIS AI IS OUTDATED. See Aeona
A generative AI made using microsoft/DialoGPT-small.
Trained on:
URL
URL
Live Demo
Example:
|
[
"# THIS AI IS OUTDATED. See Aeona\nAn generative AI made using microsoft/DialoGPT-small.\n\nTrained on:\n\n URL\n\n URL\n\n\n \nLive Demo\n \nExample:"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# THIS AI IS OUTDATED. See Aeona\nAn generative AI made using microsoft/DialoGPT-small.\n\nTrained on:\n\n URL\n\n URL\n\n\n \nLive Demo\n \nExample:"
] |
question-answering
|
transformers
|
This is a BERT base cased model trained on SQuAD v2
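As a minimal usage sketch with the transformers question-answering pipeline (the question/context pair below is an illustrative assumption, not from the card):
```python
from transformers import pipeline

# Load the SQuAD v2 reader in a question-answering pipeline.
qa = pipeline("question-answering", model="deepset/bert-base-cased-squad2")

# Illustrative inputs; any question/context pair works.
result = qa(question="Who wrote Faust?",
            context="Faust is a tragic play written by Johann Wolfgang von Goethe.")
print(result["answer"])
```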
|
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/bert-base-cased-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 71.1517, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGZlNmQ1YzIzMWUzNTg4YmI4NWVhYThiMzE2ZGZmNWUzNDM3NWI0ZGJkNzliNGUxNTY2MDA5MWVkYjAwYWZiMCIsInZlcnNpb24iOjF9.iUvVdy5c4hoXkwlThJankQqG9QXzNilvfF1_4P0oL8X-jkY5Q6YSsZx6G6cpgXogqFpn7JlE_lP6_OT0VIamCg"}, {"type": "f1", "value": 74.6714, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWE5OGNjODhmY2Y0NWIyZDIzMmQ2NmRjZGYyYTYzOWMxZDUzYzg4YjBhNTRiNTY4NTc0M2IxNjI5NWI5ZDM0NCIsInZlcnNpb24iOjF9.IqU9rbzUcKmDEoLkwCUZTKSH0ZFhtqgnhOaEDKKnaRMGBJLj98D5V4VirYT6jLh8FlR0FiwvMTMjReBcfTisAQ"}]}]}]}
|
deepset/bert-base-cased-squad2
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #safetensors #bert #question-answering #en #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us
|
This is a BERT base cased model trained on SQuAD v2
|
[] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #bert #question-answering #en #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us \n"
] |
text-classification
|
transformers
|
This is a German BERT v1 (https://deepset.ai/german-bert) trained to do hate speech detection on the GermEval18Coarse dataset
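As a minimal inference sketch with the transformers text-classification pipeline (the example sentence is made up; the returned labels come from the model's own config, e.g. the GermEval18 coarse classes OFFENSE/OTHER):
```python
from transformers import pipeline

# Load the German hate speech classifier.
clf = pipeline("text-classification",
               model="deepset/bert-base-german-cased-hatespeech-GermEval18Coarse")

# Illustrative input; the pipeline returns a label and a confidence score.
print(clf("Das ist ein ganz normaler Satz."))
```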
|
{"license": "cc-by-4.0"}
|
deepset/bert-base-german-cased-hatespeech-GermEval18Coarse
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"text-classification",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #safetensors #bert #text-classification #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
This is a German BERT v1 (URL trained to do hate speech detection on the GermEval18Coarse dataset
|
[] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #bert #text-classification #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
<a href="https://huggingface.co/exbert/?model=bert-base-german-cased">
\t<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
# German BERT with old vocabulary
For details see the related [FARM issue](https://github.com/deepset-ai/FARM/issues/60).
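As a minimal fill-mask sketch for this checkpoint (the example sentence is made up; `[MASK]` is the standard BERT mask token):
```python
from transformers import pipeline

# Load the old-vocabulary German BERT as a masked-language-model pipeline.
fill = pipeline("fill-mask", model="deepset/bert-base-german-cased-oldvocab")

# Illustrative input; prints the top completions for the masked position.
print(fill("Die Hauptstadt von Deutschland ist [MASK]."))
```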
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "de", "license": "mit", "tags": ["exbert"], "thumbnail": "https://static.tildacdn.com/tild6438-3730-4164-b266-613634323466/german_bert.png"}
|
deepset/bert-base-german-cased-oldvocab
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"exbert",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #exbert #de #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
<a href="URL
\t<img width="300px" src="URL
</a>
# German BERT with old vocabulary
For details see the related FARM issue.
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Slack | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# German BERT with old vocabulary\nFor details see the related FARM issue.",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Slack | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #exbert #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# German BERT with old vocabulary\nFor details see the related FARM issue.",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Slack | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|
# bert-base-uncased for QA
## Overview
**Language model:** bert-base-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Infrastructure**: 1x Tesla v100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "bert-base-uncased"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Performance
```
"exact": 73.67977764676156
"f1": 77.87647139308865
```
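## Usage
As a minimal sketch, mirroring the usage pattern of the sibling deepset cards (the question/context pair is illustrative):
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/bert-base-uncased-squad2"

# a) Get predictions
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name)
QA_input = {
    "question": "Why is model conversion important?",
    "context": "The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.",
}
print(nlp(QA_input))

# b) Load model & tokenizer directly
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```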
## Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Michel Bartels: `michel.bartels [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/bert-base-uncased-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 75.6529, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTY2YmQ0ZDFjMjRlZWRiZWQ2YWQ4MTM0ODkyYTQ0NmYwMzBlNWViZWQ0ODFhMGJmMmY4ZGYwOTQyMDAyZGNjYyIsInZlcnNpb24iOjF9.UyqonQTsCB0BW86LfPy17kLt3a4r3wMeh04MDam5t_UhElp6N02YpiKOqcb1ethNHjAR0WGyxrcV3TI4d-wFAQ"}, {"type": "f1", "value": 78.6191, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWRkZWVjMDU2YTcxYWVkZTU1YmUzY2FkNWI5NDJkM2YwMjFmMmE0Njc3MjI5N2Q0NDdhZDNkZWNjMWE5YTRmZiIsInZlcnNpb24iOjF9.ol0Zacd9ZryXazXjgVssGFYG4s5FzbhGGaj1ZEDLVN2ziyzx23bo4GH9PSuGTFxRK2BO5_dxvDupLRqJOF59Bg"}]}]}]}
|
deepset/bert-base-uncased-squad2
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #bert #question-answering #en #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us
|
# bert-base-uncased for QA
## Overview
Language model: bert-base-uncased
Language: English
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0
Infrastructure: 1x Tesla v100
## Hyperparameters
## Performance
## Authors
- Timo Möller: 'timo.moeller [at] URL'
- Julian Risch: 'URL [at] URL'
- Malte Pietsch: 'malte.pietsch [at] URL'
- Michel Bartels: 'michel.bartels [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# bert-base-uncased for QA",
"## Overview\nLanguage model: bert-base-uncased \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nInfrastructure: 1x Tesla v100",
"## Hyperparameters",
"## Performance",
"## Authors\n- Timo Möller: 'timo.moeller [at] URL'\n- Julian Risch: 'URL [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'\n- Michel Bartels: 'michel.bartels [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #question-answering #en #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# bert-base-uncased for QA",
"## Overview\nLanguage model: bert-base-uncased \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nInfrastructure: 1x Tesla v100",
"## Hyperparameters",
"## Performance",
"## Authors\n- Timo Möller: 'timo.moeller [at] URL'\n- Julian Risch: 'URL [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'\n- Michel Bartels: 'michel.bartels [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|
# bert-large-uncased-whole-word-masking-squad2
This is a bert-large model, fine-tuned using the SQuAD2.0 dataset for the task of question answering.
## Overview
**Language model:** bert-large
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2",tokenizer="deepset/bert-large-uncased-whole-word-masking-squad2")
```
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/bert-large-uncased-whole-word-masking-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/bert-large-uncased-whole-word-masking-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 80.8846, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2E5ZGNkY2ExZWViZGEwNWE3OGRmMWM2ZmE4ZDU4ZDQ1OGM3ZWE0NTVmZjFmYmZjZmJmNjJmYTc3NTM3OTk3OSIsInZlcnNpb24iOjF9.aSblF4ywh1fnHHrN6UGL392R5KLaH3FCKQlpiXo_EdQ4XXEAENUCjYm9HWDiFsgfSENL35GkbSyz_GAhnefsAQ"}, {"type": "f1", "value": 83.8765, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGFlNmEzMTk2NjRkNTI3ZTk3ZTU1NWNlYzIyN2E0ZDFlNDA2ZjYwZWJlNThkMmRmMmE0YzcwYjIyZDM5NmRiMCIsInZlcnNpb24iOjF9.-rc2_Bsp_B26-o12MFYuAU0Ad2Hg9PDx7Preuk27WlhYJDeKeEr32CW8LLANQABR3Mhw2x8uTYkEUrSDMxxLBw"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 85.904, "name": "Exact Match"}, {"type": "f1", "value": 92.586, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "adversarial_qa", "type": "adversarial_qa", "config": "adversarialQA", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 28.233, "name": "Exact Match"}, {"type": "f1", "value": 41.17, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_adversarial", "type": "squad_adversarial", "config": "AddOneSent", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 78.064, "name": "Exact Match"}, {"type": "f1", "value": 83.591, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts amazon", "type": "squadshifts", "config": "amazon", "split": "test"}, "metrics": [{"type": "exact_match", "value": 65.615, "name": "Exact Match"}, {"type": "f1", "value": 80.733, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts new_wiki", "type": "squadshifts", "config": "new_wiki", "split": "test"}, "metrics": [{"type": "exact_match", "value": 81.57, "name": "Exact Match"}, {"type": "f1", "value": 91.199, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts nyt", "type": "squadshifts", "config": "nyt", "split": "test"}, "metrics": [{"type": "exact_match", "value": 83.279, "name": "Exact Match"}, {"type": "f1", "value": 91.09, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts reddit", "type": "squadshifts", "config": "reddit", "split": "test"}, "metrics": [{"type": "exact_match", "value": 69.305, "name": "Exact Match"}, {"type": "f1", "value": 82.405, "name": "F1"}]}]}]}
|
deepset/bert-large-uncased-whole-word-masking-squad2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #safetensors #bert #question-answering #en #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us
|
# bert-large-uncased-whole-word-masking-squad2
This is a bert-large model, fine-tuned using the SQuAD2.0 dataset for the task of question answering.
## Overview
Language model: bert-large
Language: English
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0
Code: See an example QA pipeline on Haystack
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:
### In Transformers
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
</div>
deepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- Distilled roberta-base-squad2 (aka "tinyroberta-squad2")
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="URL repo and <strong><a href="URL">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="URL community open to everyone!</a></strong></p>
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# bert-large-uncased-whole-word-masking-squad2\n\nThis is a berta-large model, fine-tuned using the SQuAD2.0 dataset for the task of question answering.",
"## Overview\nLanguage model: bert-large \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nCode: See an example QA pipeline on Haystack",
"## Usage",
"### In Haystack\nHaystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:",
"### In Transformers",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #question-answering #en #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# bert-large-uncased-whole-word-masking-squad2\n\nThis is a berta-large model, fine-tuned using the SQuAD2.0 dataset for the task of question answering.",
"## Overview\nLanguage model: bert-large \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nCode: See an example QA pipeline on Haystack",
"## Usage",
"### In Haystack\nHaystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:",
"### In Transformers",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|
## Overview
**Language model:** deepset/bert-medium-squad2-distilled
**Language:** English
**Training data:** SQuAD 2.0 training set
**Eval data:** SQuAD 2.0 dev set
**Infrastructure**: 1x V100 GPU
**Published**: Apr 21st, 2021
## Details
- haystack's distillation feature was used for training. deepset/bert-large-uncased-whole-word-masking-squad2 was used as the teacher model.
## Hyperparameters
```
batch_size = 6
n_epochs = 2
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 5
distillation_loss_weight = 1
```
## Performance
```
"exact": 68.6431398972458
"f1": 72.7637083790805
```
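## Usage
As a minimal sketch for loading the distilled reader in Haystack, mirroring the FARMReader pattern from the sibling deepset cards (the import path assumes Haystack 1.x):
```python
from haystack.nodes import FARMReader

# Load the distilled reader for QA at scale over many documents.
reader = FARMReader(model_name_or_path="deepset/bert-medium-squad2-distilled")
```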
## Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Michel Bartels: `michel.bartels [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["squad_v2"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg", "model-index": [{"name": "deepset/bert-medium-squad2-distilled", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 69.8231, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmE4MGRkZTVjNmViMGNjYjVhY2E1NzcyOGQ1OWE1MWMzMjY5NWU0MmU0Y2I4OWU4YTU5OWQ5YTI2NWE1NmM0ZSIsInZlcnNpb24iOjF9.tnCJvWzMctTwiQu5yig_owO2ZI1t1MZz1AN2lQy4COAGOzuMovD-74acQvMbxJQoRfNNkIetz2hqYivf1lJKDw"}, {"type": "f1", "value": 72.9232, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTMwNzk0ZDRjNGUyMjQyNzc1NzczZmUwMTU2MTM5MGQ3M2NhODlmOTU4ZDI0YjhlNTVjNDA1MGEwM2M1MzIyZSIsInZlcnNpb24iOjF9.eElGmTOXH_qHTNaPwZ-dUJfVz9VMvCutDCof_6UG_625MwctT_j7iVkWcGwed4tUnunuq1BPm-0iRh1RuuB-AQ"}]}]}]}
|
deepset/bert-medium-squad2-distilled
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"exbert",
"en",
"dataset:squad_v2",
"license:mit",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #bert #question-answering #exbert #en #dataset-squad_v2 #license-mit #model-index #endpoints_compatible #has_space #region-us
|
## Overview
Language model: deepset/bert-medium-squad2-distilled
Language: English
Training data: SQuAD 2.0 training set
Eval data: SQuAD 2.0 dev set
Infrastructure: 1x V100 GPU
Published: Apr 21st, 2021
## Details
- haystack's distillation feature was used for training. deepset/bert-large-uncased-whole-word-masking-squad2 was used as the teacher model.
## Hyperparameters
## Performance
## Authors
- Timo Möller: 'timo.moeller [at] URL'
- Julian Risch: 'URL [at] URL'
- Malte Pietsch: 'malte.pietsch [at] URL'
- Michel Bartels: 'michel.bartels [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"## Overview\nLanguage model: deepset/roberta-base-squad2-distilled \nLanguage: English \nTraining data: SQuAD 2.0 training set \nEval data: SQuAD 2.0 dev set \nInfrastructure: 1x V100 GPU \nPublished: Apr 21st, 2021",
"## Details\n- haystack's distillation feature was used for training. deepset/bert-large-uncased-whole-word-masking-squad2 was used as the teacher model.",
"## Hyperparameters",
"## Performance",
"## Authors\n- Timo Möller: 'timo.moeller [at] URL'\n- Julian Risch: 'URL [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'\n- Michel Bartels: 'michel.bartels [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #question-answering #exbert #en #dataset-squad_v2 #license-mit #model-index #endpoints_compatible #has_space #region-us \n",
"## Overview\nLanguage model: deepset/roberta-base-squad2-distilled \nLanguage: English \nTraining data: SQuAD 2.0 training set \nEval data: SQuAD 2.0 dev set \nInfrastructure: 1x V100 GPU \nPublished: Apr 21st, 2021",
"## Details\n- haystack's distillation feature was used for training. deepset/bert-large-uncased-whole-word-masking-squad2 was used as the teacher model.",
"## Hyperparameters",
"## Performance",
"## Authors\n- Timo Möller: 'timo.moeller [at] URL'\n- Julian Risch: 'URL [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'\n- Michel Bartels: 'michel.bartels [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|
# electra-base for QA
## Overview
**Language model:** electra-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py)
**Infrastructure**: 1x Tesla v100
## Hyperparameters
```
seed=42
batch_size = 32
n_epochs = 5
base_LM_model = "google/electra-base-discriminator"
max_seq_len = 384
learning_rate = 1e-4
lr_schedule = LinearWarmup
warmup_proportion = 0.1
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 77.30144024256717,
"f1": 81.35438272008543,
"total": 11873,
"HasAns_exact": 74.34210526315789,
"HasAns_f1": 82.45961302894314,
"HasAns_total": 5928,
"NoAns_exact": 80.25231286795626,
"NoAns_f1": 80.25231286795626,
"NoAns_total": 5945
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/electra-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer
model_name = "deepset/electra-base-squad2"
# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
"text": "The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
### In haystack
For doing QA at scale (i.e. many docs instead of a single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/electra-base-squad2")
# or
reader = TransformersReader(model="deepset/electra-base-squad2",tokenizer="deepset/electra-base-squad2")
```
## Authors
Vaishali Pal `vaishali.pal [at] deepset.ai`
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/electra-base-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 77.6074, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzE5NTRmMmUwYTk1MTI0NjM0ZmQwNDFmM2Y4Mjk4ZWYxOGVmOWI3ZGFiNWM4OTUxZDQ2ZjdmNmU3OTk5ZjRjYyIsInZlcnNpb24iOjF9.0VZRewdiovE4z3K5box5R0oTT7etpmd0BX44FJBLRFfot-uJ915b-bceSv3luJQ7ENPjaYSa7o7jcHlDzn3oAw"}, {"type": "f1", "value": 81.7181, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2VlMzM0Y2UzYjhhNTJhMTFiYWZmMDNjNjRiZDgwYzc5NWE3N2M4ZGFlYWQ0ZjVkZTE2MDU0YmMzMDc1MTY5MCIsInZlcnNpb24iOjF9.jRV58UxOM7CJJSsmxJuZvlt00jMGA1thp4aqtcFi1C8qViQ1kW7NYz8rg1gNTDZNez2UwPS1NgN_HnnwBHPbCQ"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 80.407, "name": "Exact Match"}, {"type": "f1", "value": 88.942, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "adversarial_qa", "type": "adversarial_qa", "config": "adversarialQA", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 23.533, "name": "Exact Match"}, {"type": "f1", "value": 36.521, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_adversarial", "type": "squad_adversarial", "config": "AddOneSent", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 73.867, "name": "Exact Match"}, {"type": "f1", "value": 81.381, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts amazon", "type": "squadshifts", "config": "amazon", "split": "test"}, "metrics": [{"type": "exact_match", "value": 64.512, "name": "Exact Match"}, {"type": "f1", "value": 80.166, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts new_wiki", "type": "squadshifts", "config": "new_wiki", "split": "test"}, "metrics": [{"type": "exact_match", "value": 76.568, "name": "Exact Match"}, {"type": "f1", "value": 87.706, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts nyt", "type": "squadshifts", "config": "nyt", "split": "test"}, "metrics": [{"type": "exact_match", "value": 77.884, "name": "Exact Match"}, {"type": "f1", "value": 87.858, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts reddit", "type": "squadshifts", "config": "reddit", "split": "test"}, "metrics": [{"type": "exact_match", "value": 64.399, "name": "Exact Match"}, {"type": "f1", "value": 78.096, "name": "F1"}]}]}]}
|
deepset/electra-base-squad2
| null |
[
"transformers",
"pytorch",
"safetensors",
"electra",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #electra #question-answering #en #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us
|
# electra-base for QA
## Overview
Language model: electra-base
Language: English
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0
Code: See example in FARM
Infrastructure: 1x Tesla v100
## Hyperparameters
## Performance
Evaluated on the SQuAD 2.0 dev set with the official eval script.
## Usage
### In Transformers
### In FARM
### In haystack
For doing QA at scale (i.e. many docs instead of a single paragraph), you can load the model also in haystack:
## Authors
Vaishali Pal 'URL [at] URL'
Branden Chan: 'URL [at] URL'
Timo Möller: 'timo.moeller [at] URL'
Malte Pietsch: 'malte.pietsch [at] URL'
Tanay Soni: 'URL [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# electra-base for QA",
"## Overview\nLanguage model: electra-base \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nCode: See example in FARM \nInfrastructure: 1x Tesla v100",
"## Hyperparameters",
"## Performance\nEvaluated on the SQuAD 2.0 dev set with the official eval script.",
"## Usage",
"### In Transformers",
"### In FARM",
"### In haystack\nFor doing QA at scale (i.e. many docs instead of a single paragraph), you can load the model also in haystack:",
"## Authors\nVaishali Pal 'URL [at] URL' \nBranden Chan: 'URL [at] URL' \nTimo Möller: 'timo.moeller [at] URL' \nMalte Pietsch: 'malte.pietsch [at] URL' \nTanay Soni: 'URL [at] URL'",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n\nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #safetensors #electra #question-answering #en #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# electra-base for QA",
"## Overview\nLanguage model: electra-base \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nCode: See example in FARM \nInfrastructure: 1x Tesla v100",
"## Hyperparameters",
"## Performance\nEvaluated on the SQuAD 2.0 dev set with the official eval script.",
"## Usage",
"### In Transformers",
"### In FARM",
"### In haystack\nFor doing QA at scale (i.e. many docs instead of a single paragraph), you can load the model also in haystack:",
"## Authors\nVaishali Pal 'URL [at] URL' \nBranden Chan: 'URL [at] URL' \nTimo Möller: 'timo.moeller [at] URL' \nMalte Pietsch: 'malte.pietsch [at] URL' \nTanay Soni: 'URL [at] URL'",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n\nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
null |
transformers
|

## Overview
**Language model:** gbert-base-germandpr
**Language:** German
**Training data:** GermanDPR train set (~ 56MB)
**Eval data:** GermanDPR test set (~ 6MB)
**Infrastructure**: 4x V100 GPU
**Published**: Apr 26th, 2021
## Details
- We trained a dense passage retrieval model with two gbert-base models as encoders of questions and passages.
- The dataset is GermanDPR, a new, German language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad).
- It comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set.
For each pair, there is one positive context and three hard negative contexts.
- As the basis of the training data, we used our hand-annotated GermanQuAD dataset as positive samples and generated hard negative samples from the latest German Wikipedia dump (6GB of raw txt files).
- The data dump was cleaned with tailored scripts, leading to 2.8 million indexed passages from German Wikipedia.
See https://deepset.ai/germanquad for more details and dataset download.
## Hyperparameters
```
batch_size = 40
n_epochs = 20
num_training_steps = 4640
num_warmup_steps = 460
max_seq_len = 32 tokens for question encoder and 300 tokens for passage encoder
learning_rate = 1e-6
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
num_hard_negatives = 2
```
## Performance
During training, we monitored the in-batch average rank and the loss and evaluated different batch sizes, numbers of epochs, and number of hard negatives on a dev set split from the train set.
The dev split contained 1030 question/answer pairs.
Even without thorough hyperparameter tuning, we observed quite stable learning. Multiple restarts with different seeds produced quite similar results.
Note that the in-batch average rank is influenced by settings for batch size and number of hard negatives. A smaller number of hard negatives makes the task easier.
After fixing the hyperparameters we trained the model on the full GermanDPR train set.
We further evaluated the retrieval performance of the trained model on the full German Wikipedia with the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k.

## Usage
### In haystack
You can load the model in [haystack](https://github.com/deepset-ai/haystack/) as a retriever for doing QA at scale:
```python
from haystack.nodes import DensePassageRetriever  # import path may differ across haystack versions

retriever = DensePassageRetriever(
    document_store=document_store,
    query_embedding_model="deepset/gbert-base-germandpr-question_encoder",
    passage_embedding_model="deepset/gbert-base-germandpr-ctx_encoder",
)
```
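The card only shows the haystack path; as an alternative sketch — under the assumption that the checkpoint is compatible with the DPR classes in Transformers — passages can also be embedded directly:

```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

# Assumption: this checkpoint loads with the standard DPR classes.
tokenizer = DPRContextEncoderTokenizer.from_pretrained("deepset/gbert-base-germandpr-ctx_encoder")
model = DPRContextEncoder.from_pretrained("deepset/gbert-base-germandpr-ctx_encoder")

inputs = tokenizer(
    "Der Reichstag in Berlin ist der Sitz des Deutschen Bundestages.",
    return_tensors="pt", truncation=True, max_length=300,  # 300 tokens, as in training
)
embedding = model(**inputs).pooler_output  # shape: (1, 768)
```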
## Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "de", "license": "mit", "tags": ["exbert"], "datasets": ["deepset/germandpr"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg"}
|
deepset/gbert-base-germandpr-ctx_encoder
| null |
[
"transformers",
"pytorch",
"dpr",
"exbert",
"de",
"dataset:deepset/germandpr",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #dpr #exbert #de #dataset-deepset/germandpr #license-mit #endpoints_compatible #has_space #region-us
|
!bert_image
## Overview
Language model: gbert-base-germandpr
Language: German
Training data: GermanDPR train set (~ 56MB)
Eval data: GermanDPR test set (~ 6MB)
Infrastructure: 4x V100 GPU
Published: Apr 26th, 2021
## Details
- We trained a dense passage retrieval model with two gbert-base models as encoders of questions and passages.
- The dataset is GermanDPR, a new, German language dataset, which we hand-annotated and published online.
- It comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set.
For each pair, there is one positive context and three hard negative contexts.
- As the basis of the training data, we used our hand-annotated GermanQuAD dataset as positive samples and generated hard negative samples from the latest German Wikipedia dump (6GB of raw txt files).
- The data dump was cleaned with tailored scripts, leading to 2.8 million indexed passages from German Wikipedia.
See URL for more details and dataset download.
## Hyperparameters
## Performance
During training, we monitored the in-batch average rank and the loss and evaluated different batch sizes, numbers of epochs, and number of hard negatives on a dev set split from the train set.
The dev split contained 1030 question/answer pairs.
Even without thorough hyperparameter tuning, we observed quite stable learning. Multiple restarts with different seeds produced quite similar results.
Note that the in-batch average rank is influenced by settings for batch size and number of hard negatives. A smaller number of hard negatives makes the task easier.
After fixing the hyperparameters we trained the model on the full GermanDPR train set.
We further evaluated the retrieval performance of the trained model on the full German Wikipedia with the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k.
!performancetable
## Usage
### In haystack
You can load the model in haystack as a retriever for doing QA at scale:
## Authors
- Timo Möller: 'timo.moeller [at] URL'
- Julian Risch: 'URL [at] URL'
- Malte Pietsch: 'malte.pietsch [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Website
By the way: we're hiring!
|
[
"## Overview\nLanguage model: gbert-base-germandpr \nLanguage: German \nTraining data: GermanDPR train set (~ 56MB) \nEval data: GermanDPR test set (~ 6MB) \nInfrastructure: 4x V100 GPU \nPublished: Apr 26th, 2021",
"## Details\n- We trained a dense passage retrieval model with two gbert-base models as encoders of questions and passages.\n- The dataset is GermanDPR, a new, German language dataset, which we hand-annotated and published online.\n- It comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set.\nFor each pair, there are one positive context and three hard negative contexts.\n- As the basis of the training data, we used our hand-annotated GermanQuAD dataset as positive samples and generated hard negative samples from the latest German Wikipedia dump (6GB of raw txt files).\n- The data dump was cleaned with tailored scripts, leading to 2.8 million indexed passages from German Wikipedia.\n\nSee URL for more details and dataset download.",
"## Hyperparameters",
"## Performance\nDuring training, we monitored the in-batch average rank and the loss and evaluated different batch sizes, numbers of epochs, and number of hard negatives on a dev set split from the train set.\nThe dev split contained 1030 question/answer pairs.\nEven without thorough hyperparameter tuning, we observed quite stable learning. Multiple restarts with different seeds produced quite similar results.\nNote that the in-batch average rank is influenced by settings for batch size and number of hard negatives. A smaller number of hard negatives makes the task easier.\nAfter fixing the hyperparameters we trained the model on the full GermanDPR train set.\n \nWe further evaluated the retrieval performance of the trained model on the full German Wikipedia with the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k.\n!performancetable",
"## Usage",
"### In haystack\nYou can load the model in haystack as a retriever for doing QA at scale:",
"## Authors\n- Timo Möller: 'timo.moeller [at] URL'\n- Julian Risch: 'URL [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Website \n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #dpr #exbert #de #dataset-deepset/germandpr #license-mit #endpoints_compatible #has_space #region-us \n",
"## Overview\nLanguage model: gbert-base-germandpr \nLanguage: German \nTraining data: GermanDPR train set (~ 56MB) \nEval data: GermanDPR test set (~ 6MB) \nInfrastructure: 4x V100 GPU \nPublished: Apr 26th, 2021",
"## Details\n- We trained a dense passage retrieval model with two gbert-base models as encoders of questions and passages.\n- The dataset is GermanDPR, a new, German language dataset, which we hand-annotated and published online.\n- It comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set.\nFor each pair, there are one positive context and three hard negative contexts.\n- As the basis of the training data, we used our hand-annotated GermanQuAD dataset as positive samples and generated hard negative samples from the latest German Wikipedia dump (6GB of raw txt files).\n- The data dump was cleaned with tailored scripts, leading to 2.8 million indexed passages from German Wikipedia.\n\nSee URL for more details and dataset download.",
"## Hyperparameters",
"## Performance\nDuring training, we monitored the in-batch average rank and the loss and evaluated different batch sizes, numbers of epochs, and number of hard negatives on a dev set split from the train set.\nThe dev split contained 1030 question/answer pairs.\nEven without thorough hyperparameter tuning, we observed quite stable learning. Multiple restarts with different seeds produced quite similar results.\nNote that the in-batch average rank is influenced by settings for batch size and number of hard negatives. A smaller number of hard negatives makes the task easier.\nAfter fixing the hyperparameters we trained the model on the full GermanDPR train set.\n \nWe further evaluated the retrieval performance of the trained model on the full German Wikipedia with the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k.\n!performancetable",
"## Usage",
"### In haystack\nYou can load the model in haystack as a retriever for doing QA at scale:",
"## Authors\n- Timo Möller: 'timo.moeller [at] URL'\n- Julian Risch: 'URL [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Website \n\nBy the way: we're hiring!"
] |
feature-extraction
|
transformers
|

## Overview
**Language model:** gbert-base-germandpr
**Language:** German
**Training data:** GermanDPR train set (~ 56MB)
**Eval data:** GermanDPR test set (~ 6MB)
**Infrastructure**: 4x V100 GPU
**Published**: Apr 26th, 2021
## Details
- We trained a dense passage retrieval model with two gbert-base models as encoders of questions and passages.
- The dataset is GermanDPR, a new, German language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad).
- It comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set.
For each pair, there is one positive context and three hard negative contexts.
- As the basis of the training data, we used our hand-annotated GermanQuAD dataset as positive samples and generated hard negative samples from the latest German Wikipedia dump (6GB of raw txt files).
- The data dump was cleaned with tailored scripts, leading to 2.8 million indexed passages from German Wikipedia.
See https://deepset.ai/germanquad for more details and dataset download.
## Hyperparameters
```
batch_size = 40
n_epochs = 20
num_training_steps = 4640
num_warmup_steps = 460
max_seq_len = 32 tokens for question encoder and 300 tokens for passage encoder
learning_rate = 1e-6
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
num_hard_negatives = 2
```
## Performance
During training, we monitored the in-batch average rank and the loss and evaluated different batch sizes, numbers of epochs, and number of hard negatives on a dev set split from the train set.
The dev split contained 1030 question/answer pairs.
Even without thorough hyperparameter tuning, we observed quite stable learning. Multiple restarts with different seeds produced quite similar results.
Note that the in-batch average rank is influenced by settings for batch size and number of hard negatives. A smaller number of hard negatives makes the task easier.
After fixing the hyperparameters we trained the model on the full GermanDPR train set.
We further evaluated the retrieval performance of the trained model on the full German Wikipedia with the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k.

## Usage
### In haystack
You can load the model in [haystack](https://github.com/deepset-ai/haystack/) as a retriever for doing QA at scale:
```python
from haystack.nodes import DensePassageRetriever  # import path may differ across haystack versions

retriever = DensePassageRetriever(
    document_store=document_store,
    query_embedding_model="deepset/gbert-base-germandpr-question_encoder",
    passage_embedding_model="deepset/gbert-base-germandpr-ctx_encoder",
)
```
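Analogously — a sketch under the assumption that the checkpoint is compatible with the DPR classes in Transformers — questions can be embedded directly:

```python
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

# Assumption: this checkpoint loads with the standard DPR classes.
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("deepset/gbert-base-germandpr-question_encoder")
model = DPRQuestionEncoder.from_pretrained("deepset/gbert-base-germandpr-question_encoder")

inputs = tokenizer(
    "Wo ist der Sitz des Deutschen Bundestages?",
    return_tensors="pt", truncation=True, max_length=32,  # 32 tokens, as in training
)
embedding = model(**inputs).pooler_output  # matched to passage embeddings via dot product
```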
## Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "de", "license": "mit", "tags": ["exbert"], "datasets": ["deepset/germandpr"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg"}
|
deepset/gbert-base-germandpr-question_encoder
| null |
[
"transformers",
"pytorch",
"safetensors",
"dpr",
"feature-extraction",
"exbert",
"de",
"dataset:deepset/germandpr",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #safetensors #dpr #feature-extraction #exbert #de #dataset-deepset/germandpr #license-mit #endpoints_compatible #has_space #region-us
|
!bert_image
## Overview
Language model: gbert-base-germandpr
Language: German
Training data: GermanDPR train set (~ 56MB)
Eval data: GermanDPR test set (~ 6MB)
Infrastructure: 4x V100 GPU
Published: Apr 26th, 2021
## Details
- We trained a dense passage retrieval model with two gbert-base models as encoders of questions and passages.
- The dataset is GermanDPR, a new, German language dataset, which we hand-annotated and published online.
- It comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set.
For each pair, there is one positive context and three hard negative contexts.
- As the basis of the training data, we used our hand-annotated GermanQuAD dataset as positive samples and generated hard negative samples from the latest German Wikipedia dump (6GB of raw txt files).
- The data dump was cleaned with tailored scripts, leading to 2.8 million indexed passages from German Wikipedia.
See URL for more details and dataset download.
## Hyperparameters
## Performance
During training, we monitored the in-batch average rank and the loss and evaluated different batch sizes, numbers of epochs, and number of hard negatives on a dev set split from the train set.
The dev split contained 1030 question/answer pairs.
Even without thorough hyperparameter tuning, we observed quite stable learning. Multiple restarts with different seeds produced quite similar results.
Note that the in-batch average rank is influenced by settings for batch size and number of hard negatives. A smaller number of hard negatives makes the task easier.
After fixing the hyperparameters we trained the model on the full GermanDPR train set.
We further evaluated the retrieval performance of the trained model on the full German Wikipedia with the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k.
!performancetable
## Usage
### In haystack
You can load the model in haystack as a retriever for doing QA at scale:
## Authors
- Timo Möller: 'timo.moeller [at] URL'
- Julian Risch: 'URL [at] URL'
- Malte Pietsch: 'malte.pietsch [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Website
By the way: we're hiring!
|
[
"## Overview\nLanguage model: gbert-base-germandpr \nLanguage: German \nTraining data: GermanDPR train set (~ 56MB) \nEval data: GermanDPR test set (~ 6MB) \nInfrastructure: 4x V100 GPU \nPublished: Apr 26th, 2021",
"## Details\n- We trained a dense passage retrieval model with two gbert-base models as encoders of questions and passages.\n- The dataset is GermanDPR, a new, German language dataset, which we hand-annotated and published online.\n- It comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set.\nFor each pair, there are one positive context and three hard negative contexts.\n- As the basis of the training data, we used our hand-annotated GermanQuAD dataset as positive samples and generated hard negative samples from the latest German Wikipedia dump (6GB of raw txt files).\n- The data dump was cleaned with tailored scripts, leading to 2.8 million indexed passages from German Wikipedia.\n\nSee URL for more details and dataset download.",
"## Hyperparameters",
"## Performance\nDuring training, we monitored the in-batch average rank and the loss and evaluated different batch sizes, numbers of epochs, and number of hard negatives on a dev set split from the train set.\nThe dev split contained 1030 question/answer pairs.\nEven without thorough hyperparameter tuning, we observed quite stable learning. Multiple restarts with different seeds produced quite similar results.\nNote that the in-batch average rank is influenced by settings for batch size and number of hard negatives. A smaller number of hard negatives makes the task easier.\nAfter fixing the hyperparameters we trained the model on the full GermanDPR train set.\n \nWe further evaluated the retrieval performance of the trained model on the full German Wikipedia with the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k.\n!performancetable",
"## Usage",
"### In haystack\nYou can load the model in haystack as a retriever for doing QA at scale:",
"## Authors\n- Timo Möller: 'timo.moeller [at] URL'\n- Julian Risch: 'URL [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Website \n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #safetensors #dpr #feature-extraction #exbert #de #dataset-deepset/germandpr #license-mit #endpoints_compatible #has_space #region-us \n",
"## Overview\nLanguage model: gbert-base-germandpr \nLanguage: German \nTraining data: GermanDPR train set (~ 56MB) \nEval data: GermanDPR test set (~ 6MB) \nInfrastructure: 4x V100 GPU \nPublished: Apr 26th, 2021",
"## Details\n- We trained a dense passage retrieval model with two gbert-base models as encoders of questions and passages.\n- The dataset is GermanDPR, a new, German language dataset, which we hand-annotated and published online.\n- It comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set.\nFor each pair, there are one positive context and three hard negative contexts.\n- As the basis of the training data, we used our hand-annotated GermanQuAD dataset as positive samples and generated hard negative samples from the latest German Wikipedia dump (6GB of raw txt files).\n- The data dump was cleaned with tailored scripts, leading to 2.8 million indexed passages from German Wikipedia.\n\nSee URL for more details and dataset download.",
"## Hyperparameters",
"## Performance\nDuring training, we monitored the in-batch average rank and the loss and evaluated different batch sizes, numbers of epochs, and number of hard negatives on a dev set split from the train set.\nThe dev split contained 1030 question/answer pairs.\nEven without thorough hyperparameter tuning, we observed quite stable learning. Multiple restarts with different seeds produced quite similar results.\nNote that the in-batch average rank is influenced by settings for batch size and number of hard negatives. A smaller number of hard negatives makes the task easier.\nAfter fixing the hyperparameters we trained the model on the full GermanDPR train set.\n \nWe further evaluated the retrieval performance of the trained model on the full German Wikipedia with the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k.\n!performancetable",
"## Usage",
"### In haystack\nYou can load the model in haystack as a retriever for doing QA at scale:",
"## Authors\n- Timo Möller: 'timo.moeller [at] URL'\n- Julian Risch: 'URL [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Website \n\nBy the way: we're hiring!"
] |
text-classification
|
transformers
|
## Overview
**Language model:** gbert-base-germandpr-reranking
**Language:** German
**Training data:** GermanDPR train set (~ 56MB)
**Eval data:** GermanDPR test set (~ 6MB)
**Infrastructure**: 1x V100 GPU
**Published**: June 3rd, 2021
## Details
- We trained a text pair classification model in FARM that can be used for reranking in document retrieval tasks. To this end, the classifier calculates the similarity between the query and each of the top k retrieved documents (e.g., k=10). The top k documents are then sorted by their similarity scores, so that the document most similar to the query is ranked first.
## Hyperparameters
```
batch_size = 16
n_epochs = 2
max_seq_len = 512 tokens for question and passage concatenated
learning_rate = 2e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
```
## Performance
We use the GermanDPR test dataset as ground-truth labels and run two experiments to compare how a BM25 retriever performs with and without reranking by our model. The first experiment runs retrieval on the full German Wikipedia (more than 2 million passages) and the second runs retrieval on the GermanDPR dataset only (no more than 5000 passages). Both experiments use 1025 queries. Note that the second experiment evaluates a much simpler task because of the smaller dataset size, which explains the strong BM25 retrieval performance.
### Full German Wikipedia (more than 2 million passages):
BM25 Retriever without Reranking
- recall@3: 0.4088 (419 / 1025)
- mean_reciprocal_rank@3: 0.3322
BM25 Retriever with Reranking Top 10 Documents
- recall@3: 0.5200 (533 / 1025)
- mean_reciprocal_rank@3: 0.4800
### GermanDPR Test Dataset only (not more than 5000 passages):
BM25 Retriever without Reranking
- recall@3: 0.9102 (933 / 1025)
- mean_reciprocal_rank@3: 0.8528
BM25 Retriever with Reranking Top 10 Documents
- recall@3: 0.9298 (953 / 1025)
- mean_reciprocal_rank@3: 0.8813
## Usage
### In haystack
You can load the model in [haystack](https://github.com/deepset-ai/haystack/) for reranking the documents returned by a Retriever:
```python
...
retriever = ElasticsearchRetriever(document_store=document_store)
ranker = FARMRanker(model_name_or_path="deepset/gbert-base-germandpr-reranking")
...
p = Pipeline()
p.add_node(component=retriever, name="ESRetriever", inputs=["Query"])
p.add_node(component=ranker, name="Ranker", inputs=["ESRetriever"])
```
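To score and sort query/passage pairs directly in Transformers instead, a minimal sketch follows; the card does not specify the label order, so treating the last logit as the "similar" class is an assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-base-germandpr-reranking")
model = AutoModelForSequenceClassification.from_pretrained("deepset/gbert-base-germandpr-reranking")

def rerank(query, passages):
    scores = []
    for passage in passages:
        inputs = tokenizer(query, passage, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            logits = model(**inputs).logits
        scores.append(logits.softmax(dim=-1)[0, -1].item())  # assumption: last label = "similar"
    # Sort the retrieved passages by similarity score, best first
    return sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
```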
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "de", "license": "mit", "datasets": ["deepset/germandpr"]}
|
deepset/gbert-base-germandpr-reranking
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"de",
"dataset:deepset/germandpr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #safetensors #bert #text-classification #de #dataset-deepset/germandpr #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
## Overview
Language model: gbert-base-germandpr-reranking
Language: German
Training data: GermanDPR train set (~ 56MB)
Eval data: GermanDPR test set (~ 6MB)
Infrastructure: 1x V100 GPU
Published: June 3rd, 2021
## Details
- We trained a text pair classification model in FARM that can be used for reranking in document retrieval tasks. To this end, the classifier calculates the similarity between the query and each of the top k retrieved documents (e.g., k=10). The top k documents are then sorted by their similarity scores, so that the document most similar to the query is ranked first.
## Hyperparameters
## Performance
We use the GermanDPR test dataset as ground-truth labels and run two experiments to compare how a BM25 retriever performs with and without reranking by our model. The first experiment runs retrieval on the full German Wikipedia (more than 2 million passages) and the second runs retrieval on the GermanDPR dataset only (no more than 5000 passages). Both experiments use 1025 queries. Note that the second experiment evaluates a much simpler task because of the smaller dataset size, which explains the strong BM25 retrieval performance.
### Full German Wikipedia (more than 2 million passages):
BM25 Retriever without Reranking
- recall@3: 0.4088 (419 / 1025)
- mean_reciprocal_rank@3: 0.3322
BM25 Retriever with Reranking Top 10 Documents
- recall@3: 0.5200 (533 / 1025)
- mean_reciprocal_rank@3: 0.4800
### GermanDPR Test Dataset only (not more than 5000 passages):
BM25 Retriever without Reranking
- recall@3: 0.9102 (933 / 1025)
- mean_reciprocal_rank@3: 0.8528
BM25 Retriever with Reranking Top 10 Documents
- recall@3: 0.9298 (953 / 1025)
- mean_reciprocal_rank@3: 0.8813
## Usage
### In haystack
You can load the model in haystack for reranking the documents returned by a Retriever:
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Website
By the way: we're hiring!
|
[
"## Overview\nLanguage model: gbert-base-germandpr-reranking \nLanguage: German \nTraining data: GermanDPR train set (~ 56MB) \nEval data: GermanDPR test set (~ 6MB) \nInfrastructure: 1x V100 GPU \nPublished: June 3rd, 2021",
"## Details\n- We trained a text pair classification model in FARM, which can be used for reranking in document retrieval tasks. To this end, the classifier calculates the similarity of the query and each retrieved top k document (e.g., k=10). The top k documents are then sorted by their similarity scores. The document most similar to the query is the best.",
"## Hyperparameters",
"## Performance\nWe use the GermanDPR test dataset as ground truth labels and run two experiments to compare how a BM25 retriever performs with or without reranking with our model. The first experiment runs retrieval on the full German Wikipedia (more than 2 million passages) and second experiment runs retrieval on the GermanDPR dataset only (not more than 5000 passages). Both experiments use 1025 queries. Note that the second experiment is evaluating on a much simpler task because of the smaller dataset size, which explains strong BM25 retrieval performance.",
"### Full German Wikipedia (more than 2 million passages):\nBM25 Retriever without Reranking\n- recall@3: 0.4088 (419 / 1025)\n- mean_reciprocal_rank@3: 0.3322\n\nBM25 Retriever with Reranking Top 10 Documents\n- recall@3: 0.5200 (533 / 1025)\n- mean_reciprocal_rank@3: 0.4800",
"### GermanDPR Test Dataset only (not more than 5000 passages):\nBM25 Retriever without Reranking\n- recall@3: 0.9102 (933 / 1025)\n- mean_reciprocal_rank@3: 0.8528\n\nBM25 Retriever with Reranking Top 10 Documents\n- recall@3: 0.9298 (953 / 1025)\n- mean_reciprocal_rank@3: 0.8813",
"## Usage",
"### In haystack\nYou can load the model in haystack for reranking the documents returned by a Retriever:",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Website \n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #text-classification #de #dataset-deepset/germandpr #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"## Overview\nLanguage model: gbert-base-germandpr-reranking \nLanguage: German \nTraining data: GermanDPR train set (~ 56MB) \nEval data: GermanDPR test set (~ 6MB) \nInfrastructure: 1x V100 GPU \nPublished: June 3rd, 2021",
"## Details\n- We trained a text pair classification model in FARM, which can be used for reranking in document retrieval tasks. To this end, the classifier calculates the similarity of the query and each retrieved top k document (e.g., k=10). The top k documents are then sorted by their similarity scores. The document most similar to the query is the best.",
"## Hyperparameters",
"## Performance\nWe use the GermanDPR test dataset as ground truth labels and run two experiments to compare how a BM25 retriever performs with or without reranking with our model. The first experiment runs retrieval on the full German Wikipedia (more than 2 million passages) and second experiment runs retrieval on the GermanDPR dataset only (not more than 5000 passages). Both experiments use 1025 queries. Note that the second experiment is evaluating on a much simpler task because of the smaller dataset size, which explains strong BM25 retrieval performance.",
"### Full German Wikipedia (more than 2 million passages):\nBM25 Retriever without Reranking\n- recall@3: 0.4088 (419 / 1025)\n- mean_reciprocal_rank@3: 0.3322\n\nBM25 Retriever with Reranking Top 10 Documents\n- recall@3: 0.5200 (533 / 1025)\n- mean_reciprocal_rank@3: 0.4800",
"### GermanDPR Test Dataset only (not more than 5000 passages):\nBM25 Retriever without Reranking\n- recall@3: 0.9102 (933 / 1025)\n- mean_reciprocal_rank@3: 0.8528\n\nBM25 Retriever with Reranking Top 10 Documents\n- recall@3: 0.9298 (953 / 1025)\n- mean_reciprocal_rank@3: 0.8813",
"## Usage",
"### In haystack\nYou can load the model in haystack for reranking the documents returned by a Retriever:",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Website \n\nBy the way: we're hiring!"
] |
fill-mask
|
transformers
|
# German BERT base
Released in Oct 2020, this is a German BERT language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model and show that it outperforms its predecessors.
## Overview
**Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf)
**Architecture:** BERT base
**Language:** German
## Performance
```
GermEval18 Coarse: 78.17
GermEval18 Fine: 50.90
GermEval14: 87.98
```
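The card contains no usage snippet; a minimal fill-mask sketch (the example sentence is ours):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="deepset/gbert-base")
print(fill_mask("Die Hauptstadt von Deutschland ist [MASK]."))
```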
See also:
deepset/gbert-base
deepset/gbert-large
deepset/gelectra-base
deepset/gelectra-large
deepset/gelectra-base-generator
deepset/gelectra-large-generator
## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Stefan Schweter: `stefan [at] schweter.eu`
Timo Möller: `timo.moeller [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "de", "license": "mit", "datasets": ["wikipedia", "OPUS", "OpenLegalData"]}
|
deepset/gbert-base
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"fill-mask",
"de",
"dataset:wikipedia",
"dataset:OPUS",
"dataset:OpenLegalData",
"arxiv:2010.10906",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.10906"
] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #safetensors #fill-mask #de #dataset-wikipedia #dataset-OPUS #dataset-OpenLegalData #arxiv-2010.10906 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# German BERT base
Released in Oct 2020, this is a German BERT language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model and show that it outperforms its predecessors.
## Overview
Paper: here
Architecture: BERT base
Language: German
## Performance
See also:
deepset/gbert-base
deepset/gbert-large
deepset/gelectra-base
deepset/gelectra-large
deepset/gelectra-base-generator
deepset/gelectra-large-generator
## Authors
Branden Chan: 'URL [at] URL'
Stefan Schweter: 'stefan [at] URL'
Timo Möller: 'timo.moeller [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Slack | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# German BERT base\n\nReleased, Oct 2020, this is a German BERT language model trained collaboratively by the makers of the original German BERT (aka \"bert-base-german-cased\") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model and show that it outperforms its predecessors.",
"## Overview \nPaper: here \nArchitecture: BERT base \nLanguage: German",
"## Performance \n\n\nSee also: \ndeepset/gbert-base\ndeepset/gbert-large\ndeepset/gelectra-base\ndeepset/gelectra-large\ndeepset/gelectra-base-generator\ndeepset/gelectra-large-generator",
"## Authors\nBranden Chan: 'URL [at] URL'\nStefan Schweter: 'stefan [at] URL'\nTimo Möller: 'timo.moeller [at] URL'",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Slack | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #fill-mask #de #dataset-wikipedia #dataset-OPUS #dataset-OpenLegalData #arxiv-2010.10906 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# German BERT base\n\nReleased, Oct 2020, this is a German BERT language model trained collaboratively by the makers of the original German BERT (aka \"bert-base-german-cased\") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model and show that it outperforms its predecessors.",
"## Overview \nPaper: here \nArchitecture: BERT base \nLanguage: German",
"## Performance \n\n\nSee also: \ndeepset/gbert-base\ndeepset/gbert-large\ndeepset/gelectra-base\ndeepset/gelectra-large\ndeepset/gelectra-base-generator\ndeepset/gelectra-large-generator",
"## Authors\nBranden Chan: 'URL [at] URL'\nStefan Schweter: 'stefan [at] URL'\nTimo Möller: 'timo.moeller [at] URL'",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Slack | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
text-classification
|
transformers
|
## Overview
**Language model:** gbert-large-sts
**Language:** German
**Training data:** German STS benchmark train and dev set
**Eval data:** German STS benchmark test set
**Infrastructure**: 1x V100 GPU
**Published**: August 12th, 2021
## Details
- We trained a gbert-large model on the task of estimating semantic similarity of German-language text pairs. The dataset is a machine-translated version of the [STS benchmark](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark), which is available [here](https://github.com/t-systems-on-site-services-gmbh/german-STSbenchmark).
## Hyperparameters
```
batch_size = 16
n_epochs = 4
warmup_ratio = 0.1
learning_rate = 2e-5
lr_schedule = LinearWarmup
```
## Performance
Stay tuned... and watch out for new papers on arxiv.org ;)
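Usage is not documented in the card; assuming the checkpoint loads as a standard sequence classification head with a single regression output (the usual STS setup), scoring a sentence pair could look like this:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-large-sts")
model = AutoModelForSequenceClassification.from_pretrained("deepset/gbert-large-sts")

inputs = tokenizer("Der Hund spielt im Garten.", "Ein Hund tobt draußen.", return_tensors="pt")
with torch.no_grad():
    similarity = model(**inputs).logits  # assumption: one regression logit = STS score
print(similarity)
```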
## Authors
- Julian Risch: `julian.risch [at] deepset.ai`
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Gutsch: `julian.gutsch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "de", "license": "mit", "tags": ["exbert"]}
|
deepset/gbert-large-sts
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"exbert",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #safetensors #bert #text-classification #exbert #de #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
## Overview
Language model: gbert-large-sts
Language: German
Training data: German STS benchmark train and dev set
Eval data: German STS benchmark test set
Infrastructure: 1x V100 GPU
Published: August 12th, 2021
## Details
- We trained a gbert-large model on the task of estimating semantic similarity of German-language text pairs. The dataset is a machine-translated version of the STS benchmark, which is available here.
## Hyperparameters
## Performance
Stay tuned... and watch out for new papers on URL ;)
## Authors
- Julian Risch: 'URL [at] URL'
- Timo Möller: 'timo.moeller [at] URL'
- Julian Gutsch: 'URL [at] URL'
- Malte Pietsch: 'malte.pietsch [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Website
By the way: we're hiring!
|
[
"## Overview\nLanguage model: gbert-large-sts\n\nLanguage: German \nTraining data: German STS benchmark train and dev set \nEval data: German STS benchmark test set \nInfrastructure: 1x V100 GPU \nPublished: August 12th, 2021",
"## Details\n- We trained a gbert-large model on the task of estimating semantic similarity of German-language text pairs. The dataset is a machine-translated version of the STS benchmark, which is available here.",
"## Hyperparameters",
"## Performance\nStay tuned... and watch out for new papers on URL ;)",
"## Authors\n- Julian Risch: 'URL [at] URL'\n- Timo Möller: 'timo.moeller [at] URL'\n- Julian Gutsch: 'URL [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Website \n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #text-classification #exbert #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"## Overview\nLanguage model: gbert-large-sts\n\nLanguage: German \nTraining data: German STS benchmark train and dev set \nEval data: German STS benchmark test set \nInfrastructure: 1x V100 GPU \nPublished: August 12th, 2021",
"## Details\n- We trained a gbert-large model on the task of estimating semantic similarity of German-language text pairs. The dataset is a machine-translated version of the STS benchmark, which is available here.",
"## Hyperparameters",
"## Performance\nStay tuned... and watch out for new papers on URL ;)",
"## Authors\n- Julian Risch: 'URL [at] URL'\n- Timo Möller: 'timo.moeller [at] URL'\n- Julian Gutsch: 'URL [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Website \n\nBy the way: we're hiring!"
] |
fill-mask
|
transformers
|
# German BERT large
Released in Oct 2020, this is a German BERT language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model and show that it outperforms its predecessors.
## Overview
**Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf)
**Architecture:** BERT large
**Language:** German
## Performance
```
GermEval18 Coarse: 80.08
GermEval18 Fine: 52.48
GermEval14: 88.16
```
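As with the base model, the card shows no usage code; a minimal fill-mask sketch (example sentence ours):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="deepset/gbert-large")
print(fill_mask("Der Zug fährt von München nach [MASK]."))
```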
See also:
deepset/gbert-base
deepset/gbert-large
deepset/gelectra-base
deepset/gelectra-large
deepset/gelectra-base-generator
deepset/gelectra-large-generator
## Authors
**Branden Chan:** branden.chan@deepset.ai
**Stefan Schweter:** stefan@schweter.eu
**Timo Möller:** timo.moeller@deepset.ai
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "de", "license": "mit", "datasets": ["wikipedia", "OPUS", "OpenLegalData", "oscar"]}
|
deepset/gbert-large
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"fill-mask",
"de",
"dataset:wikipedia",
"dataset:OPUS",
"dataset:OpenLegalData",
"dataset:oscar",
"arxiv:2010.10906",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.10906"
] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #safetensors #fill-mask #de #dataset-wikipedia #dataset-OPUS #dataset-OpenLegalData #dataset-oscar #arxiv-2010.10906 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# German BERT large
Released in Oct 2020, this is a German BERT language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model and show that it outperforms its predecessors.
## Overview
Paper: here
Architecture: BERT large
Language: German
## Performance
See also:
deepset/gbert-base
deepset/gbert-large
deepset/gelectra-base
deepset/gelectra-large
deepset/gelectra-base-generator
deepset/gelectra-large-generator
## Authors
Branden Chan: URL@URL
Stefan Schweter: stefan@URL
Timo Möller: timo.moeller@URL
## About us
!deepset logo
!haystack logo
deepset is the company behind the open-source NLP framework Haystack, which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, etc.
Some of our other work:
- Distilled roberta-base-squad2 (aka "tinyroberta-squad2")
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
## Get in touch and join the Haystack community
For more info on Haystack, visit our GitHub repo and Documentation.
We also have a Discord community open to everyone!
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# German BERT large\n\nReleased, Oct 2020, this is a German BERT language model trained collaboratively by the makers of the original German BERT (aka \"bert-base-german-cased\") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model and show that it outperforms its predecessors.",
"## Overview \nPaper: here \nArchitecture: BERT large \nLanguage: German",
"## Performance \n\n\nSee also: \ndeepset/gbert-base \ndeepset/gbert-large \ndeepset/gelectra-base \ndeepset/gelectra-large \ndeepset/gelectra-base-generator \ndeepset/gelectra-large-generator",
"## Authors\nBranden Chan: URL@URL \nStefan Schweter: stefan@URL \nTimo Möller: timo.moeller@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #fill-mask #de #dataset-wikipedia #dataset-OPUS #dataset-OpenLegalData #dataset-oscar #arxiv-2010.10906 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# German BERT large\n\nReleased, Oct 2020, this is a German BERT language model trained collaboratively by the makers of the original German BERT (aka \"bert-base-german-cased\") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model and show that it outperforms its predecessors.",
"## Overview \nPaper: here \nArchitecture: BERT large \nLanguage: German",
"## Performance \n\n\nSee also: \ndeepset/gbert-base \ndeepset/gbert-large \ndeepset/gelectra-base \ndeepset/gelectra-large \ndeepset/gelectra-base-generator \ndeepset/gelectra-large-generator",
"## Authors\nBranden Chan: URL@URL \nStefan Schweter: stefan@URL \nTimo Möller: timo.moeller@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
fill-mask
|
transformers
|
# German ELECTRA base generator
Released Oct 2020, this is the generator component of the German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model.
The generator is useful for performing masking experiments. If you are looking for a regular language model for embedding extraction, or downstream tasks like NER, classification or QA, please use deepset/gelectra-base.
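A quick masking experiment can be run with the `fill-mask` pipeline. A minimal sketch, assuming a recent `transformers` version; the example sentence and `top_k` value are ours, not from this card:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="deepset/gelectra-base-generator")

# The tokenizer uses the BERT-style [MASK] token.
for prediction in fill_mask("Die Hauptstadt von Deutschland ist [MASK].", top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```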
## Overview
**Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf)
**Architecture:** ELECTRA base (generator)
**Language:** German
See also:
deepset/gbert-base
deepset/gbert-large
deepset/gelectra-base
deepset/gelectra-large
deepset/gelectra-base-generator
deepset/gelectra-large-generator
## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Stefan Schweter: `stefan [at] schweter.eu`
Timo Möller: `timo.moeller [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "de", "license": "mit", "datasets": ["wikipedia", "OPUS", "OpenLegalData"]}
|
deepset/gelectra-base-generator
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"electra",
"fill-mask",
"de",
"dataset:wikipedia",
"dataset:OPUS",
"dataset:OpenLegalData",
"arxiv:2010.10906",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.10906"
] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #safetensors #electra #fill-mask #de #dataset-wikipedia #dataset-OPUS #dataset-OpenLegalData #arxiv-2010.10906 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# German ELECTRA base generator
Released Oct 2020, this is the generator component of the German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model.
The generator is useful for performing masking experiments. If you are looking for a regular language model for embedding extraction, or downstream tasks like NER, classification or QA, please use deepset/gelectra-base.
## Overview
Paper: here
Architecture: ELECTRA base (generator)
Language: German
See also:
deepset/gbert-base
deepset/gbert-large
deepset/gelectra-base
deepset/gelectra-large
deepset/gelectra-base-generator
deepset/gelectra-large-generator
## Authors
Branden Chan: 'URL [at] URL'
Stefan Schweter: 'stefan [at] URL'
Timo Möller: 'timo.moeller [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Slack | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# German ELECTRA base generator\n\nReleased, Oct 2020, this is the generator component of the German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka \"bert-base-german-cased\") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model.\n\nThe generator is useful for performing masking experiments. If you are looking for a regular language model for embedding extraction, or downstream tasks like NER, classification or QA, please use deepset/gelectra-base.",
"## Overview \nPaper: here \nArchitecture: ELECTRA base (generator)\nLanguage: German \n\nSee also: \ndeepset/gbert-base\ndeepset/gbert-large\ndeepset/gelectra-base\ndeepset/gelectra-large\ndeepset/gelectra-base-generator\ndeepset/gelectra-large-generator",
"## Authors\nBranden Chan: 'URL [at] URL'\nStefan Schweter: 'stefan [at] URL'\nTimo Möller: 'timo.moeller [at] URL'",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Slack | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #electra #fill-mask #de #dataset-wikipedia #dataset-OPUS #dataset-OpenLegalData #arxiv-2010.10906 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# German ELECTRA base generator\n\nReleased, Oct 2020, this is the generator component of the German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka \"bert-base-german-cased\") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model.\n\nThe generator is useful for performing masking experiments. If you are looking for a regular language model for embedding extraction, or downstream tasks like NER, classification or QA, please use deepset/gelectra-base.",
"## Overview \nPaper: here \nArchitecture: ELECTRA base (generator)\nLanguage: German \n\nSee also: \ndeepset/gbert-base\ndeepset/gbert-large\ndeepset/gelectra-base\ndeepset/gelectra-large\ndeepset/gelectra-base-generator\ndeepset/gelectra-large-generator",
"## Authors\nBranden Chan: 'URL [at] URL'\nStefan Schweter: 'stefan [at] URL'\nTimo Möller: 'timo.moeller [at] URL'",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Slack | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|

## Overview
**Language model:** gelectra-base-germanquad-distilled
**Language:** German
**Training data:** GermanQuAD train set (~ 12MB)
**Eval data:** GermanQuAD test set (~ 5MB)
**Infrastructure**: 1x V100 GPU
**Published**: Apr 21st, 2021
## Details
- We trained a German question answering model with a gelectra-base model as its basis.
- The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad).
- The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and 2204·3−76 = 6536 answers, because we removed 76 wrong answers.
- In addition to the annotations in GermanQuAD, Haystack's distillation feature was used for training. deepset/gelectra-large-germanquad was used as the teacher model.
See https://deepset.ai/germanquad for more details and dataset download in SQuAD format.
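The model can be tried out directly with the `question-answering` pipeline. A minimal sketch; the question/context pair below is our own illustration, not taken from GermanQuAD:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/gelectra-base-germanquad-distilled")

result = qa(
    question="Wo ist der Hauptsitz von deepset?",
    context="deepset ist ein NLP-Unternehmen mit Hauptsitz in Berlin.",
)
print(result["answer"], result["score"])
```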
## Hyperparameters
```
batch_size = 24
n_epochs = 6
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 2
distillation_loss_weight = 0.75
```
## Performance
We evaluated the extractive question answering performance on our GermanQuAD test set.
Model types and training data are included in the model name.
For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset.
The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on [GermanQuAD](https://deepset.ai/germanquad).
The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth.
```
"exact": 62.4773139745916
"f1": 80.9488017070188
```

## Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Michel Bartels: `michel.bartels [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "de", "license": "mit", "tags": ["exbert"], "datasets": ["deepset/germanquad"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg"}
|
deepset/gelectra-base-germanquad-distilled
| null |
[
"transformers",
"pytorch",
"safetensors",
"electra",
"question-answering",
"exbert",
"de",
"dataset:deepset/germanquad",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #safetensors #electra #question-answering #exbert #de #dataset-deepset/germanquad #license-mit #endpoints_compatible #region-us
|
!bert_image
## Overview
Language model: gelectra-base-germanquad-distilled
Language: German
Training data: GermanQuAD train set (~ 12MB)
Eval data: GermanQuAD test set (~ 5MB)
Infrastructure: 1x V100 GPU
Published: Apr 21st, 2021
## Details
- We trained a German question answering model with a gelectra-base model as its basis.
- The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published online.
- The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and 2204·3−76 = 6536 answers, because we removed 76 wrong answers.
- In addition to the annotations in GermanQuAD, Haystack's distillation feature was used for training. deepset/gelectra-large-germanquad was used as the teacher model.
See URL for more details and dataset download in SQuAD format.
## Hyperparameters
## Performance
We evaluated the extractive question answering performance on our GermanQuAD test set.
Model types and training data are included in the model name.
For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset.
The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on GermanQuAD.
The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth.
!performancetable
## Authors
- Timo Möller: 'timo.moeller [at] URL'
- Julian Risch: 'URL [at] URL'
- Malte Pietsch: 'malte.pietsch [at] URL'
- Michel Bartels: 'michel.bartels [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Slack | GitHub Discussions | Website
By the way: we're hiring!
|
[
"## Overview\nLanguage model: gelectra-base-germanquad-distilled \nLanguage: German \nTraining data: GermanQuAD train set (~ 12MB) \nEval data: GermanQuAD test set (~ 5MB) \nInfrastructure: 1x V100 GPU \nPublished: Apr 21st, 2021",
"## Details\n- We trained a German question answering model with a gelectra-base model as its basis.\n- The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published online.\n- The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and with 2204·3−76 = 6536answers, because we removed 76 wrong answers.\n- In addition to the annotations in GermanQuAD, haystack's distillation feature was used for training. deepset/gelectra-large-germanquad was used as the teacher model.\n\nSee URL for more details and dataset download in SQuAD format.",
"## Hyperparameters",
"## Performance\nWe evaluated the extractive question answering performance on our GermanQuAD test set.\nModel types and training data are included in the model name. \nFor finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset.\nThe GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on \\\\\\\\germanquad. \nThe human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth.\n\n!performancetable",
"## Authors\n- Timo Möller: 'timo.moeller [at] URL'\n- Julian Risch: 'URL [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'\n- Michel Bartels: 'michel.bartels [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Slack | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #safetensors #electra #question-answering #exbert #de #dataset-deepset/germanquad #license-mit #endpoints_compatible #region-us \n",
"## Overview\nLanguage model: gelectra-base-germanquad-distilled \nLanguage: German \nTraining data: GermanQuAD train set (~ 12MB) \nEval data: GermanQuAD test set (~ 5MB) \nInfrastructure: 1x V100 GPU \nPublished: Apr 21st, 2021",
"## Details\n- We trained a German question answering model with a gelectra-base model as its basis.\n- The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published online.\n- The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and with 2204·3−76 = 6536answers, because we removed 76 wrong answers.\n- In addition to the annotations in GermanQuAD, haystack's distillation feature was used for training. deepset/gelectra-large-germanquad was used as the teacher model.\n\nSee URL for more details and dataset download in SQuAD format.",
"## Hyperparameters",
"## Performance\nWe evaluated the extractive question answering performance on our GermanQuAD test set.\nModel types and training data are included in the model name. \nFor finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset.\nThe GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on \\\\\\\\germanquad. \nThe human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth.\n\n!performancetable",
"## Authors\n- Timo Möller: 'timo.moeller [at] URL'\n- Julian Risch: 'URL [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'\n- Michel Bartels: 'michel.bartels [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Slack | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|

## Overview
**Language model:** gelectra-base-germanquad
**Language:** German
**Training data:** GermanQuAD train set (~ 12MB)
**Eval data:** GermanQuAD test set (~ 5MB)
**Infrastructure**: 1x V100 GPU
**Published**: Apr 21st, 2021
## Details
- We trained a German question answering model with a gelectra-base model as its basis.
- The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad).
- The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and 2204·3−76 = 6536 answers, because we removed 76 wrong answers.
See https://deepset.ai/germanquad for more details and dataset download in SQuAD format.
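The model also plugs into Haystack as a reader. A hedged sketch against the Haystack 1.x API (class and argument names are from that API, not from this card):
```python
from haystack.nodes import FARMReader
from haystack.schema import Document

reader = FARMReader(model_name_or_path="deepset/gelectra-base-germanquad", use_gpu=False)

docs = [Document(content="deepset ist ein NLP-Unternehmen mit Hauptsitz in Berlin.")]
result = reader.predict(query="Wo ist der Hauptsitz von deepset?", documents=docs, top_k=1)
print(result["answers"][0].answer)
```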
## Hyperparameters
```
batch_size = 24
n_epochs = 2
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
```
## Performance
We evaluated the extractive question answering performance on our GermanQuAD test set.
Model types and training data are included in the model name.
For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset.
The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on [GermanQuAD](https://deepset.ai/germanquad).
The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth.

## Authors
**Timo Möller:** timo.moeller@deepset.ai
**Julian Risch:** julian.risch@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "de", "license": "mit", "tags": ["exbert"], "datasets": ["deepset/germanquad"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg"}
|
deepset/gelectra-base-germanquad
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"electra",
"question-answering",
"exbert",
"de",
"dataset:deepset/germanquad",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #safetensors #electra #question-answering #exbert #de #dataset-deepset/germanquad #license-mit #endpoints_compatible #has_space #region-us
|
!bert_image
## Overview
Language model: gelectra-base-germanquad
Language: German
Training data: GermanQuAD train set (~ 12MB)
Eval data: GermanQuAD test set (~ 5MB)
Infrastructure: 1x V100 GPU
Published: Apr 21st, 2021
## Details
- We trained a German question answering model with a gelectra-base model as its basis.
- The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published online.
- The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and 2204·3−76 = 6536 answers, because we removed 76 wrong answers.
See URL for more details and dataset download in SQuAD format.
## Hyperparameters
## Performance
We evaluated the extractive question answering performance on our GermanQuAD test set.
Model types and training data are included in the model name.
For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset.
The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on GermanQuAD.
The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth.
!performancetable
## Authors
Timo Möller: timo.moeller@URL
Julian Risch: URL@URL
Malte Pietsch: malte.pietsch@URL
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
</div>
deepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- Distilled roberta-base-squad2 (aka "tinyroberta-squad2")
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="URL repo and <strong><a href="URL">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="URL community open to everyone!</a></strong></p>
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"## Overview\nLanguage model: gelectra-base-germanquad \nLanguage: German \nTraining data: GermanQuAD train set (~ 12MB) \nEval data: GermanQuAD test set (~ 5MB) \nInfrastructure: 1x V100 GPU \nPublished: Apr 21st, 2021",
"## Details\n- We trained a German question answering model with a gelectra-base model as its basis.\n- The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published online.\n- The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and with 2204·3−76 = 6536answers, because we removed 76 wrong answers.\n\nSee URL for more details and dataset download in SQuAD format.",
"## Hyperparameters",
"## Performance\nWe evaluated the extractive question answering performance on our GermanQuAD test set.\nModel types and training data are included in the model name. \nFor finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset.\nThe GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on GermanQuAD.\nThe human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth. \n!performancetable",
"## Authors\nTimo Möller: timo.moeller@URL \nJulian Risch: URL@URL \nMalte Pietsch: malte.pietsch@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #electra #question-answering #exbert #de #dataset-deepset/germanquad #license-mit #endpoints_compatible #has_space #region-us \n",
"## Overview\nLanguage model: gelectra-base-germanquad \nLanguage: German \nTraining data: GermanQuAD train set (~ 12MB) \nEval data: GermanQuAD test set (~ 5MB) \nInfrastructure: 1x V100 GPU \nPublished: Apr 21st, 2021",
"## Details\n- We trained a German question answering model with a gelectra-base model as its basis.\n- The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published online.\n- The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and with 2204·3−76 = 6536answers, because we removed 76 wrong answers.\n\nSee URL for more details and dataset download in SQuAD format.",
"## Hyperparameters",
"## Performance\nWe evaluated the extractive question answering performance on our GermanQuAD test set.\nModel types and training data are included in the model name. \nFor finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset.\nThe GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on GermanQuAD.\nThe human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth. \n!performancetable",
"## Authors\nTimo Möller: timo.moeller@URL \nJulian Risch: URL@URL \nMalte Pietsch: malte.pietsch@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
null |
transformers
|
# German ELECTRA base
Released Oct 2020, this is a German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model. Our evaluation suggests that this model is somewhat undertrained. For best performance from a base-sized model, we recommend deepset/gbert-base.
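Being a discriminator checkpoint, the model is typically loaded with `AutoModel` for embedding extraction or as a backbone for fine-tuning. A minimal sketch; the sentence and the mean-pooling choice are ours:
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepset/gelectra-base")
model = AutoModel.from_pretrained("deepset/gelectra-base")

inputs = tokenizer("Willkommen in Berlin!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden state into a single sentence vector (768 dims for base).
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # torch.Size([1, 768])
```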
## Overview
**Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf)
**Architecture:** ELECTRA base (discriminator)
**Language:** German
## Performance
```
GermEval18 Coarse: 76.02
GermEval18 Fine: 42.22
GermEval14: 86.02
```
See also:
deepset/gbert-base
deepset/gbert-large
deepset/gelectra-base
deepset/gelectra-large
deepset/gelectra-base-generator
deepset/gelectra-large-generator
## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Stefan Schweter: `stefan [at] schweter.eu`
Timo Möller: `timo.moeller [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "de", "license": "mit", "datasets": ["wikipedia", "OPUS", "OpenLegalData"]}
|
deepset/gelectra-base
| null |
[
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"de",
"dataset:wikipedia",
"dataset:OPUS",
"dataset:OpenLegalData",
"arxiv:2010.10906",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.10906"
] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #electra #pretraining #de #dataset-wikipedia #dataset-OPUS #dataset-OpenLegalData #arxiv-2010.10906 #license-mit #endpoints_compatible #has_space #region-us
|
# German ELECTRA base
Released Oct 2020, this is a German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model. Our evaluation suggests that this model is somewhat undertrained. For best performance from a base-sized model, we recommend deepset/gbert-base.
## Overview
Paper: here
Architecture: ELECTRA base (discriminator)
Language: German
## Performance
See also:
deepset/gbert-base
deepset/gbert-large
deepset/gelectra-base
deepset/gelectra-large
deepset/gelectra-base-generator
deepset/gelectra-large-generator
## Authors
Branden Chan: 'URL [at] URL'
Stefan Schweter: 'stefan [at] URL'
Timo Möller: 'timo.moeller [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Slack | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# German ELECTRA base\n\nReleased, Oct 2020, this is a German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka \"bert-base-german-cased\") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model. Our evaluation suggests that this model is somewhat undertrained. For best performance from a base sized model, we recommend deepset/gbert-base",
"## Overview \nPaper: here \nArchitecture: ELECTRA base (discriminator)\nLanguage: German",
"## Performance \n\n\nSee also: \ndeepset/gbert-base\ndeepset/gbert-large\ndeepset/gelectra-base\ndeepset/gelectra-large\ndeepset/gelectra-base-generator\ndeepset/gelectra-large-generator",
"## Authors\nBranden Chan: 'URL [at] URL'\nStefan Schweter: 'stefan [at] URL'\nTimo Möller: 'timo.moeller [at] URL'",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Slack | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #tf #electra #pretraining #de #dataset-wikipedia #dataset-OPUS #dataset-OpenLegalData #arxiv-2010.10906 #license-mit #endpoints_compatible #has_space #region-us \n",
"# German ELECTRA base\n\nReleased, Oct 2020, this is a German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka \"bert-base-german-cased\") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model. Our evaluation suggests that this model is somewhat undertrained. For best performance from a base sized model, we recommend deepset/gbert-base",
"## Overview \nPaper: here \nArchitecture: ELECTRA base (discriminator)\nLanguage: German",
"## Performance \n\n\nSee also: \ndeepset/gbert-base\ndeepset/gbert-large\ndeepset/gelectra-base\ndeepset/gelectra-large\ndeepset/gelectra-base-generator\ndeepset/gelectra-large-generator",
"## Authors\nBranden Chan: 'URL [at] URL'\nStefan Schweter: 'stefan [at] URL'\nTimo Möller: 'timo.moeller [at] URL'",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Slack | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
fill-mask
|
transformers
|
# German ELECTRA large generator
Released Oct 2020, this is the generator component of the German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model.
The generator is useful for performing masking experiments. If you are looking for a regular language model for embedding extraction, or downstream tasks like NER, classification or QA, please use deepset/gelectra-large.
## Overview
**Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf)
**Architecture:** ELECTRA large (generator)
**Language:** German
## Performance
```
GermEval18 Coarse: 80.70
GermEval18 Fine: 55.16
GermEval14: 88.95
```
See also:
deepset/gbert-base
deepset/gbert-large
deepset/gelectra-base
deepset/gelectra-large
deepset/gelectra-base-generator
deepset/gelectra-large-generator
## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Stefan Schweter: `stefan [at] schweter.eu`
Timo Möller: `timo.moeller [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "de", "license": "mit", "datasets": ["wikipedia", "OPUS", "OpenLegalData", "oscar"]}
|
deepset/gelectra-large-generator
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"electra",
"fill-mask",
"de",
"dataset:wikipedia",
"dataset:OPUS",
"dataset:OpenLegalData",
"dataset:oscar",
"arxiv:2010.10906",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.10906"
] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #safetensors #electra #fill-mask #de #dataset-wikipedia #dataset-OPUS #dataset-OpenLegalData #dataset-oscar #arxiv-2010.10906 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# German ELECTRA large generator
Released Oct 2020, this is the generator component of the German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model.
The generator is useful for performing masking experiments. If you are looking for a regular language model for embedding extraction, or downstream tasks like NER, classification or QA, please use deepset/gelectra-large.
## Overview
Paper: here
Architecture: ELECTRA large (generator)
Language: German
## Performance
See also:
deepset/gbert-base
deepset/gbert-large
deepset/gelectra-base
deepset/gelectra-large
deepset/gelectra-base-generator
deepset/gelectra-large-generator
## Authors
Branden Chan: 'URL [at] URL'
Stefan Schweter: 'stefan [at] URL'
Timo Möller: 'timo.moeller [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Slack | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# German ELECTRA large generator\n\nReleased, Oct 2020, this is the generator component of the German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka \"bert-base-german-cased\") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model.\n\nThe generator is useful for performing masking experiments. If you are looking for a regular language model for embedding extraction, or downstream tasks like NER, classification or QA, please use deepset/gelectra-large.",
"## Overview \nPaper: here \nArchitecture: ELECTRA large (generator) \nLanguage: German",
"## Performance \n\n\nSee also: \ndeepset/gbert-base\ndeepset/gbert-large\ndeepset/gelectra-base\ndeepset/gelectra-large\ndeepset/gelectra-base-generator\ndeepset/gelectra-large-generator",
"## Authors\nBranden Chan: 'URL [at] URL'\nStefan Schweter: 'stefan [at] URL'\nTimo Möller: 'timo.moeller [at] URL'",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Slack | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #electra #fill-mask #de #dataset-wikipedia #dataset-OPUS #dataset-OpenLegalData #dataset-oscar #arxiv-2010.10906 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# German ELECTRA large generator\n\nReleased, Oct 2020, this is the generator component of the German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka \"bert-base-german-cased\") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model.\n\nThe generator is useful for performing masking experiments. If you are looking for a regular language model for embedding extraction, or downstream tasks like NER, classification or QA, please use deepset/gelectra-large.",
"## Overview \nPaper: here \nArchitecture: ELECTRA large (generator) \nLanguage: German",
"## Performance \n\n\nSee also: \ndeepset/gbert-base\ndeepset/gbert-large\ndeepset/gelectra-base\ndeepset/gelectra-large\ndeepset/gelectra-base-generator\ndeepset/gelectra-large-generator",
"## Authors\nBranden Chan: 'URL [at] URL'\nStefan Schweter: 'stefan [at] URL'\nTimo Möller: 'timo.moeller [at] URL'",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Slack | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|

## Overview
**Language model:** gelectra-large-germanquad
**Language:** German
**Training data:** GermanQuAD train set (~ 12MB)
**Eval data:** GermanQuAD test set (~ 5MB)
**Infrastructure**: 1x V100 GPU
**Published**: Apr 21st, 2021
## Details
- We trained a German question answering model with a gelectra-large model as its basis.
- The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published [online](https://deepset.ai/germanquad).
- The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and 2204·3−76 = 6536 answers, because we removed 76 wrong answers.
See https://deepset.ai/germanquad for more details and dataset download in SQuAD format.
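For contexts longer than the 384-token window used in training, the `question-answering` pipeline can chunk the input. A hedged sketch; `max_seq_len` mirrors the hyperparameter below, while `doc_stride` and the example text are our own choices:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/gelectra-large-germanquad")

long_context = " ".join(["deepset ist ein NLP-Unternehmen mit Hauptsitz in Berlin."] * 100)
answer = qa(
    question="Wo ist der Hauptsitz von deepset?",
    context=long_context,
    max_seq_len=384,  # matches the training setting below
    doc_stride=128,   # overlap between chunks; our illustrative choice
)
print(answer["answer"])
```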
## Hyperparameters
```
batch_size = 24
n_epochs = 2
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
```
## Performance
We evaluated the extractive question answering performance on our GermanQuAD test set.
Model types and training data are included in the model name.
For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset.
The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on [GermanQuAD](https://deepset.ai/germanquad).
The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth.

## Authors
**Timo Möller:** timo.moeller@deepset.ai
**Julian Risch:** julian.risch@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/haystack-logo-colored.svg" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://huggingface.co/spaces/deepset/README/resolve/main/deepset-logo-colored.svg" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "de", "license": "mit", "tags": ["exbert"], "datasets": ["deepset/germanquad"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg"}
|
deepset/gelectra-large-germanquad
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"electra",
"question-answering",
"exbert",
"de",
"dataset:deepset/germanquad",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #safetensors #electra #question-answering #exbert #de #dataset-deepset/germanquad #license-mit #endpoints_compatible #has_space #region-us
|
!bert_image
## Overview
Language model: gelectra-large-germanquad
Language: German
Training data: GermanQuAD train set (~ 12MB)
Eval data: GermanQuAD test set (~ 5MB)
Infrastructure: 1x V100 GPU
Published: Apr 21st, 2021
## Details
- We trained a German question answering model with a gelectra-large model as its basis.
- The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published online.
- The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and 2204·3−76 = 6536 answers, because we removed 76 wrong answers.
See URL for more details and dataset download in SQuAD format.
## Hyperparameters
## Performance
We evaluated the extractive question answering performance on our GermanQuAD test set.
Model types and training data are included in the model name.
For finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset.
The GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on GermanQuAD.
The human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth.
!performancetable
## Authors
Timo Möller: timo.moeller@URL
Julian Risch: URL@URL
Malte Pietsch: malte.pietsch@URL
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
</div>
deepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- Distilled roberta-base-squad2 (aka "tinyroberta-squad2")
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="URL repo and <strong><a href="URL">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="URL community open to everyone!</a></strong></p>
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"## Overview\nLanguage model: gelectra-large-germanquad \nLanguage: German \nTraining data: GermanQuAD train set (~ 12MB) \nEval data: GermanQuAD test set (~ 5MB) \nInfrastructure: 1x V100 GPU \nPublished: Apr 21st, 2021",
"## Details\n- We trained a German question answering model with a gelectra-large model as its basis.\n- The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published online.\n- The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and with 2204·3−76 = 6536 answers, because we removed 76 wrong answers.\n\nSee URL for more details and dataset download in SQuAD format.",
"## Hyperparameters",
"## Performance\nWe evaluated the extractive question answering performance on our GermanQuAD test set.\nModel types and training data are included in the model name. \nFor finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset.\nThe GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on GermanQuAD. \nThe human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth.\n!performancetable",
"## Authors\n Timo Möller: timo.moeller@URL \n Julian Risch: URL@URL \n Malte Pietsch: malte.pietsch@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #electra #question-answering #exbert #de #dataset-deepset/germanquad #license-mit #endpoints_compatible #has_space #region-us \n",
"## Overview\nLanguage model: gelectra-large-germanquad \nLanguage: German \nTraining data: GermanQuAD train set (~ 12MB) \nEval data: GermanQuAD test set (~ 5MB) \nInfrastructure: 1x V100 GPU \nPublished: Apr 21st, 2021",
"## Details\n- We trained a German question answering model with a gelectra-large model as its basis.\n- The dataset is GermanQuAD, a new, German language dataset, which we hand-annotated and published online.\n- The training dataset is one-way annotated and contains 11518 questions and 11518 answers, while the test dataset is three-way annotated so that there are 2204 questions and with 2204·3−76 = 6536 answers, because we removed 76 wrong answers.\n\nSee URL for more details and dataset download in SQuAD format.",
"## Hyperparameters",
"## Performance\nWe evaluated the extractive question answering performance on our GermanQuAD test set.\nModel types and training data are included in the model name. \nFor finetuning XLM-Roberta, we use the English SQuAD v2.0 dataset.\nThe GELECTRA models are warm started on the German translation of SQuAD v1.1 and finetuned on GermanQuAD. \nThe human baseline was computed for the 3-way test set by taking one answer as prediction and the other two as ground truth.\n!performancetable",
"## Authors\n Timo Möller: timo.moeller@URL \n Julian Risch: URL@URL \n Malte Pietsch: malte.pietsch@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
null |
transformers
|
# German ELECTRA large
Released Oct 2020, this is a German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our [paper](https://arxiv.org/pdf/2010.10906.pdf), we outline the steps taken to train our model and show that this is the state-of-the-art German language model.
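For downstream tasks such as the GermEval benchmarks listed under Performance, the checkpoint is loaded with a task-specific head. A minimal sketch; the binary-label setup is illustrative only:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepset/gelectra-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "deepset/gelectra-large",
    num_labels=2,  # e.g. offensive vs. non-offensive for GermEval18 coarse
)
# From here, fine-tune with the transformers Trainer or a custom training loop.
```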
## Overview
**Paper:** [here](https://arxiv.org/pdf/2010.10906.pdf)
**Architecture:** ELECTRA large (discriminator)
**Language:** German
## Performance
```
GermEval18 Coarse: 80.70
GermEval18 Fine: 55.16
GermEval14: 88.95
```
See also:
deepset/gbert-base
deepset/gbert-large
deepset/gelectra-base
deepset/gelectra-large
deepset/gelectra-base-generator
deepset/gelectra-large-generator
## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Stefan Schweter: `stefan [at] schweter.eu`
Timo Möller: `timo.moeller [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "de", "license": "mit", "datasets": ["wikipedia", "OPUS", "OpenLegalData", "oscar"]}
|
deepset/gelectra-large
| null |
[
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"de",
"dataset:wikipedia",
"dataset:OPUS",
"dataset:OpenLegalData",
"dataset:oscar",
"arxiv:2010.10906",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.10906"
] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #electra #pretraining #de #dataset-wikipedia #dataset-OPUS #dataset-OpenLegalData #dataset-oscar #arxiv-2010.10906 #license-mit #endpoints_compatible #has_space #region-us
|
# German ELECTRA large
Released in October 2020, this is a German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka "bert-base-german-cased") and the dbmdz BERT (aka "bert-base-german-dbmdz-cased"). In our paper, we outline the steps taken to train the model and show that it is the state-of-the-art German language model.
## Overview
Paper: here
Architecture: ELECTRA large (discriminator)
Language: German
## Performance
See also:
deepset/gbert-base
deepset/gbert-large
deepset/gelectra-base
deepset/gelectra-large
deepset/gelectra-base-generator
deepset/gelectra-large-generator
## Authors
Branden Chan: 'URL [at] URL'
Stefan Schweter: 'stefan [at] URL'
Timo Möller: 'timo.moeller [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# German ELECTRA large\n\nReleased, Oct 2020, this is a German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka \"bert-base-german-cased\") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model and show that this is the state of the art German language model.",
"## Overview \nPaper: here \nArchitecture: ELECTRA large (discriminator) \nLanguage: German",
"## Performance \n\n\nSee also: \ndeepset/gbert-base\ndeepset/gbert-large\ndeepset/gelectra-base\ndeepset/gelectra-large\ndeepset/gelectra-base-generator\ndeepset/gelectra-large-generator",
"## Authors\nBranden Chan: 'URL [at] URL' \nStefan Schweter: 'stefan [at] URL' \nTimo Möller: 'timo.moeller [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #tf #electra #pretraining #de #dataset-wikipedia #dataset-OPUS #dataset-OpenLegalData #dataset-oscar #arxiv-2010.10906 #license-mit #endpoints_compatible #has_space #region-us \n",
"# German ELECTRA large\n\nReleased, Oct 2020, this is a German ELECTRA language model trained collaboratively by the makers of the original German BERT (aka \"bert-base-german-cased\") and the dbmdz BERT (aka bert-base-german-dbmdz-cased). In our paper, we outline the steps taken to train our model and show that this is the state of the art German language model.",
"## Overview \nPaper: here \nArchitecture: ELECTRA large (discriminator) \nLanguage: German",
"## Performance \n\n\nSee also: \ndeepset/gbert-base\ndeepset/gbert-large\ndeepset/gelectra-base\ndeepset/gelectra-large\ndeepset/gelectra-base-generator\ndeepset/gelectra-large-generator",
"## Authors\nBranden Chan: 'URL [at] URL' \nStefan Schweter: 'stefan [at] URL' \nTimo Möller: 'timo.moeller [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|
# MiniLM-L12-H384-uncased for QA
## Overview
**Language model:** microsoft/MiniLM-L12-H384-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See an [example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/01_basic_qa_pipeline)
**Infrastructure**: 1x Tesla v100
## Hyperparameters
```
seed=42
batch_size = 12
n_epochs = 4
base_LM_model = "microsoft/MiniLM-L12-H384-uncased"
max_seq_len = 384
learning_rate = 4e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
grad_acc_steps=4
```
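Two notes on these settings: with `grad_acc_steps = 4`, the effective batch size is 12 × 4 = 48, and the LinearWarmup schedule is a linear ramp-up over the first 20% of steps followed by linear decay. A minimal sketch using transformers' `get_linear_schedule_with_warmup` (the step counts are hypothetical):
```python
import torch
from transformers import get_linear_schedule_with_warmup

total_steps = 10_000                   # hypothetical number of optimization steps
warmup_steps = int(0.2 * total_steps)  # warmup_proportion = 0.2

optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=4e-5)
scheduler = get_linear_schedule_with_warmup(optimizer, warmup_steps, total_steps)

for step in range(total_steps):
    optimizer.step()   # LR rises linearly to 4e-5 over the first 2,000 steps,
    scheduler.step()   # then decays linearly back to 0
```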
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 76.13071675229513,
"f1": 79.49786500219953,
"total": 11873,
"HasAns_exact": 78.35695006747639,
"HasAns_f1": 85.10090269418276,
"HasAns_total": 5928,
"NoAns_exact": 73.91084945332211,
"NoAns_f1": 73.91084945332211,
"NoAns_total": 5945
```
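To make these numbers concrete: "exact" and "f1" are computed per question after answer normalization (lowercasing, stripping punctuation and articles, collapsing whitespace) and then averaged. A minimal re-implementation sketch of the two metrics (the official script additionally handles unanswerable questions via empty-string answers):
```python
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace (official SQuAD rules)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, ground_truth: str) -> int:
    return int(normalize_answer(prediction) == normalize_answer(ground_truth))

def f1_score(prediction: str, ground_truth: str) -> float:
    pred_tokens = normalize_answer(prediction).split()
    gt_tokens = normalize_answer(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gt_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gt_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))  # 1: articles are normalized away
print(f1_score("the Eiffel Tower", "Eiffel Tower"))     # 1.0
```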
## Usage
### In Haystack
For QA at scale (i.e. over many documents instead of a single paragraph), you can also load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
# Imports assume Haystack v1.x, where readers live in haystack.nodes
from haystack.nodes import FARMReader, TransformersReader

reader = FARMReader(model_name_or_path="deepset/minilm-uncased-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/minilm-uncased-squad2", tokenizer="deepset/minilm-uncased-squad2")
```
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/minilm-uncased-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
**Vaishali Pal:** vaishali.pal@deepset.ai
**Branden Chan:** branden.chan@deepset.ai
**Timo Möller:** timo.moeller@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
**Tanay Soni:** tanay.soni@deepset.ai
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/minilm-uncased-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 76.1921, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmViZTQ3YTBjYTc3ZDQzYmI1Mzk3MTAxM2MzNjdmMTc0MWY4Yzg2MWU3NGQ1MDJhZWI2NzY0YWYxZTY2OTgzMiIsInZlcnNpb24iOjF9.s4XCRs_pvW__LJ57dpXAEHD6NRsQ3XaFrM1xaguS6oUs5fCN77wNNc97scnfoPXT18A8RAn0cLTNivfxZm0oBA"}, {"type": "f1", "value": 79.5483, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmJlYTIyOTg2NjMyMzg4NzNlNGIzMTY2NDVkMjg0ODdiOWRmYjVkZDYyZjBjNWNiNTBhNjcwOWUzMDM4ZWJiZiIsInZlcnNpb24iOjF9.gxpwIBBA3_5xPi-TaZcqWNnGgCiHzxaUNgrS2jucxoVWGxhBtnPdwKVCxLleQoDDZenAXB3Yh71zMP3xTSeHCw"}]}]}]}
|
deepset/minilm-uncased-squad2
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #safetensors #bert #question-answering #en #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us
|
# MiniLM-L12-H384-uncased for QA
## Overview
Language model: microsoft/MiniLM-L12-H384-uncased
Language: English
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0
Code: See an example QA pipeline on Haystack
Infrastructure: 1x Tesla v100
## Hyperparameters
## Performance
Evaluated on the SQuAD 2.0 dev set with the official eval script.
## Usage
### In Haystack
For QA at scale (i.e. over many documents instead of a single paragraph), you can also load the model in Haystack:
### In Transformers
## Authors
Vaishali Pal: URL@URL
Branden Chan: URL@URL
Timo Möller: timo.moeller@URL
Malte Pietsch: malte.pietsch@URL
Tanay Soni: URL@URL
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
</div>
deepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- Distilled roberta-base-squad2 (aka "tinyroberta-squad2")
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="URL repo and <strong><a href="URL">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="URL community open to everyone!</a></strong></p>
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# MiniLM-L12-H384-uncased for QA",
"## Overview\nLanguage model: microsoft/MiniLM-L12-H384-uncased \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nCode: See an example QA pipeline on Haystack\nInfrastructure: 1x Tesla v100",
"## Hyperparameters",
"## Performance\nEvaluated on the SQuAD 2.0 dev set with the official eval script.",
"## Usage",
"### In Haystack\nFor doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in Haystack:",
"### In Transformers",
"## Authors\nVaishali Pal: URL@URL \nBranden Chan: URL@URL \nTimo Möller: timo.moeller@URL \nMalte Pietsch: malte.pietsch@URL \nTanay Soni: URL@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #bert #question-answering #en #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# MiniLM-L12-H384-uncased for QA",
"## Overview\nLanguage model: microsoft/MiniLM-L12-H384-uncased \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nCode: See an example QA pipeline on Haystack\nInfrastructure: 1x Tesla v100",
"## Hyperparameters",
"## Performance\nEvaluated on the SQuAD 2.0 dev set with the official eval script.",
"## Usage",
"### In Haystack\nFor doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in Haystack:",
"### In Transformers",
"## Authors\nVaishali Pal: URL@URL \nBranden Chan: URL@URL \nTimo Möller: timo.moeller@URL \nMalte Pietsch: malte.pietsch@URL \nTanay Soni: URL@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
feature-extraction
|
transformers
|
This language model was trained using sentence_transformers (https://github.com/UKPLab/sentence-transformers).
It started from bert-base-nli-stsb-mean-tokens and was then further trained on the Quora question deduplication dataset (https://www.kaggle.com/c/quora-question-pairs).
See train_script.py for the training script.
Below is the performance over the course of training:
epoch,steps,cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
0,1000,0.5944576426835938,0.6010801382777033,0.5942803776859142,0.5934485776801595,0.5939676679774666,0.593162725602328,0.5905591590826669,0.5921674789994058
0,2000,0.6404080440207146,0.6416811632113405,0.6384419354012121,0.6352050423100778,0.6379917744471867,0.6347884067391001,0.6410544760582826,0.6379252046791412
0,3000,0.6710168301884945,0.6676529324662036,0.6660195209784969,0.6618423144808695,0.6656461098096684,0.6615366331956389,0.6724401903484759,0.666073727723655
0,4000,0.6886373265097949,0.6808948140300153,0.67907655686838,0.6714218133850957,0.6786809551564443,0.6711577956884357,0.6926435869763303,0.68190855298609
0,5000,0.6991409753700026,0.6919630610321864,0.6991041519437052,0.6868961486499775,0.6987076032270729,0.6865385550504007,0.7035518148330993,0.6916275246101342
0,6000,0.7120367327025509,0.6975005265298305,0.7065567493967201,0.6922375503495235,0.7060005509843024,0.6916475765570651,0.7147094303373102,0.6981390706722722
0,7000,0.7254672394728687,0.7130118465900485,0.7261844956277705,0.7086213543110718,0.7257479964972307,0.7079315661881832,0.728729909455115,0.7122743793160531
0,8000,0.7402421930101399,0.7216774208330149,0.7367901914441078,0.7166256588352043,0.7362607046874481,0.7158881916281887,0.7433902441373252,0.7220998491980078
0,9000,0.7381005358120434,0.7197216844469877,0.7343228719349923,0.7139462687943793,0.7345247569255238,0.7145106206467152,0.7421843672419275,0.720686853053079
0,10000,0.7465436564646095,0.7260327107480364,0.7467524239596304,0.7230195666847953,0.7467721566237211,0.7231367593302213,0.749792199122442,0.7263143296580317
0,11000,0.7521805421706547,0.7323771570146701,0.7530672061250105,0.729223203496722,0.7530616532823367,0.7293818369675622,0.7552399002305836,0.7320808333541338
0,12000,0.7579359969644401,0.7340677616737238,0.7570017235719905,0.7305965412825544,0.7570601853520393,0.730718189957289,0.7611254136080384,0.7351501229591327
0,-1,0.7573407371218097,0.7329952035782198,0.755595312163209,0.7291445551777086,0.7557737117990928,0.7295404703700227,0.7607276219361719,0.7342415455980179
1,1000,0.7619907683805341,0.7374667949734767,0.7629820517114324,0.7330364216044966,0.7628369522755882,0.7331912674450544,0.7658583898073758,0.7381503446695727
1,2000,0.7618972640071228,0.7362151058969478,0.764582212425539,0.7335856230046062,0.7643125513700815,0.7334501607097152,0.7652852805583232,0.7369104639809163
1,3000,0.7687362955240467,0.7404674623181671,0.7708304819979073,0.7380959815601529,0.7707835692712482,0.7379796800453193,0.772074854759756,0.7414513460702766
1,4000,0.7685047787908202,0.7403088288815168,0.7703522257474043,0.7379787888808298,0.7701221475099808,0.7377898546753812,0.7713755359045312,0.7409415801952219
1,5000,0.7696438109797803,0.7410393893292365,0.773270389327895,0.7392953127251652,0.7729880866533291,0.7389853982789335,0.7726236305835863,0.7416278035580925
1,6000,0.7749538363837081,0.7436499342062207,0.774879168058157,0.7401827241766746,0.7745754601165837,0.739763415043146,0.7788801166152383,0.7446249060022169
1,7000,0.7794560817870597,0.7480970176267153,0.7803506944510302,0.7453305130502859,0.7799867949176531,0.7447100155494814,0.7828208193123926,0.7486740690324809
1,8000,0.7855844359073243,0.7496742172376921,0.7828816645965887,0.747176409009761,0.7827584875358967,0.7471037762845532,0.7879159073496309,0.7507349669102151
1,9000,0.7844110753729492,0.7507746252693759,0.7847208586489722,0.7485172180290892,0.7846408087474059,0.748491818820158,0.7872061334510225,0.7514470349769437
1,10000,0.7881311227435004,0.7530048509727403,0.7886917756879734,0.7508018068765787,0.7883332502188707,0.7505037008187275,0.7910707228932787,0.7537200382362567
1,11000,0.7883300109606874,0.7513494487126553,0.7879329130497712,0.749818368689255,0.7876525616593218,0.7494872882301785,0.7911454269743292,0.7522843165147303
1,12000,0.7853334933336618,0.7516809747712728,0.7893895316714998,0.749780492728257,0.7890075986655403,0.7494079715118533,0.7885959664070629,0.7523827940133203
1,-1,0.7887529238148887,0.7534076729932393,0.7896864404801204,0.7513080079201105,0.7894077512343298,0.7510009899066772,0.7919617393746149,0.7542173273241598
2,1000,0.7919209063905188,0.7550167329363414,0.7917464066515253,0.7523043685293455,0.7914371703225378,0.7520285423781206,0.7950297421784158,0.7562599556207076
2,2000,0.7924507768792486,0.7542908512484463,0.7934519001953887,0.7517491515010692,0.7931885648751081,0.751521004535999,0.7951637852162545,0.7551495215642072
2,3000,0.7937606244038364,0.755599577136169,0.7933633347508111,0.7527922999916203,0.7931581019714242,0.7527132061436363,0.797275652800117,0.7569827180764233
2,4000,0.7938389298721445,0.7578716892320315,0.7963783770097079,0.7555928931784702,0.796150381773947,0.7555438771581088,0.7972911620482322,0.759178632650707
2,5000,0.7935330563129844,0.7551129824372304,0.7970775059297484,0.7527285792572385,0.7967359830546507,0.7524478515463257,0.7966395126138969,0.756319220359678
2,6000,0.7929852776759999,0.7525490026774382,0.7952484474454824,0.7503695753216607,0.7950784132079611,0.7503677929234961,0.7956152082976395,0.7535275392698093
2,7000,0.794956504054517,0.756119591765251,0.7982025041673655,0.7532521587180684,0.7980261618830962,0.7532107179960499,0.7983222918908033,0.7571226363678287
2,8000,0.7934568432535339,0.7538336661192452,0.797015698241178,0.7514773358161916,0.7968076980315735,0.7513458838811067,0.7960694134685949,0.754143803399873
2,9000,0.7970040626682157,0.7576497805894974,0.7987855332059015,0.7550996144509958,0.7984693921009676,0.7548260162973456,0.7999509314900626,0.758347143906916
2,10000,0.7979442987735523,0.7585338500791028,0.8018677081664496,0.7557412777548302,0.8015397301245205,0.7552916678886369,0.8007921348414564,0.7589772216225288
2,11000,0.7985519561040211,0.7579986850302035,0.8021236875460913,0.7555826443181872,0.8019861620475348,0.7553763317660516,0.8009230128897853,0.7586541619907702
2,12000,0.7986842143860736,0.7599570950134775,0.8029131054823838,0.7577678644678973,0.8027922603736795,0.7575152095990927,0.8020896747930555,0.7608540869254408
2,-1,0.7994135319568432,0.7596286881516635,0.8022087183675333,0.7570593611974978,0.8020218401019292,0.7567291719729909,0.8026346812258125,0.7603928913647044
3,1000,0.7985505039929134,0.7592588405681144,0.8023296699449267,0.7569345933969436,0.8023622066009718,0.7570237132696928,0.8013054275981851,0.759643838536062
3,2000,0.7995482191699455,0.759205368623176,0.8026859405513612,0.7565709841358819,0.8024845263367439,0.7562920388231202,0.8021318586127523,0.7596496313300967
3,3000,0.7991070423195897,0.7582027696555826,0.8016352550470427,0.7555585819429662,0.8014268261947898,0.7551838327642736,0.8013136081494014,0.7584429477727118
3,4000,0.7999188836884763,0.7586764419322649,0.802987646214278,0.7561111254802977,0.8026549791861386,0.7556463650525692,0.8024068858366156,0.7591238238715613
3,5000,0.7988075932525881,0.7583533823004922,0.8019498750207454,0.755792967372457,0.8016459824731964,0.7553834613587099,0.8015528810821693,0.7589527136833425
3,6000,0.8003341798460688,0.7585432077405799,0.8032464035902267,0.7563722467405277,0.8028695045742804,0.7557626665682309,0.8027937010871594,0.7590404967573696
3,7000,0.799187592384933,0.7579358555659604,0.8028413548398412,0.7555875459131398,0.8025187078191003,0.7551196665011402,0.8018680475193432,0.7585565756912578
3,8000,0.797725037202641,0.757439012042047,0.802048241301358,0.7548888458326453,0.8017608103042271,0.7544606246736175,0.8005479449399782,0.758037452190282
3,9000,0.7990232649360067,0.7573703896772077,0.8021375332910405,0.754873027155089,0.8018733796679427,0.7545680141630304,0.8016400687760605,0.7579461042843499
3,10000,0.7994934439260372,0.758368978248884,0.8035693504115055,0.75619400688862,0.8032990505007025,0.7559016935896375,0.8022819185772518,0.7589558328445544
3,11000,0.8002954591825011,0.758710753096932,0.8043310859792212,0.7566387152306694,0.8040865016706966,0.7564221538891368,0.8030873114870971,0.7592722085543488
3,12000,0.8003726616196549,0.7588056657991931,0.8044000317617518,0.7566146528909147,0.8041705213966136,0.7563419459362758,0.8031760015719815,0.7593194421057111
3,-1,0.8004926728141455,0.7587192194882135,0.8043340929890026,0.756546030526114,0.8041028559910275,0.7563103085106637,0.8032542493776693,0.7592325501951863
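The cosine_spearman columns above are Spearman correlations between the cosine similarity of the two sentence embeddings and the gold duplicate labels. A minimal sketch of computing such a score, assuming the checkpoint loads directly via sentence-transformers (the example pairs and labels are hypothetical):
```python
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("deepset/quora_dedup_bert_base")  # assumes an ST-compatible checkpoint

pairs = [("How do I learn Python?", "What is the best way to learn Python?"),
         ("How do I learn Python?", "What is the capital of France?")]
gold = [1.0, 0.0]  # hypothetical duplicate labels

emb_a = model.encode([a for a, _ in pairs], convert_to_tensor=True)
emb_b = model.encode([b for _, b in pairs], convert_to_tensor=True)
cosine = util.cos_sim(emb_a, emb_b).diagonal().cpu().numpy()

print(spearmanr(cosine, gold).correlation)
```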
|
{"license": "apache-2.0"}
|
deepset/quora_dedup_bert_base
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"feature-extraction",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #safetensors #bert #feature-extraction #license-apache-2.0 #endpoints_compatible #region-us
|
This language model was trained using sentence_transformers (URL
It started from bert-base-nli-stsb-mean-tokens and was then further trained on the Quora question deduplication dataset (URL
See train_script.py for the training script.
Below is the performance over the course of training:
epoch,steps,cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
0,1000,0.5944576426835938,0.6010801382777033,0.5942803776859142,0.5934485776801595,0.5939676679774666,0.593162725602328,0.5905591590826669,0.5921674789994058
0,2000,0.6404080440207146,0.6416811632113405,0.6384419354012121,0.6352050423100778,0.6379917744471867,0.6347884067391001,0.6410544760582826,0.6379252046791412
0,3000,0.6710168301884945,0.6676529324662036,0.6660195209784969,0.6618423144808695,0.6656461098096684,0.6615366331956389,0.6724401903484759,0.666073727723655
0,4000,0.6886373265097949,0.6808948140300153,0.67907655686838,0.6714218133850957,0.6786809551564443,0.6711577956884357,0.6926435869763303,0.68190855298609
0,5000,0.6991409753700026,0.6919630610321864,0.6991041519437052,0.6868961486499775,0.6987076032270729,0.6865385550504007,0.7035518148330993,0.6916275246101342
0,6000,0.7120367327025509,0.6975005265298305,0.7065567493967201,0.6922375503495235,0.7060005509843024,0.6916475765570651,0.7147094303373102,0.6981390706722722
0,7000,0.7254672394728687,0.7130118465900485,0.7261844956277705,0.7086213543110718,0.7257479964972307,0.7079315661881832,0.728729909455115,0.7122743793160531
0,8000,0.7402421930101399,0.7216774208330149,0.7367901914441078,0.7166256588352043,0.7362607046874481,0.7158881916281887,0.7433902441373252,0.7220998491980078
0,9000,0.7381005358120434,0.7197216844469877,0.7343228719349923,0.7139462687943793,0.7345247569255238,0.7145106206467152,0.7421843672419275,0.720686853053079
0,10000,0.7465436564646095,0.7260327107480364,0.7467524239596304,0.7230195666847953,0.7467721566237211,0.7231367593302213,0.749792199122442,0.7263143296580317
0,11000,0.7521805421706547,0.7323771570146701,0.7530672061250105,0.729223203496722,0.7530616532823367,0.7293818369675622,0.7552399002305836,0.7320808333541338
0,12000,0.7579359969644401,0.7340677616737238,0.7570017235719905,0.7305965412825544,0.7570601853520393,0.730718189957289,0.7611254136080384,0.7351501229591327
0,-1,0.7573407371218097,0.7329952035782198,0.755595312163209,0.7291445551777086,0.7557737117990928,0.7295404703700227,0.7607276219361719,0.7342415455980179
1,1000,0.7619907683805341,0.7374667949734767,0.7629820517114324,0.7330364216044966,0.7628369522755882,0.7331912674450544,0.7658583898073758,0.7381503446695727
1,2000,0.7618972640071228,0.7362151058969478,0.764582212425539,0.7335856230046062,0.7643125513700815,0.7334501607097152,0.7652852805583232,0.7369104639809163
1,3000,0.7687362955240467,0.7404674623181671,0.7708304819979073,0.7380959815601529,0.7707835692712482,0.7379796800453193,0.772074854759756,0.7414513460702766
1,4000,0.7685047787908202,0.7403088288815168,0.7703522257474043,0.7379787888808298,0.7701221475099808,0.7377898546753812,0.7713755359045312,0.7409415801952219
1,5000,0.7696438109797803,0.7410393893292365,0.773270389327895,0.7392953127251652,0.7729880866533291,0.7389853982789335,0.7726236305835863,0.7416278035580925
1,6000,0.7749538363837081,0.7436499342062207,0.774879168058157,0.7401827241766746,0.7745754601165837,0.739763415043146,0.7788801166152383,0.7446249060022169
1,7000,0.7794560817870597,0.7480970176267153,0.7803506944510302,0.7453305130502859,0.7799867949176531,0.7447100155494814,0.7828208193123926,0.7486740690324809
1,8000,0.7855844359073243,0.7496742172376921,0.7828816645965887,0.747176409009761,0.7827584875358967,0.7471037762845532,0.7879159073496309,0.7507349669102151
1,9000,0.7844110753729492,0.7507746252693759,0.7847208586489722,0.7485172180290892,0.7846408087474059,0.748491818820158,0.7872061334510225,0.7514470349769437
1,10000,0.7881311227435004,0.7530048509727403,0.7886917756879734,0.7508018068765787,0.7883332502188707,0.7505037008187275,0.7910707228932787,0.7537200382362567
1,11000,0.7883300109606874,0.7513494487126553,0.7879329130497712,0.749818368689255,0.7876525616593218,0.7494872882301785,0.7911454269743292,0.7522843165147303
1,12000,0.7853334933336618,0.7516809747712728,0.7893895316714998,0.749780492728257,0.7890075986655403,0.7494079715118533,0.7885959664070629,0.7523827940133203
1,-1,0.7887529238148887,0.7534076729932393,0.7896864404801204,0.7513080079201105,0.7894077512343298,0.7510009899066772,0.7919617393746149,0.7542173273241598
2,1000,0.7919209063905188,0.7550167329363414,0.7917464066515253,0.7523043685293455,0.7914371703225378,0.7520285423781206,0.7950297421784158,0.7562599556207076
2,2000,0.7924507768792486,0.7542908512484463,0.7934519001953887,0.7517491515010692,0.7931885648751081,0.751521004535999,0.7951637852162545,0.7551495215642072
2,3000,0.7937606244038364,0.755599577136169,0.7933633347508111,0.7527922999916203,0.7931581019714242,0.7527132061436363,0.797275652800117,0.7569827180764233
2,4000,0.7938389298721445,0.7578716892320315,0.7963783770097079,0.7555928931784702,0.796150381773947,0.7555438771581088,0.7972911620482322,0.759178632650707
2,5000,0.7935330563129844,0.7551129824372304,0.7970775059297484,0.7527285792572385,0.7967359830546507,0.7524478515463257,0.7966395126138969,0.756319220359678
2,6000,0.7929852776759999,0.7525490026774382,0.7952484474454824,0.7503695753216607,0.7950784132079611,0.7503677929234961,0.7956152082976395,0.7535275392698093
2,7000,0.794956504054517,0.756119591765251,0.7982025041673655,0.7532521587180684,0.7980261618830962,0.7532107179960499,0.7983222918908033,0.7571226363678287
2,8000,0.7934568432535339,0.7538336661192452,0.797015698241178,0.7514773358161916,0.7968076980315735,0.7513458838811067,0.7960694134685949,0.754143803399873
2,9000,0.7970040626682157,0.7576497805894974,0.7987855332059015,0.7550996144509958,0.7984693921009676,0.7548260162973456,0.7999509314900626,0.758347143906916
2,10000,0.7979442987735523,0.7585338500791028,0.8018677081664496,0.7557412777548302,0.8015397301245205,0.7552916678886369,0.8007921348414564,0.7589772216225288
2,11000,0.7985519561040211,0.7579986850302035,0.8021236875460913,0.7555826443181872,0.8019861620475348,0.7553763317660516,0.8009230128897853,0.7586541619907702
2,12000,0.7986842143860736,0.7599570950134775,0.8029131054823838,0.7577678644678973,0.8027922603736795,0.7575152095990927,0.8020896747930555,0.7608540869254408
2,-1,0.7994135319568432,0.7596286881516635,0.8022087183675333,0.7570593611974978,0.8020218401019292,0.7567291719729909,0.8026346812258125,0.7603928913647044
3,1000,0.7985505039929134,0.7592588405681144,0.8023296699449267,0.7569345933969436,0.8023622066009718,0.7570237132696928,0.8013054275981851,0.759643838536062
3,2000,0.7995482191699455,0.759205368623176,0.8026859405513612,0.7565709841358819,0.8024845263367439,0.7562920388231202,0.8021318586127523,0.7596496313300967
3,3000,0.7991070423195897,0.7582027696555826,0.8016352550470427,0.7555585819429662,0.8014268261947898,0.7551838327642736,0.8013136081494014,0.7584429477727118
3,4000,0.7999188836884763,0.7586764419322649,0.802987646214278,0.7561111254802977,0.8026549791861386,0.7556463650525692,0.8024068858366156,0.7591238238715613
3,5000,0.7988075932525881,0.7583533823004922,0.8019498750207454,0.755792967372457,0.8016459824731964,0.7553834613587099,0.8015528810821693,0.7589527136833425
3,6000,0.8003341798460688,0.7585432077405799,0.8032464035902267,0.7563722467405277,0.8028695045742804,0.7557626665682309,0.8027937010871594,0.7590404967573696
3,7000,0.799187592384933,0.7579358555659604,0.8028413548398412,0.7555875459131398,0.8025187078191003,0.7551196665011402,0.8018680475193432,0.7585565756912578
3,8000,0.797725037202641,0.757439012042047,0.802048241301358,0.7548888458326453,0.8017608103042271,0.7544606246736175,0.8005479449399782,0.758037452190282
3,9000,0.7990232649360067,0.7573703896772077,0.8021375332910405,0.754873027155089,0.8018733796679427,0.7545680141630304,0.8016400687760605,0.7579461042843499
3,10000,0.7994934439260372,0.758368978248884,0.8035693504115055,0.75619400688862,0.8032990505007025,0.7559016935896375,0.8022819185772518,0.7589558328445544
3,11000,0.8002954591825011,0.758710753096932,0.8043310859792212,0.7566387152306694,0.8040865016706966,0.7564221538891368,0.8030873114870971,0.7592722085543488
3,12000,0.8003726616196549,0.7588056657991931,0.8044000317617518,0.7566146528909147,0.8041705213966136,0.7563419459362758,0.8031760015719815,0.7593194421057111
3,-1,0.8004926728141455,0.7587192194882135,0.8043340929890026,0.756546030526114,0.8041028559910275,0.7563103085106637,0.8032542493776693,0.7592325501951863
|
[] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #bert #feature-extraction #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
question-answering
|
transformers
|
# roberta-base-squad2 for QA on COVID-19
## Overview
**Language model:** deepset/roberta-base-squad2
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** [SQuAD-style CORD-19 annotations from 23rd April](https://github.com/deepset-ai/COVID-QA/blob/master/data/question-answering/200423_covidQA.json)
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/01_basic_qa_pipeline)
**Infrastructure**: Tesla v100
## Hyperparameters
```
batch_size = 24
n_epochs = 3
base_LM_model = "deepset/roberta-base-squad2"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.1
doc_stride = 128
xval_folds = 5
dev_split = 0
no_ans_boost = -100
```
## Performance
5-fold cross-validation on the data set led to the following results:
**Single EM-Scores:** [0.222, 0.123, 0.234, 0.159, 0.158]
**Single F1-Scores:** [0.476, 0.493, 0.599, 0.461, 0.465]
**Single top_3_recall Scores:** [0.827, 0.776, 0.860, 0.771, 0.777]
**XVAL EM:** 0.17890995260663506
**XVAL f1:** 0.49925444207319924
**XVAL top_3_recall:** 0.8021327014218009
This model is the model obtained from the **third** fold of the cross-validation.
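The XVAL numbers are approximately the mean of the five single-fold scores; a quick sanity-check sketch, assuming equal fold weighting (the exact aggregation may weight by fold size, which would explain the small differences):
```python
em_scores = [0.222, 0.123, 0.234, 0.159, 0.158]
f1_scores = [0.476, 0.493, 0.599, 0.461, 0.465]
top3_recall = [0.827, 0.776, 0.860, 0.771, 0.777]

for name, scores in [("EM", em_scores), ("f1", f1_scores), ("top_3_recall", top3_recall)]:
    print(f"XVAL {name}: {sum(scores) / len(scores):.4f}")
# XVAL EM: 0.1792 (reported: 0.1789), XVAL f1: 0.4988 (reported: 0.4993),
# XVAL top_3_recall: 0.8022 (reported: 0.8021)
```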
## Usage
### In Haystack
For QA at scale (i.e. over many documents instead of a single paragraph), you can also load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
# Imports assume Haystack v1.x, where readers live in haystack.nodes
from haystack.nodes import FARMReader, TransformersReader

reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2-covid")
# or (note: both model and tokenizer should point to the covid checkpoint)
reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2-covid", tokenizer="deepset/roberta-base-squad2-covid")
```
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/roberta-base-squad2-covid"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
**Branden Chan:** branden.chan@deepset.ai
**Timo Möller:** timo.moeller@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
**Tanay Soni:** tanay.soni@deepset.ai
**Bogdan Kostić:** bogdan.kostic@deepset.ai
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"]}
|
deepset/roberta-base-squad2-covid
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #safetensors #roberta #question-answering #en #dataset-squad_v2 #license-cc-by-4.0 #endpoints_compatible #has_space #region-us
|
# roberta-base-squad2 for QA on COVID-19
## Overview
Language model: deepset/roberta-base-squad2
Language: English
Downstream-task: Extractive QA
Training data: SQuAD-style CORD-19 annotations from 23rd April
Code: See an example QA pipeline on Haystack
Infrastructure: Tesla v100
## Hyperparameters
## Performance
5-fold cross-validation on the data set led to the following results:
Single EM-Scores: [0.222, 0.123, 0.234, 0.159, 0.158]
Single F1-Scores: [0.476, 0.493, 0.599, 0.461, 0.465]
Single top_3_recall Scores: [0.827, 0.776, 0.860, 0.771, 0.777]
XVAL EM: 0.17890995260663506
XVAL f1: 0.49925444207319924
XVAL top_3_recall: 0.8021327014218009
This model is the model obtained from the third fold of the cross-validation.
## Usage
### In Haystack
For QA at scale (i.e. over many documents instead of a single paragraph), you can also load the model in Haystack:
### In Transformers
## Authors
Branden Chan: URL@URL
Timo Möller: timo.moeller@URL
Malte Pietsch: malte.pietsch@URL
Tanay Soni: URL@URL
Bogdan Kostić: URL@URL
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
</div>
deepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- Distilled roberta-base-squad2 (aka "tinyroberta-squad2")
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="URL repo and <strong><a href="URL">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="URL community open to everyone!</a></strong></p>
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# roberta-base-squad2 for QA on COVID-19",
"## Overview\nLanguage model: deepset/roberta-base-squad2 \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD-style CORD-19 annotations from 23rd April \nCode: See an example QA pipeline on Haystack \nInfrastructure: Tesla v100",
"## Hyperparameters\n\n---\nlicense: cc-by-4.0\n---",
"## Performance\n5-fold cross-validation on the data set led to the following results: \n\nSingle EM-Scores: [0.222, 0.123, 0.234, 0.159, 0.158] \nSingle F1-Scores: [0.476, 0.493, 0.599, 0.461, 0.465] \nSingle top\\\\_3\\\\_recall Scores: [0.827, 0.776, 0.860, 0.771, 0.777] \nXVAL EM: 0.17890995260663506 \nXVAL f1: 0.49925444207319924 \nXVAL top\\\\_3\\\\_recall: 0.8021327014218009\n\nThis model is the model obtained from the third fold of the cross-validation.",
"## Usage",
"### In Haystack\nFor doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in haystack:",
"### In Transformers",
"## Authors\nBranden Chan: URL@URL \nTimo Möller: timo.moeller@URL \nMalte Pietsch: malte.pietsch@URL \nTanay Soni: URL@URL \nBogdan Kostić: URL@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #roberta #question-answering #en #dataset-squad_v2 #license-cc-by-4.0 #endpoints_compatible #has_space #region-us \n",
"# roberta-base-squad2 for QA on COVID-19",
"## Overview\nLanguage model: deepset/roberta-base-squad2 \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD-style CORD-19 annotations from 23rd April \nCode: See an example QA pipeline on Haystack \nInfrastructure: Tesla v100",
"## Hyperparameters\n\n---\nlicense: cc-by-4.0\n---",
"## Performance\n5-fold cross-validation on the data set led to the following results: \n\nSingle EM-Scores: [0.222, 0.123, 0.234, 0.159, 0.158] \nSingle F1-Scores: [0.476, 0.493, 0.599, 0.461, 0.465] \nSingle top\\\\_3\\\\_recall Scores: [0.827, 0.776, 0.860, 0.771, 0.777] \nXVAL EM: 0.17890995260663506 \nXVAL f1: 0.49925444207319924 \nXVAL top\\\\_3\\\\_recall: 0.8021327014218009\n\nThis model is the model obtained from the third fold of the cross-validation.",
"## Usage",
"### In Haystack\nFor doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in haystack:",
"### In Transformers",
"## Authors\nBranden Chan: URL@URL \nTimo Möller: timo.moeller@URL \nMalte Pietsch: malte.pietsch@URL \nTanay Soni: URL@URL \nBogdan Kostić: URL@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|
## Overview
**Language model:** deepset/roberta-base-squad2-distilled
**Language:** English
**Training data:** SQuAD 2.0 training set
**Eval data:** SQuAD 2.0 dev set
**Infrastructure**: 4x V100 GPU
**Published**: Dec 8th, 2021
## Details
- Haystack's distillation feature was used for training, with deepset/roberta-large-squad2 as the teacher model.
## Hyperparameters
```
batch_size = 80
n_epochs = 4
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 1.5
distillation_loss_weight = 0.75
```
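To show where `temperature` and `distillation_loss_weight` enter training: below is a minimal sketch of a standard soft-target distillation loss (our reconstruction for illustration; the exact loss used by Haystack's distillation feature may differ in details):
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=1.5, distillation_loss_weight=0.75):
    # Hard-label loss against the gold answer positions.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label loss: KL divergence to the temperature-softened teacher distribution.
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return (1 - distillation_loss_weight) * ce + distillation_loss_weight * kl

# Toy example: batch of 2, 384 start-position logits (as in extractive QA).
student = torch.randn(2, 384)
teacher = torch.randn(2, 384)
labels = torch.tensor([10, 42])
print(distillation_loss(student, teacher, labels))
```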
## Performance
```
"exact": 79.8366040596311
"f1": 83.916407079888
```
## Authors
**Timo Möller:** timo.moeller@deepset.ai
**Julian Risch:** julian.risch@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
**Michel Bartels:** michel.bartels@deepset.ai
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["squad_v2"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg", "model-index": [{"name": "deepset/roberta-base-squad2-distilled", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 80.8593, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzVjNzkxNmNiNDkzNzdiYjJjZGM3ZTViMGJhOGM2ZjFmYjg1MjYxMDM2YzM5NWMwNDIyYzNlN2QwNGYyNDMzZSIsInZlcnNpb24iOjF9.Rgww8tf8D7nF2dh2U_DMrFzmp87k8s7RFibrDXSvQyA66PGWXwjlsd1552lzjHnNV5hvHUM1-h3PTuY_5p64BA"}, {"type": "f1", "value": 84.0104, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTAyZDViNWYzNjA4OWQ5MzgyYmQ2ZDlhNWRhMTIzYTYxYzViMmI4NWE4ZGU5MzVhZTAwNTRlZmRlNWUwMjI0ZSIsInZlcnNpb24iOjF9.Er21BNgJ3jJXLuZtpubTYq9wCwO1i_VLQFwS5ET0e4eAYVVj0aOA40I5FvP5pZac3LjkCnVacxzsFWGCYVmnDA"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 86.225, "name": "Exact Match"}, {"type": "f1", "value": 92.483, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "adversarial_qa", "type": "adversarial_qa", "config": "adversarialQA", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 29.9, "name": "Exact Match"}, {"type": "f1", "value": 41.183, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_adversarial", "type": "squad_adversarial", "config": "AddOneSent", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 79.071, "name": "Exact Match"}, {"type": "f1", "value": 84.472, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts amazon", "type": "squadshifts", "config": "amazon", "split": "test"}, "metrics": [{"type": "exact_match", "value": 70.733, "name": "Exact Match"}, {"type": "f1", "value": 83.958, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts new_wiki", "type": "squadshifts", "config": "new_wiki", "split": "test"}, "metrics": [{"type": "exact_match", "value": 82.011, "name": "Exact Match"}, {"type": "f1", "value": 91.092, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts nyt", "type": "squadshifts", "config": "nyt", "split": "test"}, "metrics": [{"type": "exact_match", "value": 84.203, "name": "Exact Match"}, {"type": "f1", "value": 91.521, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts reddit", "type": "squadshifts", "config": "reddit", "split": "test"}, "metrics": [{"type": "exact_match", "value": 72.029, "name": "Exact Match"}, {"type": "f1", "value": 83.454, "name": "F1"}]}]}]}
|
deepset/roberta-base-squad2-distilled
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"question-answering",
"exbert",
"en",
"dataset:squad_v2",
"license:mit",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #roberta #question-answering #exbert #en #dataset-squad_v2 #license-mit #model-index #endpoints_compatible #has_space #region-us
|
## Overview
Language model: deepset/roberta-base-squad2-distilled
Language: English
Training data: SQuAD 2.0 training set
Eval data: SQuAD 2.0 dev set
Infrastructure: 4x V100 GPU
Published: Dec 8th, 2021
## Details
- Haystack's distillation feature was used for training, with deepset/roberta-large-squad2 as the teacher model.
## Hyperparameters
## Performance
## Authors
Timo Möller: timo.moeller@URL
Julian Risch: URL@URL
Malte Pietsch: malte.pietsch@URL
Michel Bartels: michel.bartels@URL
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
</div>
deepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- Distilled roberta-base-squad2 (aka "tinyroberta-squad2")
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="URL repo and <strong><a href="URL">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="URL community open to everyone!</a></strong></p>
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"## Overview\nLanguage model: deepset/roberta-base-squad2-distilled \nLanguage: English \nTraining data: SQuAD 2.0 training set\nEval data: SQuAD 2.0 dev set\nInfrastructure: 4x V100 GPU \nPublished: Dec 8th, 2021",
"## Details\n- haystack's distillation feature was used for training. deepset/roberta-large-squad2 was used as the teacher model.",
"## Hyperparameters",
"## Performance",
"## Authors\nTimo Möller: timo.moeller@URL \nJulian Risch: URL@URL \nMalte Pietsch: malte.pietsch@URL \nMichel Bartels: michel.bartels@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #question-answering #exbert #en #dataset-squad_v2 #license-mit #model-index #endpoints_compatible #has_space #region-us \n",
"## Overview\nLanguage model: deepset/roberta-base-squad2-distilled \nLanguage: English \nTraining data: SQuAD 2.0 training set\nEval data: SQuAD 2.0 dev set\nInfrastructure: 4x V100 GPU \nPublished: Dec 8th, 2021",
"## Details\n- haystack's distillation feature was used for training. deepset/roberta-large-squad2 was used as the teacher model.",
"## Hyperparameters",
"## Performance",
"## Authors\nTimo Möller: timo.moeller@URL \nJulian Risch: URL@URL \nMalte Pietsch: malte.pietsch@URL \nMichel Bartels: michel.bartels@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|
# roberta-base for QA
This is the [roberta-base](https://huggingface.co/roberta-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 96
n_epochs = 2
base_LM_model = "roberta-base"
max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Using a distilled model instead
Please note that we have also released a distilled version of this model called [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2). The distilled model has a comparable prediction quality and runs at twice the speed of the base model.
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
```
For a complete example of ``roberta-base-squad2`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system)
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/roberta-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
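For reference, the same metrics can also be computed with Hugging Face's `evaluate` library instead of the CodaLab script. A minimal sketch (the prediction and reference entries below are illustrative placeholders, not real SQuAD ids):
```python
# Minimal sketch: SQuAD 2.0 metrics via the `evaluate` library.
import evaluate

squad_v2_metric = evaluate.load("squad_v2")
predictions = [{"id": "001", "prediction_text": "Paris",
                "no_answer_probability": 0.0}]
references = [{"id": "001",
               "answers": {"text": ["Paris"], "answer_start": [0]}}]
print(squad_v2_metric.compute(predictions=predictions, references=references))
```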
## Authors
**Branden Chan:** branden.chan@deepset.ai
**Timo Möller:** timo.moeller@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
**Tanay Soni:** tanay.soni@deepset.ai
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/roberta-base-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 79.9309, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDhhNjg5YzNiZGQ1YTIyYTAwZGUwOWEzZTRiYzdjM2QzYjA3ZTUxNDM1NjE1MTUyMjE1MGY1YzEzMjRjYzVjYiIsInZlcnNpb24iOjF9.EH5JJo8EEFwU7osPz3s7qanw_tigeCFhCXjSfyN0Y1nWVnSfulSxIk_DbAEI5iE80V4EKLyp5-mYFodWvL2KDA"}, {"type": "f1", "value": 82.9501, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjk5ZDYwOGQyNjNkMWI0OTE4YzRmOTlkY2JjNjQ0YTZkNTMzMzNkYTA0MDFmNmI3NjA3NjNlMjhiMDQ2ZjJjNSIsInZlcnNpb24iOjF9.DDm0LNTkdLbGsue58bg1aH_s67KfbcmkvL-6ZiI2s8IoxhHJMSf29H_uV2YLyevwx900t-MwTVOW3qfFnMMEAQ"}, {"type": "total", "value": 11869, "name": "total", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGFkMmI2ODM0NmY5NGNkNmUxYWViOWYxZDNkY2EzYWFmOWI4N2VhYzY5MGEzMTVhOTU4Zjc4YWViOGNjOWJjMCIsInZlcnNpb24iOjF9.fexrU1icJK5_MiifBtZWkeUvpmFISqBLDXSQJ8E6UnrRof-7cU0s4tX_dIsauHWtUpIHMPZCf5dlMWQKXZuAAA"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 85.289, "name": "Exact Match"}, {"type": "f1", "value": 91.841, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "adversarial_qa", "type": "adversarial_qa", "config": "adversarialQA", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 29.5, "name": "Exact Match"}, {"type": "f1", "value": 40.367, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_adversarial", "type": "squad_adversarial", "config": "AddOneSent", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 78.567, "name": "Exact Match"}, {"type": "f1", "value": 84.469, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts amazon", "type": "squadshifts", "config": "amazon", "split": "test"}, "metrics": [{"type": "exact_match", "value": 69.924, "name": "Exact Match"}, {"type": "f1", "value": 83.284, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts new_wiki", "type": "squadshifts", "config": "new_wiki", "split": "test"}, "metrics": [{"type": "exact_match", "value": 81.204, "name": "Exact Match"}, {"type": "f1", "value": 90.595, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts nyt", "type": "squadshifts", "config": "nyt", "split": "test"}, "metrics": [{"type": "exact_match", "value": 82.931, "name": "Exact Match"}, {"type": "f1", "value": 90.756, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts reddit", "type": "squadshifts", "config": "reddit", "split": "test"}, "metrics": [{"type": "exact_match", "value": 71.55, "name": "Exact Match"}, {"type": "f1", "value": 82.939, "name": "F1"}]}]}]}
|
deepset/roberta-base-squad2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #rust #safetensors #roberta #question-answering #en #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us
|
# roberta-base for QA
This is the roberta-base model, fine-tuned using the SQuAD2.0 dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
## Overview
Language model: roberta-base
Language: English
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0
Code: See an example QA pipeline on Haystack
Infrastructure: 4x Tesla v100
## Hyperparameters
## Using a distilled model instead
Please note that we have also released a distilled version of this model called deepset/tinyroberta-squad2. The distilled model has a comparable prediction quality and runs at twice the speed of the base model.
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:
For a complete example of ''roberta-base-squad2'' being used for Question Answering, check out the Tutorials in Haystack Documentation
### In Transformers
## Performance
Evaluated on the SQuAD 2.0 dev set with the official eval script.
## Authors
Branden Chan: URL@URL
Timo Möller: timo.moeller@URL
Malte Pietsch: malte.pietsch@URL
Tanay Soni: URL@URL
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
</div>
deepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- Distilled roberta-base-squad2 (aka "tinyroberta-squad2")
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="URL repo and <strong><a href="URL">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="URL community open to everyone!</a></strong></p>
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# roberta-base for QA \n\nThis is the roberta-base model, fine-tuned using the SQuAD2.0 dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.",
"## Overview\nLanguage model: roberta-base \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nCode: See an example QA pipeline on Haystack \nInfrastructure: 4x Tesla v100",
"## Hyperparameters",
"## Using a distilled model instead\nPlease note that we have also released a distilled version of this model called deepset/tinyroberta-squad2. The distilled model has a comparable prediction quality and runs at twice the speed of the base model.",
"## Usage",
"### In Haystack\nHaystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:\n\nFor a complete example of ''roberta-base-squad2'' being used for Question Answering, check out the Tutorials in Haystack Documentation",
"### In Transformers",
"## Performance\nEvaluated on the SQuAD 2.0 dev set with the official eval script.",
"## Authors\nBranden Chan: URL@URL \nTimo Möller: timo.moeller@URL \nMalte Pietsch: malte.pietsch@URL \nTanay Soni: URL@URL",
"## About us\n\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #roberta #question-answering #en #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# roberta-base for QA \n\nThis is the roberta-base model, fine-tuned using the SQuAD2.0 dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.",
"## Overview\nLanguage model: roberta-base \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nCode: See an example QA pipeline on Haystack \nInfrastructure: 4x Tesla v100",
"## Hyperparameters",
"## Using a distilled model instead\nPlease note that we have also released a distilled version of this model called deepset/tinyroberta-squad2. The distilled model has a comparable prediction quality and runs at twice the speed of the base model.",
"## Usage",
"### In Haystack\nHaystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:\n\nFor a complete example of ''roberta-base-squad2'' being used for Question Answering, check out the Tutorials in Haystack Documentation",
"### In Transformers",
"## Performance\nEvaluated on the SQuAD 2.0 dev set with the official eval script.",
"## Authors\nBranden Chan: URL@URL \nTimo Möller: timo.moeller@URL \nMalte Pietsch: malte.pietsch@URL \nTanay Soni: URL@URL",
"## About us\n\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|
# roberta-large for QA
This is the [roberta-large](https://huggingface.co/roberta-large) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
## Overview
**Language model:** roberta-large
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
base_LM_model = "roberta-large"
```
## Using a distilled model instead
Please note that we have also released a distilled version of this model called [deepset/roberta-base-squad2-distilled](https://huggingface.co/deepset/roberta-base-squad2-distilled). The distilled model has a comparable prediction quality and runs at twice the speed of the large model.
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/roberta-large-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/roberta-large-squad2",tokenizer="deepset/roberta-large-squad2")
```
For a complete example of ``roberta-large-squad2`` being used for Question Answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system)
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/roberta-large-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
**Branden Chan:** branden.chan@deepset.ai
**Timo Möller:** timo.moeller@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
**Tanay Soni:** tanay.soni@deepset.ai
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "base_model": "roberta-large", "model-index": [{"name": "deepset/roberta-large-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 85.168, "name": "Exact Match"}, {"type": "f1", "value": 88.349, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 87.162, "name": "Exact Match"}, {"type": "f1", "value": 93.603, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "adversarial_qa", "type": "adversarial_qa", "config": "adversarialQA", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 35.9, "name": "Exact Match"}, {"type": "f1", "value": 48.923, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_adversarial", "type": "squad_adversarial", "config": "AddOneSent", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 81.142, "name": "Exact Match"}, {"type": "f1", "value": 87.099, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts amazon", "type": "squadshifts", "config": "amazon", "split": "test"}, "metrics": [{"type": "exact_match", "value": 72.453, "name": "Exact Match"}, {"type": "f1", "value": 86.325, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts new_wiki", "type": "squadshifts", "config": "new_wiki", "split": "test"}, "metrics": [{"type": "exact_match", "value": 82.338, "name": "Exact Match"}, {"type": "f1", "value": 91.974, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts nyt", "type": "squadshifts", "config": "nyt", "split": "test"}, "metrics": [{"type": "exact_match", "value": 84.352, "name": "Exact Match"}, {"type": "f1", "value": 92.645, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts reddit", "type": "squadshifts", "config": "reddit", "split": "test"}, "metrics": [{"type": "exact_match", "value": 74.722, "name": "Exact Match"}, {"type": "f1", "value": 86.86, "name": "F1"}]}]}]}
|
deepset/roberta-large-squad2
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"base_model:roberta-large",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #safetensors #roberta #question-answering #en #dataset-squad_v2 #base_model-roberta-large #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us
|
# roberta-large for QA
This is the roberta-large model, fine-tuned using the SQuAD2.0 dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
## Overview
Language model: roberta-large
Language: English
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0
Code: See an example QA pipeline on Haystack
Infrastructure: 4x Tesla v100
## Hyperparameters
## Using a distilled model instead
Please note that we have also released a distilled version of this model called deepset/roberta-base-squad2-distilled. The distilled model has a comparable prediction quality and runs at twice the speed of the large model.
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:
For a complete example of ''roberta-large-squad2'' being used for Question Answering, check out the Tutorials in Haystack Documentation
### In Transformers
## Authors
Branden Chan: URL@URL
Timo Möller: timo.moeller@URL
Malte Pietsch: malte.pietsch@URL
Tanay Soni: URL@URL
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
</div>
deepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- Distilled roberta-base-squad2 (aka "tinyroberta-squad2")
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="URL repo and <strong><a href="URL">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="URL community open to everyone!</a></strong></p>
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# roberta-large for QA \n\nThis is the roberta-large model, fine-tuned using the SQuAD2.0 dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.",
"## Overview\nLanguage model: roberta-large \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nCode: See an example QA pipeline on Haystack \nInfrastructure: 4x Tesla v100",
"## Hyperparameters",
"## Using a distilled model instead\nPlease note that we have also released a distilled version of this model called deepset/roberta-base-squad2-distilled. The distilled model has a comparable prediction quality and runs at twice the speed of the large model.",
"## Usage",
"### In Haystack\nHaystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:\n\nFor a complete example of ''roberta-large-squad2'' being used for Question Answering, check out the Tutorials in Haystack Documentation",
"### In Transformers",
"## Authors\nBranden Chan: URL@URL \nTimo Möller: timo.moeller@URL \nMalte Pietsch: malte.pietsch@URL \nTanay Soni: URL@URL",
"## About us\n\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #roberta #question-answering #en #dataset-squad_v2 #base_model-roberta-large #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# roberta-large for QA \n\nThis is the roberta-large model, fine-tuned using the SQuAD2.0 dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.",
"## Overview\nLanguage model: roberta-large \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nCode: See an example QA pipeline on Haystack \nInfrastructure: 4x Tesla v100",
"## Hyperparameters",
"## Using a distilled model instead\nPlease note that we have also released a distilled version of this model called deepset/roberta-base-squad2-distilled. The distilled model has a comparable prediction quality and runs at twice the speed of the large model.",
"## Usage",
"### In Haystack\nHaystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:\n\nFor a complete example of ''roberta-large-squad2'' being used for Question Answering, check out the Tutorials in Haystack Documentation",
"### In Transformers",
"## Authors\nBranden Chan: URL@URL \nTimo Möller: timo.moeller@URL \nMalte Pietsch: malte.pietsch@URL \nTanay Soni: URL@URL",
"## About us\n\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
null |
transformers
|
This is an upload of the bert-base-nli-stsb-mean-tokens pretrained model from the Sentence Transformers Repo (https://github.com/UKPLab/sentence-transformers)
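A minimal usage sketch (an assumption based on the mean-over-tokens pooling implied by the model name, not part of the original upload):
```python
# Minimal sketch: sentence embeddings via mean pooling over token embeddings.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "deepset/sentence_bert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentences = ["A man is eating food.", "A man is eating a piece of bread."]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq, dim)

# Average token embeddings, masking out padding positions.
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embeddings.shape)  # torch.Size([2, 768])
```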
|
{"license": "apache-2.0"}
|
deepset/sentence_bert
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
This is an upload of the bert-base-nli-stsb-mean-tokens pretrained model from the Sentence Transformers Repo (URL
|
[] |
[
"TAGS\n#transformers #pytorch #jax #bert #license-apache-2.0 #endpoints_compatible #has_space #region-us \n"
] |
null |
transformers
|
This model contains the converted PyTorch checkpoint of the original TensorFlow model available in the [TaPas repository](https://github.com/google-research/tapas/blob/master/DENSE_TABLE_RETRIEVER.md#reader-models).
It is described in Herzig et al.'s (2021) [paper](https://aclanthology.org/2021.naacl-main.43/) _Open Domain Question Answering over Tables via Dense Retrieval_.
This model comes in two versions that differ only in the table scoring head.
The default version has an adapted table scoring head that can produce probabilities from the logits.
The other (non-default) version corresponds to the original checkpoint from the TaPas repository and can be accessed by setting `revision="original"`.
# Usage
## In Haystack
If you want to use this model for question-answering over tables, you can load it in [Haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import TableReader
table_reader = TableReader(model_name_or_path="deepset/tapas-large-nq-hn-reader")
```
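To load the non-default checkpoint directly with Transformers, pass the `revision` argument; a minimal sketch (assuming the checkpoint is compatible with the generic auto classes):
```python
# Minimal sketch: selecting the original (non-default) table scoring head.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "deepset/tapas-large-nq-hn-reader", revision="original"
)
```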
|
{"language": "en", "license": "apache-2.0", "tags": ["tapas"]}
|
deepset/tapas-large-nq-hn-reader
| null |
[
"transformers",
"pytorch",
"tapas",
"en",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tapas #en #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
This model contains the converted PyTorch checkpoint of the original TensorFlow model available in the TaPas repository.
It is described in Herzig et al.'s (2021) paper _Open Domain Question Answering over Tables via Dense Retrieval_.
This model comes in two versions that differ only in the table scoring head.
The default version has an adapted table scoring head that can produce probabilities from the logits.
The other (non-default) version corresponds to the original checkpoint from the TaPas repository and can be accessed by setting 'revision="original"'.
# Usage
## In Haystack
If you want to use this model for question-answering over tables, you can load it in Haystack:
|
[
"# Usage",
"## In Haystack\nIf you want to use this model for question-answering over tables, you can load it in Haystack:"
] |
[
"TAGS\n#transformers #pytorch #tapas #en #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# Usage",
"## In Haystack\nIf you want to use this model for question-answering over tables, you can load it in Haystack:"
] |
null |
transformers
|
This model contains the converted PyTorch checkpoint of the original TensorFlow model available in the [TaPas repository](https://github.com/google-research/tapas/blob/master/DENSE_TABLE_RETRIEVER.md#reader-models).
It is described in Herzig et al.'s (2021) [paper](https://aclanthology.org/2021.naacl-main.43/) _Open Domain Question Answering over Tables via Dense Retrieval_.
This model comes in two versions that differ only in the table scoring head.
The default version has an adapted table scoring head that can produce probabilities from the logits.
The other (non-default) version corresponds to the original checkpoint from the TaPas repository and can be accessed by setting `revision="original"`.
# Usage
## In Haystack
If you want to use this model for question-answering over tables, you can load it in [Haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import TableReader
table_reader = TableReader(model_name_or_path="deepset/tapas-large-nq-reader")
```
|
{"language": "en", "license": "apache-2.0", "tags": ["tapas"]}
|
deepset/tapas-large-nq-reader
| null |
[
"transformers",
"pytorch",
"tapas",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tapas #en #license-apache-2.0 #endpoints_compatible #region-us
|
This model contains the converted PyTorch checkpoint of the original TensorFlow model available in the TaPas repository.
It is described in Herzig et al.'s (2021) paper _Open Domain Question Answering over Tables via Dense Retrieval_.
This model comes in two versions that differ only in the table scoring head.
The default version has an adapted table scoring head that can produce probabilities from the logits.
The other (non-default) version corresponds to the original checkpoint from the TaPas repository and can be accessed by setting 'revision="original"'.
# Usage
## In Haystack
If you want to use this model for question-answering over tables, you can load it in Haystack:
|
[
"# Usage",
"## In Haystack\nIf you want to use this model for question-answering over tables, you can load it in Haystack:"
] |
[
"TAGS\n#transformers #pytorch #tapas #en #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Usage",
"## In Haystack\nIf you want to use this model for question-answering over tables, you can load it in Haystack:"
] |
question-answering
|
transformers
|
## Overview
**Language model:** deepset/tinybert-6L-768D-squad2
**Language:** English
**Training data:** SQuAD 2.0 training set x 20 augmented + SQuAD 2.0 training set without augmentation
**Eval data:** SQuAD 2.0 dev set
**Infrastructure**: 1x V100 GPU
**Published**: Dec 8th, 2021
## Details
- haystack's intermediate layer and prediction layer distillation features were used for training (based on [TinyBERT](https://arxiv.org/pdf/1909.10351.pdf)). deepset/bert-base-uncased-squad2 was used as the teacher model and huawei-noah/TinyBERT_General_6L_768D was used as the student model.
## Hyperparameters
### Intermediate layer distillation
```
batch_size = 26
n_epochs = 5
max_seq_len = 384
learning_rate = 5e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 1
```
### Prediction layer distillation
```
batch_size = 26
n_epochs = 5
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 1
distillation_loss_weight = 1.0
```
## Performance
```
"exact": 71.87736882001179
"f1": 76.36111895973675
```
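For reference, a minimal usage sketch in Transformers (following the pattern of the sibling deepset QA cards; the model id is taken from this card):
```python
# Minimal sketch (mirroring the sibling deepset QA cards): extractive QA
# with the distilled model.
from transformers import pipeline

model_name = "deepset/tinybert-6l-768d-squad2"
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name)
res = nlp(
    question="Why is model conversion important?",
    context="The option to convert models between FARM and transformers gives "
            "freedom to the user and let people easily switch between frameworks.",
)
print(res)
```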
## Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Michel Bartels: `michel.bartels [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["squad_v2"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg", "model-index": [{"name": "deepset/tinybert-6l-768d-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 73.8248, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGFmZmFiN2E5ODZkOTkyMjQ1NTUzMmQwMjc0M2RlYzVlNmM4YTFlNzA4YzIwY2JkY2EyNDg2ZTY3OTdjZTVlZiIsInZlcnNpb24iOjF9.ZZ6c2OI3lzeNhuSWTh28j00zk-sPrqkTvdVBZv2wJc1D4YnR-xOj72haybT6MV_xeYqTg3-x9L8PsWSS20NaDw"}, {"type": "f1", "value": 77.1684, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzAxMDk1YzI5ZjA2N2ZmMzAxNjgxYzJiNzAzYmI1ZWU5ZDRmYWY3OWJmMjlmNDcyMGE0YWY5NjNhZTk4YWY5ZSIsInZlcnNpb24iOjF9.rF3raNGUSYv5D2xzWLZztD99vwDKvWb22LG32RomrDGP6XKTbCVqZzAw5UFw93jKb0VoLApbQQ-AOGxLj3U_Cg"}]}]}]}
|
deepset/tinybert-6l-768d-squad2
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"exbert",
"en",
"dataset:squad_v2",
"arxiv:1909.10351",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1909.10351"
] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #bert #question-answering #exbert #en #dataset-squad_v2 #arxiv-1909.10351 #license-mit #model-index #endpoints_compatible #region-us
|
## Overview
Language model: deepset/tinybert-6L-768D-squad2
Language: English
Training data: SQuAD 2.0 training set x 20 augmented + SQuAD 2.0 training set without augmentation
Eval data: SQuAD 2.0 dev set
Infrastructure: 1x V100 GPU
Published: Dec 8th, 2021
## Details
- haystack's intermediate layer and prediction layer distillation features were used for training (based on TinyBERT). deepset/bert-base-uncased-squad2 was used as the teacher model and huawei-noah/TinyBERT_General_6L_768D was used as the student model.
## Hyperparameters
### Intermediate layer distillation
### Prediction layer distillation
## Performance
## Authors
- Timo Möller: 'timo.moeller [at] URL'
- Julian Risch: 'URL [at] URL'
- Malte Pietsch: 'malte.pietsch [at] URL'
- Michel Bartels: 'michel.bartels [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"## Overview\nLanguage model: deepset/tinybert-6L-768D-squad2 \nLanguage: English \nTraining data: SQuAD 2.0 training set x 20 augmented + SQuAD 2.0 training set without augmentation \nEval data: SQuAD 2.0 dev set \nInfrastructure: 1x V100 GPU \nPublished: Dec 8th, 2021",
"## Details\n- haystack's intermediate layer and prediction layer distillation features were used for training (based on TinyBERT). deepset/bert-base-uncased-squad2 was used as the teacher model and huawei-noah/TinyBERT_General_6L_768D was used as the student model.",
"## Hyperparameters",
"### Intermediate layer distillation",
"### Prediction layer distillation",
"## Performance",
"## Authors\n- Timo Möller: 'timo.moeller [at] URL'\n- Julian Risch: 'URL [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'\n- Michel Bartels: 'michel.bartels [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #question-answering #exbert #en #dataset-squad_v2 #arxiv-1909.10351 #license-mit #model-index #endpoints_compatible #region-us \n",
"## Overview\nLanguage model: deepset/tinybert-6L-768D-squad2 \nLanguage: English \nTraining data: SQuAD 2.0 training set x 20 augmented + SQuAD 2.0 training set without augmentation \nEval data: SQuAD 2.0 dev set \nInfrastructure: 1x V100 GPU \nPublished: Dec 8th, 2021",
"## Details\n- haystack's intermediate layer and prediction layer distillation features were used for training (based on TinyBERT). deepset/bert-base-uncased-squad2 was used as the teacher model and huawei-noah/TinyBERT_General_6L_768D was used as the student model.",
"## Hyperparameters",
"### Intermediate layer distillation",
"### Prediction layer distillation",
"## Performance",
"## Authors\n- Timo Möller: 'timo.moeller [at] URL'\n- Julian Risch: 'URL [at] URL'\n- Malte Pietsch: 'malte.pietsch [at] URL'\n- Michel Bartels: 'michel.bartels [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|
# tinyroberta-6l-768d
## Overview
**Language model:** tinyroberta-6l-768d
**Language:** English
**Training data:** The PILE
**Code:**
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 96
n_epochs = 4
base_LM_model = "deepset/tinyroberta-squad2-step1"
max_seq_len = 384
learning_rate = 1e-4
lr_schedule = LinearWarmup
warmup_proportion = 0.2
teacher = "deepset/roberta-base"
```
## Distillation
This model was distilled using the TinyBERT approach described in [this paper](https://arxiv.org/pdf/1909.10351.pdf) and implemented in [haystack](https://github.com/deepset-ai/haystack).
We have performed intermediate layer distillation with roberta-base as the teacher which resulted in [deepset/tinyroberta-6l-768d](https://huggingface.co/deepset/tinyroberta-6l-768d).
This model has not been distilled for any specific task. If you are interested in using distillation to improve its performance on a downstream task, you can take advantage of haystack's new [distillation functionality](https://haystack.deepset.ai/guides/model-distillation). You can also check out [deepset/tinyroberta-squad2](https://huggingface.co/deepset/tinyroberta-squad2) for a model that is already distilled on an extractive QA downstream task.
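As a rough illustration of what the intermediate layer stage optimizes, here is a sketch of the general TinyBERT-style objective (an approximation, not haystack's exact implementation):
```python
# Rough sketch of TinyBERT-style intermediate layer distillation: each student
# layer is trained to match a mapped teacher layer's hidden states via MSE.
import torch.nn.functional as F

def intermediate_layer_loss(student_hidden_states, teacher_hidden_states, layer_map):
    # layer_map pairs student layers with teacher layers, e.g. student layer i
    # mimicking teacher layer 2*i when shrinking a 12-layer teacher to 6 layers.
    return sum(
        F.mse_loss(student_hidden_states[s], teacher_hidden_states[t])
        for s, t in layer_map
    )
```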
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/tinyroberta-squad2"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer
model_name = "deepset/tinyroberta-squad2"
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
### In haystack
For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/roberta-base-squad2",tokenizer="deepset/roberta-base-squad2")
```
## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`
Michel Bartels: `michel.bartels [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"]}
|
deepset/tinyroberta-6l-768d
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"arxiv:1909.10351",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1909.10351"
] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #roberta #question-answering #en #dataset-squad_v2 #arxiv-1909.10351 #license-cc-by-4.0 #endpoints_compatible #region-us
|
# tinyroberta-6l-768d
## Overview
Language model: tinyroberta-6l-768d
Language: English
Training data: The PILE
Code:
Infrastructure: 4x Tesla v100
## Hyperparameters
## Distillation
This model was distilled using the TinyBERT approach described in this paper and implemented in haystack.
We have performed intermediate layer distillation with roberta-base as the teacher which resulted in deepset/tinyroberta-6l-768d.
This model has not been distilled for any specific task. If you are interested in using distillation to improve its performance on a downstream task, you can take advantage of haystack's new distillation functionality. You can also check out deepset/tinyroberta-squad2 for a model that is already distilled on an extractive QA downstream task.
## Usage
### In Transformers
### In FARM
### In haystack
For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in haystack:
## Authors
Branden Chan: 'URL [at] URL'
Timo Möller: 'timo.moeller [at] URL'
Malte Pietsch: 'malte.pietsch [at] URL'
Tanay Soni: 'URL [at] URL'
Michel Bartels: 'michel.bartels [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Slack | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# tinyroberta-squad2",
"## Overview\nLanguage model: tinyroberta-squad2 \nLanguage: English \nTraining data: The PILE \nCode: \nInfrastructure: 4x Tesla v100",
"## Hyperparameters",
"## Distillation\nThis model was distilled using the TinyBERT approach described in this paper and implemented in haystack.\nWe have performed intermediate layer distillation with roberta-base as the teacher which resulted in deepset/tinyroberta-6l-768d.\nThis model has not been distilled for any specific task. If you are interested in using distillation to improve its performance on a downstream task, you can take advantage of haystack's new distillation functionality. You can also check out deepset/tinyroberta-squad2 for a model that is already distilled on an extractive QA downstream task.",
"## Usage",
"### In Transformers",
"### In FARM",
"### In haystack\nFor doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in haystack:",
"## Authors\nBranden Chan: 'URL [at] URL'\nTimo Möller: 'timo.moeller [at] URL'\nMalte Pietsch: 'malte.pietsch [at] URL'\nTanay Soni: 'URL [at] URL'\nMichel Bartels: 'michel.bartels [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Slack | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #question-answering #en #dataset-squad_v2 #arxiv-1909.10351 #license-cc-by-4.0 #endpoints_compatible #region-us \n",
"# tinyroberta-squad2",
"## Overview\nLanguage model: tinyroberta-squad2 \nLanguage: English \nTraining data: The PILE \nCode: \nInfrastructure: 4x Tesla v100",
"## Hyperparameters",
"## Distillation\nThis model was distilled using the TinyBERT approach described in this paper and implemented in haystack.\nWe have performed intermediate layer distillation with roberta-base as the teacher which resulted in deepset/tinyroberta-6l-768d.\nThis model has not been distilled for any specific task. If you are interested in using distillation to improve its performance on a downstream task, you can take advantage of haystack's new distillation functionality. You can also check out deepset/tinyroberta-squad2 for a model that is already distilled on an extractive QA downstream task.",
"## Usage",
"### In Transformers",
"### In FARM",
"### In haystack\nFor doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in haystack:",
"## Authors\nBranden Chan: 'URL [at] URL'\nTimo Möller: 'timo.moeller [at] URL'\nMalte Pietsch: 'malte.pietsch [at] URL'\nTanay Soni: 'URL [at] URL'\nMichel Bartels: 'michel.bartels [at] URL'",
"## About us\n!deepset logo\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Slack | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|
# tinyroberta-squad2
This is the *distilled* version of the [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) model. This model has a comparable prediction quality and runs at twice the speed of the base model.
## Overview
**Language model:** tinyroberta-squad2
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 96
n_epochs = 4
base_LM_model = "deepset/tinyroberta-squad2-step1"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
distillation_loss_weight = 0.75
temperature = 1.5
teacher = "deepset/robert-large-squad2"
```
## Distillation
This model was distilled using the TinyBERT approach described in [this paper](https://arxiv.org/pdf/1909.10351.pdf) and implemented in [haystack](https://github.com/deepset-ai/haystack).
Firstly, we have performed intermediate layer distillation with roberta-base as the teacher which resulted in [deepset/tinyroberta-6l-768d](https://huggingface.co/deepset/tinyroberta-6l-768d).
Secondly, we have performed task-specific distillation with [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) as the teacher for further intermediate layer distillation on an augmented version of SQuADv2 and then with [deepset/roberta-large-squad2](https://huggingface.co/deepset/roberta-large-squad2) as the teacher for prediction layer distillation.
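For intuition, the prediction layer stage blends a temperature-scaled KL term against the teacher with the usual cross-entropy, weighted by `distillation_loss_weight`; a rough sketch (an approximation, not haystack's exact implementation):
```python
# Rough sketch of prediction-layer distillation with the hyperparameters above.
import torch.nn.functional as F

def prediction_layer_loss(student_logits, teacher_logits, labels,
                          temperature=1.5, distillation_loss_weight=0.75):
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return distillation_loss_weight * kd + (1 - distillation_loss_weight) * ce
```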
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="deepset/tinyroberta-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/tinyroberta-squad2")
```
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/tinyroberta-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 78.69114798281817,
"f1": 81.9198998536977,
"total": 11873,
"HasAns_exact": 76.19770580296895,
"HasAns_f1": 82.66446878592329,
"HasAns_total": 5928,
"NoAns_exact": 81.17746005046257,
"NoAns_f1": 81.17746005046257,
"NoAns_total": 5945
```
## Authors
**Branden Chan:** branden.chan@deepset.ai
**Timo Möller:** timo.moeller@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
**Tanay Soni:** tanay.soni@deepset.ai
**Michel Bartels:** michel.bartels@deepset.ai
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "en", "license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/tinyroberta-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 78.8627, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDNlZDU4ODAxMzY5NGFiMTMyZmQ1M2ZhZjMyODA1NmFlOGMxNzYxNTA4OGE5YTBkZWViZjBkNGQ2ZmMxZjVlMCIsInZlcnNpb24iOjF9.Wgu599r6TvgMLTrHlLMVAbUtKD_3b70iJ5QSeDQ-bRfUsVk6Sz9OsJCp47riHJVlmSYzcDj_z_3jTcUjCFFXBg"}, {"type": "f1", "value": 82.0355, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTFkMzEzMWNiZDRhMGZlODhkYzcwZTZiMDFjZDg2YjllZmUzYWM5NTgwNGQ2NGYyMDk2ZGQwN2JmMTE5NTc3YiIsInZlcnNpb24iOjF9.ChgaYpuRHd5WeDFjtiAHUyczxtoOD_M5WR8834jtbf7wXhdGOnZKdZ1KclmhoI5NuAGc1NptX-G0zQ5FTHEcBA"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad", "type": "squad", "config": "plain_text", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 83.86, "name": "Exact Match"}, {"type": "f1", "value": 90.752, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "adversarial_qa", "type": "adversarial_qa", "config": "adversarialQA", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 25.967, "name": "Exact Match"}, {"type": "f1", "value": 37.006, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_adversarial", "type": "squad_adversarial", "config": "AddOneSent", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 76.329, "name": "Exact Match"}, {"type": "f1", "value": 83.292, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts amazon", "type": "squadshifts", "config": "amazon", "split": "test"}, "metrics": [{"type": "exact_match", "value": 63.915, "name": "Exact Match"}, {"type": "f1", "value": 78.395, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts new_wiki", "type": "squadshifts", "config": "new_wiki", "split": "test"}, "metrics": [{"type": "exact_match", "value": 80.297, "name": "Exact Match"}, {"type": "f1", "value": 89.808, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts nyt", "type": "squadshifts", "config": "nyt", "split": "test"}, "metrics": [{"type": "exact_match", "value": 80.149, "name": "Exact Match"}, {"type": "f1", "value": 88.321, "name": "F1"}]}, {"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squadshifts reddit", "type": "squadshifts", "config": "reddit", "split": "test"}, "metrics": [{"type": "exact_match", "value": 66.959, "name": "Exact Match"}, {"type": "f1", "value": 79.3, "name": "F1"}]}]}]}
|
deepset/tinyroberta-squad2
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"arxiv:1909.10351",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1909.10351"
] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #roberta #question-answering #en #dataset-squad_v2 #arxiv-1909.10351 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us
|
# tinyroberta-squad2
This is the *distilled* version of the deepset/roberta-base-squad2 model. This model has a comparable prediction quality and runs at twice the speed of the base model.
## Overview
Language model: tinyroberta-squad2
Language: English
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0
Code: See an example QA pipeline on Haystack
Infrastructure: 4x Tesla v100
## Hyperparameters
## Distillation
This model was distilled using the TinyBERT approach described in this paper and implemented in haystack.
Firstly, we have performed intermediate layer distillation with roberta-base as the teacher which resulted in deepset/tinyroberta-6l-768d.
Secondly, we have performed task-specific distillation with deepset/roberta-base-squad2 as the teacher for further intermediate layer distillation on an augmented version of SQuADv2 and then with deepset/roberta-large-squad2 as the teacher for prediction layer distillation.
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:
### In Transformers
## Performance
Evaluated on the SQuAD 2.0 dev set with the official eval script.
## Authors
Branden Chan: URL@URL
Timo Möller: timo.moeller@URL
Malte Pietsch: malte.pietsch@URL
Tanay Soni: URL@URL
Michel Bartels: michel.bartels@URL
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
</div>
deepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- roberta-base-squad2
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="URL repo and <strong><a href="URL">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="URL community open to everyone!</a></strong></p>
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# tinyroberta-squad2\n\nThis is the *distilled* version of the deepset/roberta-base-squad2 model. This model has a comparable prediction quality and runs at twice the speed of the base model.",
"## Overview\nLanguage model: tinyroberta-squad2 \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nCode: See an example QA pipeline on Haystack \nInfrastructure: 4x Tesla v100",
"## Hyperparameters",
"## Distillation\nThis model was distilled using the TinyBERT approach described in this paper and implemented in haystack.\nFirstly, we have performed intermediate layer distillation with roberta-base as the teacher which resulted in deepset/tinyroberta-6l-768d.\nSecondly, we have performed task-specific distillation with deepset/roberta-base-squad2 as the teacher for further intermediate layer distillation on an augmented version of SQuADv2 and then with deepset/roberta-large-squad2 as the teacher for prediction layer distillation.",
"## Usage",
"### In Haystack\nHaystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:",
"### In Transformers",
"## Performance\nEvaluated on the SQuAD 2.0 dev set with the official eval script.",
"## Authors\nBranden Chan: URL@URL \nTimo Möller: timo.moeller@URL \nMalte Pietsch: malte.pietsch@URL \nTanay Soni: URL@URL \nMichel Bartels: michel.bartels@URL",
"## About us\n\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- roberta-base-squad2\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #question-answering #en #dataset-squad_v2 #arxiv-1909.10351 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# tinyroberta-squad2\n\nThis is the *distilled* version of the deepset/roberta-base-squad2 model. This model has a comparable prediction quality and runs at twice the speed of the base model.",
"## Overview\nLanguage model: tinyroberta-squad2 \nLanguage: English \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nCode: See an example QA pipeline on Haystack \nInfrastructure: 4x Tesla v100",
"## Hyperparameters",
"## Distillation\nThis model was distilled using the TinyBERT approach described in this paper and implemented in haystack.\nFirstly, we have performed intermediate layer distillation with roberta-base as the teacher which resulted in deepset/tinyroberta-6l-768d.\nSecondly, we have performed task-specific distillation with deepset/roberta-base-squad2 as the teacher for further intermediate layer distillation on an augmented version of SQuADv2 and then with deepset/roberta-large-squad2 as the teacher for prediction layer distillation.",
"## Usage",
"### In Haystack\nHaystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:",
"### In Transformers",
"## Performance\nEvaluated on the SQuAD 2.0 dev set with the official eval script.",
"## Authors\nBranden Chan: URL@URL \nTimo Möller: timo.moeller@URL \nMalte Pietsch: malte.pietsch@URL \nTanay Soni: URL@URL \nMichel Bartels: michel.bartels@URL",
"## About us\n\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- roberta-base-squad2\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|
# deepset/xlm-roberta-base-squad2-distilled
- Haystack's distillation feature was used for training, with deepset/xlm-roberta-large-squad2 as the teacher model.
## Overview
**Language model:** deepset/xlm-roberta-base-squad2-distilled
**Language:** Multilingual
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 1x Tesla v100
## Hyperparameters
```
batch_size = 56
n_epochs = 4
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 3
distillation_loss_weight = 0.75
```
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import FARMReader, TransformersReader  # Haystack v1.x import path

reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled")
# or
reader = TransformersReader(model_name_or_path="deepset/xlm-roberta-base-squad2-distilled", tokenizer="deepset/xlm-roberta-base-squad2-distilled")
```
For a complete example of `deepset/xlm-roberta-base-squad2-distilled` being used for question answering, check out the [Tutorials in Haystack Documentation](https://haystack.deepset.ai/tutorials/first-qa-system)
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/xlm-roberta-base-squad2-distilled"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
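Since the model is trained on SQuAD 2.0, it can also decide that a question has no answer in the given context. A small sketch (not from the original card) using the pipeline's `handle_impossible_answer` flag:
```python
from transformers import pipeline

nlp = pipeline("question-answering",
               model="deepset/xlm-roberta-base-squad2-distilled")

# handle_impossible_answer=True lets the pipeline return an empty answer
# when the no-answer score beats every candidate span.
res = nlp(
    question="What is the capital of France?",
    context="Haystack is an open-source NLP framework built by deepset.",
    handle_impossible_answer=True,
)
print(res)  # likely an empty answer, e.g. {'answer': '', 'start': 0, 'end': 0, ...}
```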
## Performance
Evaluated on the SQuAD 2.0 dev set
```
"exact": 74.06721131980123%
"f1": 76.39919553344667%
```
## Authors
**Timo Möller:** timo.moeller@deepset.ai
**Julian Risch:** julian.risch@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
**Michel Bartels:** michel.bartels@deepset.ai
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "multilingual", "license": "mit", "tags": ["exbert"], "datasets": ["squad_v2"], "thumbnail": "https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg"}
|
deepset/xlm-roberta-base-squad2-distilled
| null |
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"question-answering",
"exbert",
"multilingual",
"dataset:squad_v2",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"multilingual"
] |
TAGS
#transformers #pytorch #safetensors #xlm-roberta #question-answering #exbert #multilingual #dataset-squad_v2 #license-mit #endpoints_compatible #has_space #region-us
|
# deepset/xlm-roberta-base-squad2-distilled
- haystack's distillation feature was used for training. deepset/xlm-roberta-large-squad2 was used as the teacher model.
## Overview
Language model: deepset/xlm-roberta-base-squad2-distilled
Language: Multilingual
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0
Code: See an example QA pipeline on Haystack
Infrastructure: 1x Tesla v100
## Hyperparameters
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:
For a complete example of ''deepset/xlm-roberta-base-squad2-distilled'' being used for [question answering], check out the Tutorials in Haystack Documentation
### In Transformers
## Performance
Evaluated on the SQuAD 2.0 dev set
## Authors
Timo Möller: timo.moeller@URL
Julian Risch: URL@URL
Malte Pietsch: malte.pietsch@URL
Michel Bartels: michel.bartels@URL
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
</div>
deepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- Distilled roberta-base-squad2 (aka "tinyroberta-squad2")
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="URL repo and <strong><a href="URL">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="URL community open to everyone!</a></strong></p>
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# deepset/xlm-roberta-base-squad2-distilled\n- haystack's distillation feature was used for training. deepset/xlm-roberta-large-squad2 was used as the teacher model.",
"## Overview\nLanguage model: deepset/xlm-roberta-base-squad2-distilled \nLanguage: Multilingual \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nCode: See an example QA pipeline on Haystack \nInfrastructure: 1x Tesla v100",
"## Hyperparameters",
"## Usage",
"### In Haystack\nHaystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:\n\nFor a complete example of ''deepset/xlm-roberta-base-squad2-distilled'' being used for [question answering], check out the Tutorials in Haystack Documentation",
"### In Transformers",
"## Performance\nEvaluated on the SQuAD 2.0 dev set",
"## Authors\nTimo Möller: timo.moeller@URL \nJulian Risch: URL@URL \nMalte Pietsch: malte.pietsch@URL \nMichel Bartels: michel.bartels@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #question-answering #exbert #multilingual #dataset-squad_v2 #license-mit #endpoints_compatible #has_space #region-us \n",
"# deepset/xlm-roberta-base-squad2-distilled\n- haystack's distillation feature was used for training. deepset/xlm-roberta-large-squad2 was used as the teacher model.",
"## Overview\nLanguage model: deepset/xlm-roberta-base-squad2-distilled \nLanguage: Multilingual \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 \nCode: See an example QA pipeline on Haystack \nInfrastructure: 1x Tesla v100",
"## Hyperparameters",
"## Usage",
"### In Haystack\nHaystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in Haystack:\n\nFor a complete example of ''deepset/xlm-roberta-base-squad2-distilled'' being used for [question answering], check out the Tutorials in Haystack Documentation",
"### In Transformers",
"## Performance\nEvaluated on the SQuAD 2.0 dev set",
"## Authors\nTimo Möller: timo.moeller@URL \nJulian Risch: URL@URL \nMalte Pietsch: malte.pietsch@URL \nMichel Bartels: michel.bartels@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|
# Multilingual XLM-RoBERTa base for QA on various languages
## Overview
**Language model:** xlm-roberta-base
**Language:** Multilingual
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0 dev set - German MLQA - German XQuAD
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 22*4
n_epochs = 2
max_seq_len = 256
doc_stride = 128
learning_rate = 2e-5
```
Corresponding experiment logs in mlflow: [link](https://public-mlflow.deepset.ai/#/experiments/2/runs/b25ec75e07614accb3f1ce03d43dbe08)
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 73.91560683904657
"f1": 77.14103746689592
```
Evaluated on German MLQA: test-context-de-question-de.json
```
"exact": 33.67279167589108
"f1": 44.34437105434842
"total": 4517
```
Evaluated on German XQuAD: xquad.de.json
```
"exact": 48.739495798319325
"f1": 62.552615701071495
"total": 1190
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/xlm-roberta-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer
model_name = "deepset/xlm-roberta-base-squad2"
# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
"text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
### In haystack
For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import FARMReader, TransformersReader  # Haystack v1.x import path

reader = FARMReader(model_name_or_path="deepset/xlm-roberta-base-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/xlm-roberta-base-squad2", tokenizer="deepset/xlm-roberta-base-squad2")
```
## Authors
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"license": "cc-by-4.0", "datasets": ["squad_v2"], "model-index": [{"name": "deepset/xlm-roberta-base-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 74.0354, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWMxNWQ2ODJkNWIzZGQwOWI4OTZjYjU3ZDVjZGQzMjI5MzljNjliZTY4Mzk4YTk4OTMzZWYxZjUxYmZhYTBhZSIsInZlcnNpb24iOjF9.eEeFYYJ30BfJDd-JYfI1kjlxJrRF6OFtj2GnkTCOO4kqX31inFy8ptDWusVlLFsUphm4dNWfTKXC5e-gytLBDA"}, {"type": "f1", "value": 77.1833, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjg4MjNkOTA4Y2I5OGFlYTk1NWZjMWFlNjI5M2Y0NGZhMThhN2M4YmY2Y2RhZjcwYzU0MGNjN2RkZDljZmJmNiIsInZlcnNpb24iOjF9.TX42YMXpH4e0qu7cC4ARDlZWSkd55dwwyeyFXmOlXERNnEicDuFBCsy8WHLaqQCLUkzODJ22Hw4zhv81rwnlAQ"}]}]}]}
|
deepset/xlm-roberta-base-squad2
| null |
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"question-answering",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #xlm-roberta #question-answering #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us
|
# Multilingual XLM-RoBERTa base for QA on various languages
## Overview
Language model: xlm-roberta-base
Language: Multilingual
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0 dev set - German MLQA - German XQuAD
Code: See example in FARM
Infrastructure: 4x Tesla v100
## Hyperparameters
Corresponding experiment logs in mlflow: link
## Performance
Evaluated on the SQuAD 2.0 dev set with the official eval script.
Evaluated on German MLQA: URL
"exact": 33.67279167589108
"f1": 44.34437105434842
"total": 4517
Evaluated on German XQuAD: URL
"exact": 48.739495798319325
"f1": 62.552615701071495
"total": 1190
## Usage
### In Transformers
### In FARM
### In haystack
For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in haystack:
## Authors
Branden Chan: 'URL [at] URL'
Timo Möller: 'timo.moeller [at] URL'
Malte Pietsch: 'malte.pietsch [at] URL'
Tanay Soni: 'URL [at] URL'
## About us
!deepset logo
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack
Get in touch:
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# Multilingual XLM-RoBERTa base for QA on various languages",
"## Overview\nLanguage model: xlm-roberta-base \nLanguage: Multilingual \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 dev set - German MLQA - German XQuAD \nCode: See example in FARM \nInfrastructure: 4x Tesla v100",
"## Hyperparameters\n\n \n\nCorresponding experiment logs in mlflow: link",
"## Performance\nEvaluated on the SQuAD 2.0 dev set with the official eval script.\n\n\nEvaluated on German MLQA: URL\n \"exact\": 33.67279167589108\n \"f1\": 44.34437105434842\n \"total\": 4517\n\nEvaluated on German XQuAD: URL\n\"exact\": 48.739495798319325\n \"f1\": 62.552615701071495\n \"total\": 1190",
"## Usage",
"### In Transformers",
"### In FARM",
"### In haystack\nFor doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in haystack:",
"## Authors\nBranden Chan: 'URL [at] URL'\nTimo Möller: 'timo.moeller [at] URL'\nMalte Pietsch: 'malte.pietsch [at] URL'\nTanay Soni: 'URL [at] URL'",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #question-answering #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Multilingual XLM-RoBERTa base for QA on various languages",
"## Overview\nLanguage model: xlm-roberta-base \nLanguage: Multilingual \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD 2.0 dev set - German MLQA - German XQuAD \nCode: See example in FARM \nInfrastructure: 4x Tesla v100",
"## Hyperparameters\n\n \n\nCorresponding experiment logs in mlflow: link",
"## Performance\nEvaluated on the SQuAD 2.0 dev set with the official eval script.\n\n\nEvaluated on German MLQA: URL\n \"exact\": 33.67279167589108\n \"f1\": 44.34437105434842\n \"total\": 4517\n\nEvaluated on German XQuAD: URL\n\"exact\": 48.739495798319325\n \"f1\": 62.552615701071495\n \"total\": 1190",
"## Usage",
"### In Transformers",
"### In FARM",
"### In haystack\nFor doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in haystack:",
"## Authors\nBranden Chan: 'URL [at] URL'\nTimo Möller: 'timo.moeller [at] URL'\nMalte Pietsch: 'malte.pietsch [at] URL'\nTanay Soni: 'URL [at] URL'",
"## About us\n!deepset logo\n\nWe bring NLP to the industry via open source! \nOur focus: Industry specific language models & large scale QA systems. \n \nSome of our work: \n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")\n- FARM\n- Haystack\n\nGet in touch:\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
question-answering
|
transformers
|
# Multilingual XLM-RoBERTa large for QA on various languages
## Overview
**Language model:** xlm-roberta-large
**Language:** Multilingual
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD dev set - German MLQA - German XQuAD
**Training run:** [MLFlow link](https://public-mlflow.deepset.ai/#/experiments/124/runs/3a540e3f3ecf4dd98eae8fc6d457ff20)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "xlm-roberta-large"
max_seq_len = 256
learning_rate = 1e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
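`LinearWarmup` with `warmup_proportion = 0.2` ramps the learning rate up over the first 20% of steps and then decays it linearly. A rough transformers-API equivalent (illustrative only; `total_steps` is an assumption, not the actual value):
```python
import torch
from transformers import AutoModelForQuestionAnswering, get_linear_schedule_with_warmup

model = AutoModelForQuestionAnswering.from_pretrained("xlm-roberta-large")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

total_steps = 10_000                    # illustrative; depends on data size and epochs
warmup_steps = int(0.2 * total_steps)   # warmup_proportion = 0.2

scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)
```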
## Performance
Evaluated on the SQuAD 2.0 English dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.45759285774446,
"f1": 83.79259828925511,
"total": 11873,
"HasAns_exact": 71.96356275303644,
"HasAns_f1": 80.6460053117963,
"HasAns_total": 5928,
"NoAns_exact": 86.93019343986543,
"NoAns_f1": 86.93019343986543,
"NoAns_total": 5945
```
Evaluated on German [MLQA: test-context-de-question-de.json](https://github.com/facebookresearch/MLQA)
```
"exact": 49.34691166703564,
"f1": 66.15582561674236,
"total": 4517,
```
Evaluated on German [XQuAD: xquad.de.json](https://github.com/deepmind/xquad)
```
"exact": 61.51260504201681,
"f1": 78.80206098332569,
"total": 1190,
```
## Usage
### In Haystack
For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in [haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import FARMReader, TransformersReader  # Haystack v1.x import path

reader = FARMReader(model_name_or_path="deepset/xlm-roberta-large-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/xlm-roberta-large-squad2", tokenizer="deepset/xlm-roberta-large-squad2")
```
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/xlm-roberta-large-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
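Because the model is multilingual, the same pipeline works for non-English inputs as well; a quick illustrative example in German (not from the original card):
```python
from transformers import pipeline

nlp = pipeline("question-answering", model="deepset/xlm-roberta-large-squad2")
res = nlp(
    question="Wer ist das Unternehmen hinter Haystack?",
    context="deepset ist das Unternehmen hinter dem Open-Source-NLP-Framework Haystack.",
)
print(res["answer"])  # likely "deepset"
```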
## Authors
**Branden Chan:** branden.chan@deepset.ai
**Timo Möller:** timo.moeller@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
**Tanay Soni:** tanay.soni@deepset.ai
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/) which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
{"language": "multilingual", "license": "cc-by-4.0", "tags": ["question-answering"], "datasets": ["squad_v2"], "model-index": [{"name": "deepset/xlm-roberta-large-squad2", "results": [{"task": {"type": "question-answering", "name": "Question Answering"}, "dataset": {"name": "squad_v2", "type": "squad_v2", "config": "squad_v2", "split": "validation"}, "metrics": [{"type": "exact_match", "value": 81.8281, "name": "Exact Match", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzVhZDE2NTg5NmUwOWRkMmI2MGUxYjFlZjIzNmMyNDQ2MDY2MDNhYzE0ZjY5YTkyY2U4ODc3ODFiZjQxZWQ2YSIsInZlcnNpb24iOjF9.f_rN3WPMAdv-OBPz0T7N7lOxYz9f1nEr_P-vwKhi3jNdRKp_JTy18MYR9eyJM2riKHC6_ge-8XwfyrUf51DSDA"}, {"type": "f1", "value": 84.8886, "name": "F1", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGE5MWJmZGUxMGMwNWFhYzVhZjQwZGEwOWQ4N2Q2Yjg5NzdjNDFiNDhiYTQ1Y2E5ZWJkOTFhYmI1Y2Q2ZGYwOCIsInZlcnNpb24iOjF9.TIdH-tOx3kEMDs5wK1r6iwZqqSjNGlBrpawrsE917j1F3UFJVnQ7wJwaj0OIgmC4iw8OQeLZL56ucBcLApa-AQ"}]}]}]}
|
deepset/xlm-roberta-large-squad2
| null |
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"question-answering",
"multilingual",
"dataset:squad_v2",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"multilingual"
] |
TAGS
#transformers #pytorch #safetensors #xlm-roberta #question-answering #multilingual #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us
|
# Multilingual XLM-RoBERTa large for QA on various languages
## Overview
Language model: xlm-roberta-large
Language: Multilingual
Downstream-task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD dev set - German MLQA - German XQuAD
Training run: MLFlow link
Infrastructure: 4x Tesla v100
## Hyperparameters
## Performance
Evaluated on the SQuAD 2.0 English dev set with the official eval script.
Evaluated on German MLQA: URL
Evaluated on German XQuAD: URL
## Usage
### In Haystack
For doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in haystack:
### In Transformers
## Authors
Branden Chan: URL@URL
Timo Möller: timo.moeller@URL
Malte Pietsch: malte.pietsch@URL
Tanay Soni: URL@URL
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="URL class="w-40"/>
</div>
</div>
deepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.
Some of our other work:
- Distilled roberta-base-squad2 (aka "tinyroberta-squad2")
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="URL repo and <strong><a href="URL">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="URL community open to everyone!</a></strong></p>
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
|
[
"# Multilingual XLM-RoBERTa large for QA on various languages",
"## Overview\nLanguage model: xlm-roberta-large \nLanguage: Multilingual \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD dev set - German MLQA - German XQuAD \nTraining run: MLFlow link \nInfrastructure: 4x Tesla v100",
"## Hyperparameters",
"## Performance\nEvaluated on the SQuAD 2.0 English dev set with the official eval script.\n\n\nEvaluated on German MLQA: URL\n\n\nEvaluated on German XQuAD: URL",
"## Usage",
"### In Haystack\nFor doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in haystack:",
"### In Transformers",
"## Authors\nBranden Chan: URL@URL \nTimo Möller: timo.moeller@URL \nMalte Pietsch: malte.pietsch@URL \nTanay Soni: URL@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
[
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #question-answering #multilingual #dataset-squad_v2 #license-cc-by-4.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Multilingual XLM-RoBERTa large for QA on various languages",
"## Overview\nLanguage model: xlm-roberta-large \nLanguage: Multilingual \nDownstream-task: Extractive QA \nTraining data: SQuAD 2.0 \nEval data: SQuAD dev set - German MLQA - German XQuAD \nTraining run: MLFlow link \nInfrastructure: 4x Tesla v100",
"## Hyperparameters",
"## Performance\nEvaluated on the SQuAD 2.0 English dev set with the official eval script.\n\n\nEvaluated on German MLQA: URL\n\n\nEvaluated on German XQuAD: URL",
"## Usage",
"### In Haystack\nFor doing QA at scale (i.e. many docs instead of single paragraph), you can load the model also in haystack:",
"### In Transformers",
"## Authors\nBranden Chan: URL@URL \nTimo Möller: timo.moeller@URL \nMalte Pietsch: malte.pietsch@URL \nTanay Soni: URL@URL",
"## About us\n<div class=\"grid lg:grid-cols-2 gap-x-4 gap-y-3\">\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n <div class=\"w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center\">\n <img alt=\"\" src=\"URL class=\"w-40\"/>\n </div>\n</div>\n\ndeepset is the company behind the open-source NLP framework Haystack which is designed to help you build production ready NLP systems that use: Question answering, summarization, ranking etc.\n\n\nSome of our other work: \n- Distilled roberta-base-squad2 (aka \"tinyroberta-squad2\")\n- German BERT (aka \"bert-base-german-cased\")\n- GermanQuAD and GermanDPR datasets and models (aka \"gelectra-base-germanquad\", \"gbert-base-germandpr\")",
"## Get in touch and join the Haystack community\n\n<p>For more info on Haystack, visit our <strong><a href=\"URL repo and <strong><a href=\"URL\">Documentation</a></strong>. \n\nWe also have a <strong><a class=\"h-7\" href=\"URL community open to everyone!</a></strong></p>\n\nTwitter | LinkedIn | Discord | GitHub Discussions | Website\n\nBy the way: we're hiring!"
] |
fill-mask
|
transformers
|
deeqBERT-base
---
- model: bert-base
- vocab: bert-wordpiece, 35k
- version: latest
|
{"language": "ko", "datasets": ["kowiki", "news"]}
|
baikal-nlp/dbert
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"ko",
"dataset:kowiki",
"dataset:news",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #bert #fill-mask #ko #dataset-kowiki #dataset-news #autotrain_compatible #endpoints_compatible #region-us
|
deeqBERT-base
---
- model: bert-base
- vocab: bert-wordpiece, 35k
- version: latest
|
[] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #ko #dataset-kowiki #dataset-news #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null |
transformers
|
deeqBERT5
---
- model: bert-base
- vocab: deeqnlp 1.5, 50k
- version: latest/3.5
|
{}
|
baikal-nlp/dbert5
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #endpoints_compatible #region-us
|
deeqBERT5
---
- model: bert-base
- vocab: deeqnlp 1.5, 50k
- version: latest/3.5
|
[] |
[
"TAGS\n#transformers #pytorch #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
deeqELECTRA-base
---
- model: electra-base-generator
- vocab: bert-wordpiece, 35k
- version: beta, 1.71M
|
{"language": "ko", "datasets": ["kowiki", "news"]}
|
baikal-nlp/delectra-generator
| null |
[
"transformers",
"pytorch",
"electra",
"fill-mask",
"ko",
"dataset:kowiki",
"dataset:news",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #electra #fill-mask #ko #dataset-kowiki #dataset-news #autotrain_compatible #endpoints_compatible #region-us
|
deeqELECTRA-base
---
- model: electra-base-generator
- vocab: bert-wordpiece, 35k
- version: beta, 1.71M
|
[] |
[
"TAGS\n#transformers #pytorch #electra #fill-mask #ko #dataset-kowiki #dataset-news #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null |
transformers
|
deeqELECTRA-base
---
- model: electra-base-discriminator
- vocab: bert-wordpiece, 35k
- version: beta, 1.71M
|
{"language": "ko", "datasets": ["kowiki", "news"]}
|
baikal-nlp/delectra
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"ko",
"dataset:kowiki",
"dataset:news",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #electra #pretraining #ko #dataset-kowiki #dataset-news #endpoints_compatible #region-us
|
deeqELECTRA-base
---
- model: electra-base-discriminator
- vocab: bert-wordpiece, 35k
- version: beta, 1.71M
|
[] |
[
"TAGS\n#transformers #pytorch #electra #pretraining #ko #dataset-kowiki #dataset-news #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-amazon-reviews
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
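Since the training script itself is not included, here is a rough (assumed) mapping of the listed values onto `transformers.TrainingArguments`:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilgpt2-finetuned-amazon-reviews",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",  # the Adam betas/epsilon above are the defaults
    num_train_epochs=3.0,
)
```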
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": [], "model_index": [{"name": "distilgpt2-finetuned-amazon-reviews", "results": [{"task": {"name": "Causal Language Modeling", "type": "text-generation"}}]}]}
|
defex/distilgpt2-finetuned-amazon-reviews
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# distilgpt2-finetuned-amazon-reviews
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Datasets 1.9.0
- Tokenizers 0.10.3
|
[
"# distilgpt2-finetuned-amazon-reviews\n\nThis model was trained from scratch on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.9.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# distilgpt2-finetuned-amazon-reviews\n\nThis model was trained from scratch on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.8.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.9.0\n- Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
# german-qg-t5-drink600
This model is fine-tuned for question generation in German. The expected answer must be highlighted with the <hl> token. It is based on [german-qg-t5-quad](https://huggingface.co/dehio/german-qg-t5-quad) and further fine-tuned on drink-related questions.
## Task example
#### Input
generate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung,
die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert.
#### Expected Question
Zu welchen Gelegenheiten passt der Monk Sour gut?
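A minimal sketch (not part of the original card) of reproducing this example with the transformers `text2text-generation` pipeline:
```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="dehio/german-qg-t5-drink600")

text = (
    "generate question: Der Monk Sour Drink ist ein somit eine aromatische "
    "Überraschung, die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert."
)
print(qg(text)[0]["generated_text"])
# expected to be close to: "Zu welchen Gelegenheiten passt der Monk Sour gut?"
```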
## Model description
The model is based on [german-qg-t5-quad](https://huggingface.co/dehio/german-qg-t5-quad), which was pre-trained on [GermanQUAD](https://www.deepset.ai/germanquad). We further fine-tuned it on questions annotated on drink recipes from [Mixology](https://mixology.eu/) ("drink600").
We have not yet open-sourced the dataset, since we do not own the copyright on the source material.
## Training and evaluation data
The training script can be accessed [here](https://github.com/d-e-h-i-o/german-qg).
## Evaluation
It achieves a **BLEU-4 score of 29.80** on the drink600 test set (n=120) and **11.30** on the GermanQUAD test set.
Thus, fine-tuning on drink600 did not affect performance on GermanQUAD.
In comparison, *german-qg-t5-quad* achieves a BLEU-4 score of **10.76** on the drink600 test set.
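The BLEU-4 numbers can be computed with any standard corpus-BLEU implementation; an illustrative sketch with `sacrebleu` (toy hypothesis/reference pair, not the actual test set):
```python
import sacrebleu

# One hypothesis and one parallel reference stream (toy data).
hypotheses = ["Zu welchen Gelegenheiten passt der Monk Sour gut?"]
references = [["Zu welchen Gelegenheiten passt der Monk Sour?"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)  # corpus-level BLEU-4 by default
```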
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 100
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
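Note that the effective batch size is train_batch_size × gradient_accumulation_steps = 2 × 8 = 16, matching total_train_batch_size. As a rough sketch of the setup (the actual script is linked above; this mapping is an assumption):
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="german-qg-t5-drink600",
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=100,
    gradient_accumulation_steps=8,  # 2 * 8 = effective batch size of 16
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```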
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"language": ["de"], "license": "mit", "tags": ["question generation"], "datasets": ["deepset/germanquad"], "widget": [{"text": "generate question: Der Monk Sour Drink ist ein somit eine aromatische \u00dcberraschung, die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert."}], "model-index": [{"name": "german-qg-t5-drink600", "results": []}]}
|
dehio/german-qg-t5-drink600
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"de",
"dataset:deepset/germanquad",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #question generation #de #dataset-deepset/germanquad #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# german-qg-t5-drink600
This model is fine-tuned in question generation in German. The expected answer must be highlighted with <hl> token. It is based on german-qg-t5-quad and further pre-trained on drink related questions.
## Task example
#### Input
generate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung,
die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert.
#### Expected Question
Zu welchen Gelegenheiten passt der Monk Sour gut?
## Model description
The model is based on german-qg-t5-quad, which was pre-trained on GermanQUAD. We further pre-trained it on questions annotated on drink receipts from Mixology ("drink600").
We have not yet open sourced the dataset, since we do not own copyright on the source material.
## Training and evaluation data
The training script can be accessed here.
## Evaluation
It achieves a BLEU-4 score of 29.80 on the drink600 test set (n=120) and 11.30 on the GermanQUAD test set.
Thus, fine-tuning on drink600 did not affect performance on GermanQuAD.
In comparison, *german-qg-t5-quad* achieves a BLEU-4 score of 10.76 on the drink600 test set.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 100
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
[
"# german-qg-t5-drink600\n\nThis model is fine-tuned in question generation in German. The expected answer must be highlighted with <hl> token. It is based on german-qg-t5-quad and further pre-trained on drink related questions.",
"## Task example",
"#### Input\n\ngenerate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung, \ndie sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert.",
"#### Expected Question\nZu welchen Gelegenheiten passt der Monk Sour gut?",
"## Model description\n\nThe model is based on german-qg-t5-quad, which was pre-trained on GermanQUAD. We further pre-trained it on questions annotated on drink receipts from Mixology (\"drink600\"). \nWe have not yet open sourced the dataset, since we do not own copyright on the source material.",
"## Training and evaluation data\n\nThe training script can be accessed here.",
"## Evaluation\n\nIt achieves a BLEU-4 score of 29.80 on the drink600 test set (n=120) and 11.30 on the GermanQUAD test set. \nThus, fine-tuning on drink600 did not affect performance on GermanQuAD.\n\nIn comparison, *german-qg-t5-quad* achieves a BLEU-4 score of 10.76 on the drink600 test set.",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 100\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #question generation #de #dataset-deepset/germanquad #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# german-qg-t5-drink600\n\nThis model is fine-tuned in question generation in German. The expected answer must be highlighted with <hl> token. It is based on german-qg-t5-quad and further pre-trained on drink related questions.",
"## Task example",
"#### Input\n\ngenerate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung, \ndie sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert.",
"#### Expected Question\nZu welchen Gelegenheiten passt der Monk Sour gut?",
"## Model description\n\nThe model is based on german-qg-t5-quad, which was pre-trained on GermanQUAD. We further pre-trained it on questions annotated on drink receipts from Mixology (\"drink600\"). \nWe have not yet open sourced the dataset, since we do not own copyright on the source material.",
"## Training and evaluation data\n\nThe training script can be accessed here.",
"## Evaluation\n\nIt achieves a BLEU-4 score of 29.80 on the drink600 test set (n=120) and 11.30 on the GermanQUAD test set. \nThus, fine-tuning on drink600 did not affect performance on GermanQuAD.\n\nIn comparison, *german-qg-t5-quad* achieves a BLEU-4 score of 10.76 on the drink600 test set.",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 100\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german-qg-t5-e2e-quad (Work in progress)
This model is an end-to-end question generation model in German. Given a text, it generates several questions about it. This model is a fine-tuned version of [valhalla/t5-base-e2e-qg](https://huggingface.co/valhalla/t5-base-e2e-qg) on the [GermanQuAD dataset from deepset](https://huggingface.co/datasets/deepset/germanquad).
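Inference could be sketched as follows (assumptions: the `text2text-generation` pipeline task, and a `max_length` of 128 as suggested by the widget settings in this card):

```python
from transformers import pipeline

# Sketch only: end-to-end question generation from a plain German passage.
pipe = pipeline("text2text-generation", model="dehio/german-qg-t5-e2e-quad")

context = ("Naturschutzwarte haben auf der ostfriesischen Insel Wangerooge "
           "zwei seltene Kurzschnäuzige Seepferdchen entdeckt.")
print(pipe(context, max_length=128)[0]["generated_text"])
```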
## Model description
More information needed
## Training and evaluation data
Bleu_1: 0.196051
Bleu_2: 0.122380
Bleu_3: 0.079980
Bleu_4: 0.053672
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"language": ["de"], "license": "mit", "tags": ["question generation"], "datasets": ["deepset/germanquad"], "widget": [{"text": "Naturschutzwarte haben auf der ostfriesischen Insel Wangerooge zwei seltene Kurzschn\u00e4uzige Seepferdchen entdeckt. Die Tiere seien vergangene Woche bei einer sogenannten Sp\u00fclsaumkontrolle entdeckt worden, bei der die Str\u00e4nde eigentlich nach M\u00fcll und toten V\u00f6geln abgesucht w\u00fcrden, sagte der Gesch\u00e4ftsf\u00fchrer der zust\u00e4ndigen Naturschutz- und Forschungsgemeinschaft Mellumrat, Mathias Heckroth. Dabei seien den Natursch\u00fctzern am Nordstrand kurz hintereinander die beiden leblosen, nur wenige Zentimeter gro\u00dfen Tiere aufgefallen. Experten der Nationalparkverwaltung bestimmten beide Tiere als Kurzschn\u00e4uzige Seepferdchen (Hippocampus hippocampus)."}], "inference": {"parameters": {"max_length": 128}}, "model-index": [{"name": "german-qg-t5-e2e-quad", "results": []}]}
|
dehio/german-qg-t5-e2e-quad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"question generation",
"de",
"dataset:deepset/germanquad",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #question generation #de #dataset-deepset/germanquad #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# german-qg-t5-e2e-quad (Work in progress)
This model is an end-to-end question generation model in German. Given a text, it generates several questions about it. This model is a fine-tuned version of valhalla/t5-base-e2e-qg on the GermanQuAD dataset from deepset.
## Model description
More information needed
## Training and evaluation data
Bleu_1: 0.196051
Bleu_2: 0.122380
Bleu_3: 0.079980
Bleu_4: 0.053672
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
[
"# german-qg-t5-e2e-quad (Work in progress)\n\nThis model is a end-to-end question generation model in German. Given a text, it generates several questions about it. This model is a fine-tuned version of valhalla/t5-base-e2e-qg on the GermanQuAD dataset from deepset.",
"## Model description \n\nMore information needed",
"## Training and evaluation data\n\nBleu_1: 0.196051 \nBleu_2: 0.122380 \nBleu_3: 0.079980 \nBleu_4: 0.053672",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #question generation #de #dataset-deepset/germanquad #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# german-qg-t5-e2e-quad (Work in progress)\n\nThis model is a end-to-end question generation model in German. Given a text, it generates several questions about it. This model is a fine-tuned version of valhalla/t5-base-e2e-qg on the GermanQuAD dataset from deepset.",
"## Model description \n\nMore information needed",
"## Training and evaluation data\n\nBleu_1: 0.196051 \nBleu_2: 0.122380 \nBleu_3: 0.079980 \nBleu_4: 0.053672",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
# german-qg-t5-quad
This model is fine-tuned for question generation in German. The expected answer must be highlighted with a
<hl> token.
## Task example
#### Input
generate question: Obwohl die Vereinigten Staaten wie auch viele Staaten des Commonwealth Erben des <hl> britischen Common Laws <hl> sind, setzt sich das amerikanische Recht bedeutend davon ab. Dies rührt größtenteils von dem langen Zeitraum her, [...]
#### Expected output
Von welchem Gesetzt stammt das Amerikanische ab?
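A minimal inference sketch (the pipeline task and generation length are assumptions, not from the card):

```python
from transformers import pipeline

# Sketch only: generate a question for the highlighted answer span.
pipe = pipeline("text2text-generation", model="dehio/german-qg-t5-quad")

text = ("generate question: Obwohl die Vereinigten Staaten wie auch viele Staaten "
        "des Commonwealth Erben des <hl>britischen Common Laws<hl> sind, setzt "
        "sich das amerikanische Recht bedeutend davon ab.")
print(pipe(text, max_length=64)[0]["generated_text"])
```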
## Model description
This model is a fine-tuned version of [valhalla/t5-base-qg-hl](https://huggingface.co/valhalla/t5-base-qg-hl) on the [GermanQUAD](https://www.deepset.ai/germanquad) dataset.
## Training and evaluation data
The training script can be accessed [here](https://github.com/d-e-h-i-o/german-qg).
### Evaluation
The model achieves a BLEU-4 score of **11.30** on the GermanQuAD test set (n=2204).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 100
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"language": ["de"], "license": "mit", "tags": ["question generation"], "datasets": ["deepset/germanquad"], "widget": [{"text": "Obwohl die Vereinigten Staaten wie auch viele Staaten des Commonwealth Erben des <hl>britischen Common Laws<hl> sind, setzt sich das amerikanische Recht bedeutend davon ab."}], "model-index": [{"name": "german-qg-t5-quad", "results": []}]}
|
dehio/german-qg-t5-quad
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question generation",
"de",
"dataset:deepset/germanquad",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #question generation #de #dataset-deepset/germanquad #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# german-qg-t5-quad
This model is fine-tuned for question generation in German. The expected answer must be highlighted with a
<hl> token.
## Task example
#### Input
generate question: Obwohl die Vereinigten Staaten wie auch viele Staaten des Commonwealth Erben des <hl> britischen Common Laws <hl> sind, setzt sich das amerikanische Recht bedeutend davon ab. Dies rührt größtenteils von dem langen Zeitraum her, [...]
#### Expected output
Von welchem Gesetzt stammt das Amerikanische ab?
## Model description
This model is a fine-tuned version of valhalla/t5-base-qg-hl on the GermanQUAD dataset.
## Training and evaluation data
The training script can be accessed here.
### Evaluation
The model achieves a BLEU-4 score of 11.30 on the GermanQuAD test set (n=2204).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 100
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
[
"# german-qg-t5-quad\n\nThis model is fine-tuned in question generation in German. The expected answer must be highlighted with a\n<hl> token.",
"## Task example",
"#### Input\n\ngenerate question: Obwohl die Vereinigten Staaten wie auch viele Staaten des Commonwealth Erben des <hl> britischen Common Laws <hl> sind, setzt sich das amerikanische Recht bedeutend davon ab. Dies rührt größtenteils von dem langen Zeitraum her, [...]",
"#### Expected output\n\nVon welchem Gesetzt stammt das Amerikanische ab?",
"## Model description\n\nThis model is a fine-tuned version of valhalla/t5-base-qg-hl on the GermanQUAD dataset.",
"## Training and evaluation data\n\nThe training script can be accessed here.",
"### Evaluation\n\nThe model achieves a BLEU-4 score of 11.30 on the GermanQuAD test set (n=2204).",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 100\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #question generation #de #dataset-deepset/germanquad #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# german-qg-t5-quad\n\nThis model is fine-tuned in question generation in German. The expected answer must be highlighted with a\n<hl> token.",
"## Task example",
"#### Input\n\ngenerate question: Obwohl die Vereinigten Staaten wie auch viele Staaten des Commonwealth Erben des <hl> britischen Common Laws <hl> sind, setzt sich das amerikanische Recht bedeutend davon ab. Dies rührt größtenteils von dem langen Zeitraum her, [...]",
"#### Expected output\n\nVon welchem Gesetzt stammt das Amerikanische ab?",
"## Model description\n\nThis model is a fine-tuned version of valhalla/t5-base-qg-hl on the GermanQUAD dataset.",
"## Training and evaluation data\n\nThe training script can be accessed here.",
"### Evaluation\n\nThe model achieves a BLEU-4 score of 11.30 on the GermanQuAD test set (n=2204).",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 100\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10",
"### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.10.0+cu102\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0602
- Precision: 0.9251
- Recall: 0.9370
- F1: 0.9310
- Accuracy: 0.9839
## Model description
More information needed
## Intended uses & limitations
More information needed
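As a rough illustration, inference could look like this (a sketch; the model id is taken from the repository name and the example sentence is hypothetical):

```python
from transformers import pipeline

# Sketch only: NER inference with this checkpoint, merging subword entities.
ner = pipeline("ner",
               model="delpart/distilbert-base-uncased-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```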
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2435 | 1.0 | 878 | 0.0685 | 0.9182 | 0.9221 | 0.9202 | 0.9816 |
| 0.0515 | 2.0 | 1756 | 0.0602 | 0.9212 | 0.9368 | 0.9289 | 0.9834 |
| 0.0301 | 3.0 | 2634 | 0.0602 | 0.9251 | 0.9370 | 0.9310 | 0.9839 |
### Framework versions
- Transformers 4.11.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.925115970841617, "name": "Precision"}, {"type": "recall", "value": 0.9370175634858485, "name": "Recall"}, {"type": "f1", "value": 0.9310287333963209, "name": "F1"}, {"type": "accuracy", "value": 0.9839388692074285, "name": "Accuracy"}]}]}]}
|
delpart/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-ner
=====================================
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0602
* Precision: 0.9251
* Recall: 0.9370
* F1: 0.9310
* Accuracy: 0.9839
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.11.2
* Pytorch 1.9.0+cu102
* Datasets 1.12.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.12.1\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# DialoGPT-medium-based model of Dwight Schrute, trained with 10 context lines of history for 20 epochs.
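A single-turn chat could be sketched like this (assumptions: the standard DialoGPT usage pattern; model id taken from the repository name):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: one conversational turn with the fine-tuned checkpoint.
tokenizer = AutoTokenizer.from_pretrained("delvan/DialoGPT-medium-DwightV1")
model = AutoModelForCausalLM.from_pretrained("delvan/DialoGPT-medium-DwightV1")

input_ids = tokenizer.encode("Hi Dwight, how are the beets?" + tokenizer.eos_token,
                             return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100,
                           pad_token_id=tokenizer.eos_token_id)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```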
|
{"tags": ["conversational"]}
|
delvan/DialoGPT-medium-DwightV1
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# DialoGPT-medium-based model of Dwight Schrute, trained with 10 context lines of history for 20 epochs.
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
feature-extraction
|
transformers
|
This is a fine-tuned version of [SimCSE: Simple Contrastive Learning of Sentence Embeddings](https://arxiv.org/abs/2104.08821),
trained unsupervised on 570K stroke-related sentences from stroke books, Quora medical questions, Quora stroke questions, and human annotations.
### Extract sentence representation
```
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("demdecuong/stroke_simcse")
model = AutoModel.from_pretrained("demdecuong/stroke_simcse")

text = "What are disease related to red stroke's causes?"
inputs = tokenizer(text, return_tensors='pt')
outputs = model(**inputs)[1]  # index 1 selects the pooler output used as the sentence embedding
```
### Build up embedding for database
```
import torch

database = [
    'What is the daily checklist for stroke returning home',
    'What are some tips for stroke adapt new life',
    'What should I consider when using nursing-home care'
]

embedding = torch.zeros((len(database), 768))  # 768 = hidden size of the BERT encoder
for i in range(len(database)):
    inputs = tokenizer(database[i], return_tensors="pt")
    outputs = model(**inputs)[1]  # pooler output as the sentence embedding
    embedding[i] = outputs

print(embedding.shape)
```
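Continuing from the two snippets above, nearest-neighbour retrieval over the database could be sketched with cosine similarity (an illustration, not part of the original card):

```
import torch.nn.functional as F

# Sketch only: rank database entries by cosine similarity to a query embedding,
# reusing `text`, `tokenizer`, `model`, `embedding` and `database` from above.
query = model(**tokenizer(text, return_tensors='pt'))[1]
scores = F.cosine_similarity(query, embedding)
print(database[int(scores.argmax())])
```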
### Result
On our PoC test set, which contains human-generated pairs of matching stroke-related questions.
| Model | Top-1 Accuracy |
| ------------- | ------------- |
| SimCSE (supervised) | 75.83 |
| SimCSE (ours) | 76.66 |
|
{}
|
demdecuong/stroke_simcse
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.08821",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.08821"
] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #arxiv-2104.08821 #endpoints_compatible #region-us
|
This is a fine-tuned version of SimCSE: Simple Contrastive Learning of Sentence Embeddings,
trained unsupervised on 570K stroke-related sentences from stroke books, Quora medical questions, Quora stroke questions, and human annotations.
### Extract sentence representation
### Build up embedding for database
### Result
On our PoC test set, which contains human-generated pairs of matching stroke-related questions.
|
[
"### Extract sentence representation",
"### Build up embedding for database",
"### Result\n\n\nOn our Poc testset , which contains pairs of matching question related to stroke from human-generated."
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2104.08821 #endpoints_compatible #region-us \n",
"### Extract sentence representation",
"### Build up embedding for database",
"### Result\n\n\nOn our Poc testset , which contains pairs of matching question related to stroke from human-generated."
] |
feature-extraction
|
transformers
|
This is a fine-tuned version of [SimCSE: Simple Contrastive Learning of Sentence Embeddings](https://arxiv.org/abs/2104.08821):
- Trained supervised on 100K triplet samples related to the stroke domain, drawn from stroke books, Quora medical questions, Quora stroke questions, Quora general questions, and human annotations.
- Positive sentences are generated by paraphrasing and back-translation.
- Negative sentences are randomly selected from the general domain.
### Extract sentence representation
```
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("demdecuong/stroke_sup_simcse")
model = AutoModel.from_pretrained("demdecuong/stroke_sup_simcse")

text = "What are disease related to red stroke's causes?"
inputs = tokenizer(text, return_tensors='pt')
outputs = model(**inputs)[1]  # index 1 selects the pooler output used as the sentence embedding
```
### Build up embedding for database
```
import torch

database = [
    'What is the daily checklist for stroke returning home',
    'What are some tips for stroke adapt new life',
    'What should I consider when using nursing-home care'
]

embedding = torch.zeros((len(database), 768))  # 768 = hidden size of the BERT encoder
for i in range(len(database)):
    inputs = tokenizer(database[i], return_tensors="pt")
    outputs = model(**inputs)[1]  # pooler output as the sentence embedding
    embedding[i] = outputs

print(embedding.shape)
```
### Result
On our company's PoC project, the test set contains human-generated positive/negative pairs of matching stroke-related questions.
- SimCSE supervised + 100k: trained on 100K triplet samples covering the medical, stroke, and general domains
- SimCSE supervised + 42k: trained on 42K triplet samples covering the medical and stroke domains
| Model | Top-1 Accuracy |
| ------------- | ------------- |
| SimCSE supervised (author) | 75.83 |
| SimCSE unsupervised (ours) | 76.66 |
| SimCSE supervised + 100k (ours) | 73.33 |
| SimCSE supervised + 42k (ours) | 75.83 |
|
{}
|
demdecuong/stroke_sup_simcse
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.08821",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.08821"
] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #arxiv-2104.08821 #endpoints_compatible #region-us
|
This is a fine-tuned version of SimCSE: Simple Contrastive Learning of Sentence Embeddings:
* Trained supervised on 100K triplet samples related to the stroke domain, drawn from stroke books, Quora medical questions, Quora stroke questions, Quora general questions, and human annotations.
* Positive sentences are generated by paraphrasing and back-translation.
* Negative sentences are randomly selected from the general domain.
### Extract sentence representation
### Build up embedding for database
### Result
On our company's PoC project, the test set contains human-generated positive/negative pairs of matching stroke-related questions.
* SimCSE supervised + 100k: trained on 100K triplet samples covering the medical, stroke, and general domains
* SimCSE supervised + 42k: trained on 42K triplet samples covering the medical and stroke domains
|
[
"### Extract sentence representation",
"### Build up embedding for database",
"### Result\n\n\nOn our company's PoC project, the testset contains positive/negative pairs of matching question related to stroke from human-generation.\n\n\n* SimCSE supervised + 100k : Train on 100K triplet samples contains : medical, stroke and general domain\n* SimCSE supervised + 42k : Train on 42K triplet samples contains : medical, stroke domain"
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2104.08821 #endpoints_compatible #region-us \n",
"### Extract sentence representation",
"### Build up embedding for database",
"### Result\n\n\nOn our company's PoC project, the testset contains positive/negative pairs of matching question related to stroke from human-generation.\n\n\n* SimCSE supervised + 100k : Train on 100K triplet samples contains : medical, stroke and general domain\n* SimCSE supervised + 42k : Train on 42K triplet samples contains : medical, stroke domain"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iloko_model
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0095
- Wer: 0.0840
## Model description
More information needed
## Intended uses & limitations
More information needed
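Although the card gives no usage details, inference could be sketched as follows (assumptions: the `automatic-speech-recognition` pipeline and a 16 kHz mono input file named `sample.wav`):

```python
from transformers import pipeline

# Sketch only: transcribe an Ilokano audio clip with this checkpoint.
asr = pipeline("automatic-speech-recognition", model="denden/iloko_model")
print(asr("sample.wav"))
```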
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2784 | 1.11 | 100 | 2.9875 | 1.0 |
| 2.6899 | 2.22 | 200 | 2.6741 | 1.0 |
| 2.6177 | 3.33 | 300 | 2.6516 | 1.0 |
| 2.5327 | 4.44 | 400 | 2.4530 | 1.0 |
| 0.8653 | 5.56 | 500 | 0.5227 | 0.6547 |
| 0.3414 | 6.67 | 600 | 0.1830 | 0.2487 |
| 0.2299 | 7.78 | 700 | 0.1212 | 0.1877 |
| 0.1739 | 8.89 | 800 | 0.0843 | 0.1441 |
| 0.1242 | 10.0 | 900 | 0.0766 | 0.1441 |
| 0.1116 | 11.11 | 1000 | 0.0530 | 0.1145 |
| 0.0861 | 12.22 | 1100 | 0.0442 | 0.1047 |
| 0.1007 | 13.33 | 1200 | 0.0379 | 0.1023 |
| 0.0613 | 14.44 | 1300 | 0.0291 | 0.1006 |
| 0.0629 | 15.56 | 1400 | 0.0264 | 0.0961 |
| 0.047 | 16.67 | 1500 | 0.0238 | 0.0935 |
| 0.0797 | 17.78 | 1600 | 0.0226 | 0.0913 |
| 0.034 | 18.89 | 1700 | 0.0197 | 0.0893 |
| 0.0485 | 20.0 | 1800 | 0.0173 | 0.0905 |
| 0.0402 | 21.11 | 1900 | 0.0148 | 0.0902 |
| 0.0231 | 22.22 | 2000 | 0.0135 | 0.0891 |
| 0.0512 | 23.33 | 2100 | 0.0134 | 0.0861 |
| 0.0181 | 24.44 | 2200 | 0.0118 | 0.0842 |
| 0.0371 | 25.56 | 2300 | 0.0116 | 0.0867 |
| 0.0342 | 26.67 | 2400 | 0.0104 | 0.0863 |
| 0.0344 | 27.78 | 2500 | 0.0100 | 0.0850 |
| 0.0182 | 28.89 | 2600 | 0.0096 | 0.0839 |
| 0.0171 | 30.0 | 2700 | 0.0095 | 0.0840 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "pipeline_tag": "automatic-speech-recognition"}
|
denden/iloko_model
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
iloko\_model
============
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0095
* Wer: 0.0840
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu102
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
Fine-tuned Ilokano speech recognition model based on wav2vec2-XLSR-53.
|
{"language": ["en"], "license": "afl-3.0", "tags": ["audio", "automatic-speech-recognition", "speech"], "datasets": ["timit_asr"], "metrics": ["wer"], "pipeline_tag": "automatic-speech-recognition"}
|
denden/new_iloko
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"en",
"dataset:timit_asr",
"license:afl-3.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #audio #speech #en #dataset-timit_asr #license-afl-3.0 #model-index #endpoints_compatible #region-us
|
Fine-tuned Ilokano speech recognition model based on wav2vec2-XLSR-53.
|
[] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #audio #speech #en #dataset-timit_asr #license-afl-3.0 #model-index #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
# BERT-Wiki-Paragraphs
Authors: Satya Almasian\*, Dennis Aumiller\*, Lucienne-Sophie Marmé, Michael Gertz
Contact us at `<lastname>@informatik.uni-heidelberg.de`
Details for the training method can be found in our work [Structural Text Segmentation of Legal Documents](https://arxiv.org/abs/2012.03619).
The training procedure follows the same setup, but we substitute legal documents for Wikipedia in this model.
Find the associated training data here: [wiki-paragraphs](https://huggingface.co/datasets/dennlinger/wiki-paragraphs)
Training is performed in a weakly-supervised fashion to determine whether paragraphs topically belong together or not.
We utilize automatically generated samples from Wikipedia for training, where paragraphs from within the same section are assumed to be topically coherent.
We use the same articles as ([Koshorek et al., 2018](https://arxiv.org/abs/1803.09337)),
albeit from a 2021 dump of Wikipedia, and split at paragraph boundaries instead of the sentence level.
## Usage
Preferred usage is through `transformers.pipeline`:
```python
from transformers import pipeline
pipe = pipeline("text-classification", model="dennlinger/bert-wiki-paragraphs")
pipe("{First paragraph} [SEP] {Second paragraph}")
```
A predicted "1" means that paragraphs belong to the same topic, a "0" indicates a disconnect.
## Training Setup
The model was trained for 3 epochs from `bert-base-uncased` on paragraph pairs (limited to 512 subword tokens with the `longest_first` truncation strategy).
We use a batch size of 24 with 2 steps of gradient accumulation (effective batch size of 48), and a learning rate of 1e-4, with gradient clipping at 5.
Training was performed on a single Titan RTX GPU over the duration of 3 weeks.
|
{"language": ["en"], "license": "mit", "tags": ["sentence-similarity", "text-classification"], "datasets": ["dennlinger/wiki-paragraphs"], "metrics": ["f1"]}
|
dennlinger/bert-wiki-paragraphs
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"sentence-similarity",
"en",
"dataset:dennlinger/wiki-paragraphs",
"arxiv:2012.03619",
"arxiv:1803.09337",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2012.03619",
"1803.09337"
] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #bert #text-classification #sentence-similarity #en #dataset-dennlinger/wiki-paragraphs #arxiv-2012.03619 #arxiv-1803.09337 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# BERT-Wiki-Paragraphs
Authors: Satya Almasian\*, Dennis Aumiller\*, Lucienne-Sophie Marmé, Michael Gertz
Contact us at '<lastname>@URL'
Details for the training method can be found in our work Structural Text Segmentation of Legal Documents.
The training procedure follows the same setup, but we substitute legal documents for Wikipedia in this model.
Find the associated training data here: wiki-paragraphs
Training is performed in a weakly-supervised fashion to determine whether paragraphs topically belong together or not.
We utilize automatically generated samples from Wikipedia for training, where paragraphs from within the same section are assumed to be topically coherent.
We use the same articles as (Koshorek et al., 2018),
albeit from a 2021 dump of Wikipedia, and split at paragraph boundaries instead of the sentence level.
## Usage
Preferred usage is through 'transformers.pipeline':
A predicted "1" means that paragraphs belong to the same topic, a "0" indicates a disconnect.
## Training Setup
The model was trained for 3 epochs from 'bert-base-uncased' on paragraph pairs (limited to 512 subword tokens with the 'longest_first' truncation strategy).
We use a batch size of 24 with 2 steps of gradient accumulation (effective batch size of 48), and a learning rate of 1e-4, with gradient clipping at 5.
Training was performed on a single Titan RTX GPU over the duration of 3 weeks.
|
[
"# BERT-Wiki-Paragraphs\n\nAuthors: Satya Almasian\\*, Dennis Aumiller\\*, Lucienne-Sophie Marmé, Michael Gertz \nContact us at '<lastname>@URL' \nDetails for the training method can be found in our work Structural Text Segmentation of Legal Documents.\nThe training procedure follows the same setup, but we substitute legal documents for Wikipedia in this model.\nFind the associated training data here: wiki-paragraphs\n\nTraining is performed in a form of weakly-supervised fashion to determine whether paragraphs topically belong together or not.\nWe utilize automatically generated samples from Wikipedia for training, where paragraphs from within the same section are assumed to be topically coherent. \nWe use the same articles as (Koshorek et al., 2018), \nalbeit from a 2021 dump of Wikpeida, and split at paragraph boundaries instead of the sentence level.",
"## Usage\nPreferred usage is through 'transformers.pipeline':\n\n\nA predicted \"1\" means that paragraphs belong to the same topic, a \"0\" indicates a disconnect.",
"## Training Setup\nThe model was trained for 3 epochs from 'bert-base-uncased' on paragraph pairs (limited to 512 subwork with the 'longest_first' truncation strategy).\nWe use a batch size of 24 wit 2 iterations gradient accumulation (effective batch size of 48), and a learning rate of 1e-4, with gradient clipping at 5.\nTraining was performed on a single Titan RTX GPU over the duration of 3 weeks."
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #text-classification #sentence-similarity #en #dataset-dennlinger/wiki-paragraphs #arxiv-2012.03619 #arxiv-1803.09337 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT-Wiki-Paragraphs\n\nAuthors: Satya Almasian\\*, Dennis Aumiller\\*, Lucienne-Sophie Marmé, Michael Gertz \nContact us at '<lastname>@URL' \nDetails for the training method can be found in our work Structural Text Segmentation of Legal Documents.\nThe training procedure follows the same setup, but we substitute legal documents for Wikipedia in this model.\nFind the associated training data here: wiki-paragraphs\n\nTraining is performed in a form of weakly-supervised fashion to determine whether paragraphs topically belong together or not.\nWe utilize automatically generated samples from Wikipedia for training, where paragraphs from within the same section are assumed to be topically coherent. \nWe use the same articles as (Koshorek et al., 2018), \nalbeit from a 2021 dump of Wikpeida, and split at paragraph boundaries instead of the sentence level.",
"## Usage\nPreferred usage is through 'transformers.pipeline':\n\n\nA predicted \"1\" means that paragraphs belong to the same topic, a \"0\" indicates a disconnect.",
"## Training Setup\nThe model was trained for 3 epochs from 'bert-base-uncased' on paragraph pairs (limited to 512 subwork with the 'longest_first' truncation strategy).\nWe use a batch size of 24 wit 2 iterations gradient accumulation (effective batch size of 48), and a learning rate of 1e-4, with gradient clipping at 5.\nTraining was performed on a single Titan RTX GPU over the duration of 3 weeks."
] |