pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1–900k) | metadata (stringlengths 2–438k) | id (stringlengths 5–122) | last_modified (null) | tags (listlengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25) | arxiv (listlengths 0–201) | languages (listlengths 0–1.83k) | tags_str (stringlengths 17–9.34k) | text_str (stringlengths 0–389k) | text_lists (listlengths 0–722) | processed_texts (listlengths 1–723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text2text-generation
|
transformers
|
## CALM
This model is for the ICLR 2021 paper: [Pre-training Text-to-Text Transformers for Concept-centric Common Sense](https://openreview.net/forum?id=3k20LAiHYL2).
Check out our [Project website](https://inklab.usc.edu/calm-project) for details!
```bibtex
@inproceedings{CALM2021,
title={Pre-training Text-to-Text Transformers for Concept-centric Common Sense},
author={Wangchunshu Zhou and Dong-Ho Lee and Ravi Kiran Selvam and Seyeon Lee and Bill Yuchen Lin and Xiang Ren},
booktitle={ICLR},
year={2021}
}
```
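Since the checkpoint is distributed as a T5 model in the `transformers` format (see the tags below), it can presumably be loaded with the standard seq2seq classes. A minimal, hedged sketch; the prompt below is illustrative only, see the project website for the task-specific input formats used in the paper:
```python
# Hedged usage sketch (not from the official repo): load the checkpoint with
# the standard transformers seq2seq API and generate from a placeholder prompt.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("danny911kr/calm-base")
model = AutoModelForSeq2SeqLM.from_pretrained("danny911kr/calm-base")

# Placeholder T5-style span-infilling prompt; the paper's actual prompts may differ.
inputs = tokenizer("The chef <extra_id_0> a delicious meal in the kitchen.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```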
|
{}
|
danny911kr/calm-base
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## CALM
This model is for the ICLR 2021 paper: Pre-training Text-to-Text Transformers for Concept-centric Common Sense.
Check out our Project website for details!
|
[
"## CALM\n\nThis model is for ICLR2021 paper: Pre-training Text-to-Text Transformers for Concept-centric Common Sense.\nCheckout our Project website for details!"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## CALM\n\nThis model is for ICLR2021 paper: Pre-training Text-to-Text Transformers for Concept-centric Common Sense.\nCheckout our Project website for details!"
] |
text2text-generation
|
transformers
|
## CALM
This model is for the ICLR 2021 paper: [Pre-training Text-to-Text Transformers for Concept-centric Common Sense](https://openreview.net/forum?id=3k20LAiHYL2).
Check out our [Project website](https://inklab.usc.edu/calm-project) for details!
```bibtex
@inproceedings{CALM2021,
title={Pre-training Text-to-Text Transformers for Concept-centric Common Sense},
author={Wangchunshu Zhou and Dong-Ho Lee and Ravi Kiran Selvam and Seyeon Lee and Bill Yuchen Lin and Xiang Ren},
booktitle={ICLR},
year={2021}
}
```
|
{}
|
danny911kr/calm-large
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## CALM
This model is for the ICLR 2021 paper: Pre-training Text-to-Text Transformers for Concept-centric Common Sense.
Check out our Project website for details!
|
[
"## CALM\n\nThis model is for ICLR2021 paper: Pre-training Text-to-Text Transformers for Concept-centric Common Sense.\nCheckout our Project website for details!"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## CALM\n\nThis model is for ICLR2021 paper: Pre-training Text-to-Text Transformers for Concept-centric Common Sense.\nCheckout our Project website for details!"
] |
text2text-generation
|
transformers
|
## CALM
This model is for the ICLR 2021 paper: [Pre-training Text-to-Text Transformers for Concept-centric Common Sense](https://openreview.net/forum?id=3k20LAiHYL2).
Check out our [Project website](https://inklab.usc.edu/calm-project) for details!
```bibtex
@inproceedings{CALM2021,
title={Pre-training Text-to-Text Transformers for Concept-centric Common Sense},
author={Wangchunshu Zhou and Dong-Ho Lee and Ravi Kiran Selvam and Seyeon Lee and Bill Yuchen Lin and Xiang Ren},
booktitle={ICLR},
year={2021}
}
```
|
{}
|
danny911kr/calm-mix-base
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## CALM
This model is for the ICLR 2021 paper: Pre-training Text-to-Text Transformers for Concept-centric Common Sense.
Check out our Project website for details!
|
[
"## CALM\n\nThis model is for ICLR2021 paper: Pre-training Text-to-Text Transformers for Concept-centric Common Sense.\nCheckout our Project website for details!"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## CALM\n\nThis model is for ICLR2021 paper: Pre-training Text-to-Text Transformers for Concept-centric Common Sense.\nCheckout our Project website for details!"
] |
text2text-generation
|
transformers
|
## CALM
This model is for the ICLR 2021 paper: [Pre-training Text-to-Text Transformers for Concept-centric Common Sense](https://openreview.net/forum?id=3k20LAiHYL2).
Check out our [Project website](https://inklab.usc.edu/calm-project) for details!
```bibtex
@inproceedings{CALM2021,
title={Pre-training Text-to-Text Transformers for Concept-centric Common Sense},
author={Wangchunshu Zhou and Dong-Ho Lee and Ravi Kiran Selvam and Seyeon Lee and Bill Yuchen Lin and Xiang Ren},
booktitle={ICLR},
year={2021}
}
```
|
{}
|
danny911kr/calm-mix-large
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## CALM
This model is for the ICLR 2021 paper: Pre-training Text-to-Text Transformers for Concept-centric Common Sense.
Check out our Project website for details!
|
[
"## CALM\n\nThis model is for ICLR2021 paper: Pre-training Text-to-Text Transformers for Concept-centric Common Sense.\nCheckout our Project website for details!"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## CALM\n\nThis model is for ICLR2021 paper: Pre-training Text-to-Text Transformers for Concept-centric Common Sense.\nCheckout our Project website for details!"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-or
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Odia using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "or", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-or")
model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-or")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Odia test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "or", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-or")
model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-or")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed audio batches
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 54.6 %
## Training
The Common Voice `train`, `validation`, and `test` datasets were used for training as well as prediction and testing.
The script used for training can be found at [https://github.com/rahul-art/wav2vec2_or](https://github.com/rahul-art/wav2vec2_or).
|
{"language": "or", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "odia XLSR Wav2Vec2 Large 2000", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice or", "type": "common_voice", "args": "or"}, "metrics": [{"type": "wer", "value": 54.6, "name": "Test WER"}]}]}]}
|
danurahul/wav2vec2-large-xlsr-or
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"or",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"or"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #or #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-or
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Odia using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Odia test data of Common Voice.
Test Result: 54.6 %
## Training
The Common Voice 'train', 'validation', and 'test' datasets were used for training as well as prediction and testing.
The script used for training can be found at URL.
|
[
"# Wav2Vec2-Large-XLSR-53-or \nFine-tuned facebook/wav2vec2-large-xlsr-53 on odia using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the odia test data of Common Voice. \n\n\n\nTest Result: 54.6 %",
"## Training\n\nThe Common Voice 'train', 'validation', and test datasets were used for training as well as prediction and testing \n\nThe script used for training can be found [URL"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #or #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-or \nFine-tuned facebook/wav2vec2-large-xlsr-53 on odia using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the odia test data of Common Voice. \n\n\n\nTest Result: 54.6 %",
"## Training\n\nThe Common Voice 'train', 'validation', and test datasets were used for training as well as prediction and testing \n\nThe script used for training can be found [URL"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Punjabi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Punjabi using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pa-IN", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Punjabi test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pa-IN", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
model = Wav2Vec2ForCTC.from_pretrained("danurahul/wav2vec2-large-xlsr-pa-IN")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\‘\\”\\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference on the preprocessed audio batches
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 100 %
## Training
The Common Voice `train` and `validation` splits were used for training as well as validation and testing.
The script used for training can be found at https://github.com/rahul-art/huggingface_wav2vec2_punjabi/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Punjabi_ASR_with_%F0%9F%A4%97_Transformers.ipynb
|
{"language": "pa-IN", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "danurahul/wav2vec2-large-xlsr-pa-IN", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice pa-IN", "type": "common_voice", "args": "pa-IN"}, "metrics": [{"type": "wer", "value": 54.86, "name": "Test WER"}]}]}]}
|
danurahul/wav2vec2-large-xlsr-pa-IN
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"pa-IN"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Punjabi
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Punjabi using the Common Voice.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Punjabi test data of Common Voice.
Test Result: 100 %
## Training
The Common Voice 'train' and 'validation' splits were used for training as well as validation and testing.
The script used for training can be found at URL.
|
[
"# Wav2Vec2-Large-XLSR-53-Punjabi\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Punjabi using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Punjabi test data of Common Voice. \n\n\n\n\nTest Result: 100 %",
"## Training\n\nThe Common Voice 'train', 'validation' was used for training as well as validation and testing #\n\nThe script used for training can be found URL"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Punjabi\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Punjabi using the Common Voice.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Punjabi test data of Common Voice. \n\n\n\n\nTest Result: 100 %",
"## Training\n\nThe Common Voice 'train', 'validation' was used for training as well as validation and testing #\n\nThe script used for training can be found URL"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9302
- Mae: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1253 | 1.0 | 235 | 0.9756 | 0.5488 |
| 0.9465 | 2.0 | 470 | 0.9302 | 0.5 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
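A minimal inference sketch (assuming the standard `transformers` text-classification pipeline; the returned label names and their mapping to star ratings depend on the saved config):
```python
# Sketch: score a review with the fine-tuned checkpoint via the pipeline API.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="danwilbury/xlm-roberta-base-finetuned-marc-en",
)
print(classifier("I loved this product, it works exactly as described."))
```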
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc-en", "results": []}]}
|
danwilbury/xlm-roberta-base-finetuned-marc-en
| null |
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
xlm-roberta-base-finetuned-marc-en
==================================
This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9302
* Mae: 0.5
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
Sample usage:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("danyaljj/gpt2_question_answering_squad2")
input_ids = tokenizer.encode("There are two apples on the counter. Q: How many apples? A:", return_tensors="pt")
outputs = model.generate(input_ids)
print("Generated:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Which should produce this:
```
Generated: There are two apples on the counter. Q: How many apples? A: two
```
|
{}
|
danyaljj/gpt2_question_answering_squad2
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Sample usage:
Which should produce this:
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
Sample usage:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("danyaljj/gpt2_question_generation_given_paragraph")
input_ids = tokenizer.encode("There are two apples on the counter. Q:", return_tensors="pt")
outputs = model.generate(input_ids)
print("Generated:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Which should produce this:
```
Generated: There are two apples on the counter. Q: What is the name of the counter that is on
```
|
{}
|
danyaljj/gpt2_question_generation_given_paragraph
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Sample usage:
Which should produce this:
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
Sample usage:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("danyaljj/gpt2_question_generation_given_paragraph_answer")
input_ids = tokenizer.encode("There are two apples on the counter. A: apples Q:", return_tensors="pt")
outputs = model.generate(input_ids)
print("Generated:", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Which should produce this:
```
Generated: There are two apples on the counter. A: apples Q: What is the name of the counter
```
|
{}
|
danyaljj/gpt2_question_generation_given_paragraph_answer
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Sample usage:
Which should produce this:
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null |
transformers
|
West et al.'s model from their "reflective decoding" paper.
Sample usage:
```python
import torch
from modeling_opengpt2 import OpenGPT2LMHeadModel
from padded_encoder import Encoder
path_to_backward = 'danyaljj/opengpt2_pytorch_backward'
encoder = Encoder()
model_backward = OpenGPT2LMHeadModel.from_pretrained(path_to_backward)
input = "until she finally won."
input_ids = encoder.encode(input)
input_ids = torch.tensor([input_ids[::-1] ], dtype=torch.int)
print(input_ids)
output = model_backward.generate(input_ids)
output_text = encoder.decode(output.tolist()[0][::-1])
print(output_text)
```
Download the additional files from here: https://github.com/peterwestuw/GPT2ForwardBackward
|
{}
|
danyaljj/opengpt2_pytorch_backward
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #endpoints_compatible #region-us
|
West et al.'s model from their "reflective decoding" paper.
Sample usage:
Download the additional files from here: URL
|
[] |
[
"TAGS\n#transformers #pytorch #endpoints_compatible #region-us \n"
] |
null |
transformers
|
West et al.'s model from their "reflective decoding" paper.
Sample usage:
```python
import torch
from modeling_opengpt2 import OpenGPT2LMHeadModel
from padded_encoder import Encoder
path_to_forward = 'danyaljj/opengpt2_pytorch_forward'
encoder = Encoder()
model_forward = OpenGPT2LMHeadModel.from_pretrained(path_to_forward)
input = "She tried to win but"
input_ids = encoder.encode(input)
input_ids = torch.tensor([input_ids ], dtype=torch.int)
print(input_ids)
output = model_forward.generate(input_ids)
output_text = encoder.decode(output.tolist()[0])
print(output_text)
```
Download the additional files from here: https://github.com/peterwestuw/GPT2ForwardBackward
|
{}
|
danyaljj/opengpt2_pytorch_forward
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #endpoints_compatible #region-us
|
West et al.'s model from their "reflective decoding" paper.
Sample usage:
Download the additional files from here: URL
|
[] |
[
"TAGS\n#transformers #pytorch #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
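For reference, a hedged sketch of how the hyperparameters above map onto `TrainingArguments`; the actual training script and dataset are not published with this card, so the output directory name and anything not listed above are assumptions:
```python
# Sketch only: the reported hyperparameters expressed as TrainingArguments.
# Adam betas/epsilon match the Trainer defaults, so they are not set explicitly.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilgpt2-finetuned-wikitext2",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```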
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilgpt2-finetuned-wikitext2", "results": []}]}
|
daqiao202/distilgpt2-finetuned-wikitext2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of distilgpt2 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
[
"# distilgpt2-finetuned-wikitext2\n\nThis model is a fine-tuned version of distilgpt2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.12.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# distilgpt2-finetuned-wikitext2\n\nThis model is a fine-tuned version of distilgpt2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Framework versions\n\n- Transformers 4.12.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.15.1\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
|
dark-knight/wav2vec2-base-timit-demo-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
[
"# wav2vec2-base-timit-demo-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 2\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-base-timit-demo-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 2\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Chicken Bot's Jon Snow DialoGPT Model
|
{"tags": ["conversational"]}
|
darkzek/chickenbot-jon-snow
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Chicken Bot's Jon Snow DialoGPT Model
|
[
"# Chicken Bot's Jon Snow DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Chicken Bot's Jon Snow DialoGPT Model"
] |
text-generation
|
transformers
|
# Pickle Rick DialoGPT Model
|
{"tags": ["conversational"]}
|
darthboii/DialoGPT-small-PickleRick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Pickle Rick DialoGPT Model
|
[
"# Pickle Rick DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Pickle Rick DialoGPT Model"
] |
text-generation
|
transformers
|
# Rick DialoGPT Model
|
{"tags": ["conversational"]}
|
darthboii/DialoGPT-small-Rick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick DialoGPT Model
|
[
"# Rick DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick DialoGPT Model"
] |
null |
transformers
|
Hi
|
{}
|
darubramha/hi-LyricsGPT2
| null |
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #endpoints_compatible #region-us
|
Hi
|
[] |
[
"TAGS\n#transformers #pytorch #endpoints_compatible #region-us \n"
] |
null |
transformers
|
https://github.com/monologg/JointBERT
|
{}
|
databuzzword/JointBERT-atis
| null |
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #endpoints_compatible #region-us
|
URL
|
[] |
[
"TAGS\n#transformers #pytorch #bert #endpoints_compatible #region-us \n"
] |
null |
transformers
|
https://github.com/monologg/JointBERT
|
{}
|
databuzzword/JointBERT-snips
| null |
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #endpoints_compatible #region-us
|
URL
|
[] |
[
"TAGS\n#transformers #pytorch #bert #endpoints_compatible #region-us \n"
] |
text-to-speech
|
tensorflowtts
|
# Tacotron 2 with Guided Attention trained on Synpaflex (Fr)
This repository provides a pretrained [Tacotron2](https://arxiv.org/abs/1712.05884) model trained with [Guided Attention](https://arxiv.org/abs/1710.08969) on the Synpaflex dataset (Fr). For details of the model, we encourage you to read more about
[TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS).
## Install TensorFlowTTS
First of all, please install TensorFlowTTS with the following command:
```
pip install TensorFlowTTS
```
### Converting your Text to Mel Spectrogram
```python
import numpy as np
import soundfile as sf
import yaml
import tensorflow as tf
from tensorflow_tts.inference import AutoProcessor
from tensorflow_tts.inference import TFAutoModel
processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-synpaflex-fr")
tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-synpaflex-fr")
text = "Oh, je voudrais tant que tu te souviennes Des jours heureux quand nous étions amis"
input_ids = processor.text_to_sequence(text)
decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference(
input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32),
speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
)
```
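The snippet above stops at the mel spectrogram; a vocoder is still needed to obtain a waveform. A hedged sketch using a TensorFlowTTS MB-MelGAN checkpoint and the `soundfile` import from above (the vocoder checkpoint name is an example, substitute one matched to this model's sampling rate):
```python
# Sketch: convert the predicted mel spectrogram to audio and save it to disk.
# The checkpoint below is illustrative; pick a vocoder trained for your setup.
mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-ljspeech-en")
audio = mb_melgan.inference(mel_outputs)[0, :, 0]
sf.write("./audio.wav", audio, 22050, "PCM_16")
```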
#### Referencing Tacotron 2
```
@article{DBLP:journals/corr/abs-1712-05884,
author = {Jonathan Shen and
Ruoming Pang and
Ron J. Weiss and
Mike Schuster and
Navdeep Jaitly and
Zongheng Yang and
Zhifeng Chen and
Yu Zhang and
Yuxuan Wang and
R. J. Skerry{-}Ryan and
Rif A. Saurous and
Yannis Agiomyrgiannakis and
Yonghui Wu},
title = {Natural {TTS} Synthesis by Conditioning WaveNet on Mel Spectrogram
Predictions},
journal = {CoRR},
volume = {abs/1712.05884},
year = {2017},
url = {http://arxiv.org/abs/1712.05884},
archivePrefix = {arXiv},
eprint = {1712.05884},
timestamp = {Thu, 28 Nov 2019 08:59:52 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1712-05884.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
#### Referencing TensorFlowTTS
```
@misc{TFTTS,
author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata,
Trinh Le and Yunchao He},
title = {TensorflowTTS},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}},
}
```
|
{"language": "fr", "license": "apache-2.0", "tags": ["tensorflowtts", "audio", "text-to-speech", "text-to-mel"], "datasets": ["synpaflex"], "widget": [{"text": "Oh, je voudrais tant que tu te souviennes Des jours heureux quand nous \u00e9tions amis"}]}
|
dathudeptrai/tts-tacotron2-synpaflex-fr
| null |
[
"tensorflowtts",
"audio",
"text-to-speech",
"text-to-mel",
"fr",
"dataset:synpaflex",
"arxiv:1712.05884",
"arxiv:1710.08969",
"license:apache-2.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1712.05884",
"1710.08969"
] |
[
"fr"
] |
TAGS
#tensorflowtts #audio #text-to-speech #text-to-mel #fr #dataset-synpaflex #arxiv-1712.05884 #arxiv-1710.08969 #license-apache-2.0 #has_space #region-us
|
# Tacotron 2 with Guided Attention trained on Synpaflex (Fr)
This repository provides a pretrained Tacotron2 model trained with Guided Attention on the Synpaflex dataset (Fr). For details of the model, we encourage you to read more about
TensorFlowTTS.
## Install TensorFlowTTS
First of all, please install TensorFlowTTS with the following command:
### Converting your Text to Mel Spectrogram
#### Referencing Tacotron 2
#### Referencing TensorFlowTTS
|
[
"# Tacotron 2 with Guided Attention trained on Synpaflex (Fr)\nThis repository provides a pretrained Tacotron2 trained with Guided Attention on Synpaflex dataset (Fr). For a detail of the model, we encourage you to read more about\nTensorFlowTTS.",
"## Install TensorFlowTTS\nFirst of all, please install TensorFlowTTS with the following command:",
"### Converting your Text to Mel Spectrogram",
"#### Referencing Tacotron 2",
"#### Referencing TensorFlowTTS"
] |
[
"TAGS\n#tensorflowtts #audio #text-to-speech #text-to-mel #fr #dataset-synpaflex #arxiv-1712.05884 #arxiv-1710.08969 #license-apache-2.0 #has_space #region-us \n",
"# Tacotron 2 with Guided Attention trained on Synpaflex (Fr)\nThis repository provides a pretrained Tacotron2 trained with Guided Attention on Synpaflex dataset (Fr). For a detail of the model, we encourage you to read more about\nTensorFlowTTS.",
"## Install TensorFlowTTS\nFirst of all, please install TensorFlowTTS with the following command:",
"### Converting your Text to Mel Spectrogram",
"#### Referencing Tacotron 2",
"#### Referencing TensorFlowTTS"
] |
text-generation
|
transformers
|
La descripción en Español se encuentra después de la descripción en Inglés.
# (English) GPT2-small-spanish: a Language Model for Spanish text generation (and more NLP tasks...)
GPT2-small-spanish is a state-of-the-art language model for Spanish based on the GPT-2 small model.
It was trained on Spanish Wikipedia using **Transfer Learning and Fine-tuning techniques**. The training took around 70 hours on four NVIDIA GTX 1080-Ti GPUs with 11GB of DDR5 each, using around 3GB of (processed) training data.
It was fine-tuned from the [English pre-trained GPT-2 small](https://huggingface.co/gpt2) using the Hugging Face libraries (Transformers and Tokenizers) wrapped into the [fastai v2](https://dev.fast.ai/) Deep Learning framework. All the fine-tuning fastai v2 techniques were used.
The training is purely based on the [GPorTuguese-2](https://huggingface.co/pierreguillou/gpt2-small-portuguese) model developed by Pierre Guillou. The training details are in this article: "[Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787)".
This preliminary version is now available on Hugging Face.
## Limitations and bias
(Copied from the original GPorTuguese-2 model.) The training data used for this model come from Spanish Wikipedia. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their model card:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Authors
The model was trained and evaluated by [Josué Obregon](https://www.linkedin.com/in/josue-obregon/) and [Berny Carrera](https://www.linkedin.com/in/bernycarrera/), founders of [Datificate](https://datificate.com), a space for learning Machine Learning in Spanish.
The training was possible thanks to the computing power of several GPUs (NVIDIA GTX 1080-Ti) of the [IAI Lab](http://iai.khu.ac.kr/) (Kyung Hee University), to which Josué is attached as a Postdoctoral Researcher in Industrial Artificial Intelligence.
As stated before, this work is mainly based on the work of [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/).
# (Español) GPT2-small-spanish: un modelo de lenguaje para generación de texto en Español (y algunas otras tareas de NLP...)
GPT2-small-spanish es un modelo de lenguaje de vanguardia en Español basado en el modelo pequeño GPT-2.
Fue entrenado con la Wikipedia en Español usando **técnicas de Aprendizaje por Transferencia y afinación de modelos**. El entrenamiento del modelo tomó alrededor de 70 horas con cuatro GPUs NVIDIA GTX 1080-Ti con 11GB de DDR5 y con aproximadamente 3GB de datos de entrenamiento preprocesados.
Fue afinado del modelo en Inglés [English pre-trained GPT-2 small](https://huggingface.co/gpt2) utilizando las librerías de Hugging Face (Transformers y Tokenizers) integradas con el framework de Deep Learning [fastai v2](https://dev.fast.ai/). Se usaron técnicas de afinamiento fino de fastai v2.
El entrenamiento está enteramente basado en el modelo en Portugués [GPorTuguese-2](https://huggingface.co/pierreguillou/gpt2-small-portuguese) desarrollado por Pierre Guillou. Los detalles del entrenamiento se encuentran en este artículo: "[Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787)".
La versión preliminar del modelo se encuentra en Hugging Face.
## Limitaciones y sesgos
(Copiado del modelo original GPorTuguese-2.) Los datos de entrenamiento provienen de la Wikipedia en Español. Se sabe que contiene bastante contenido no filtrado del internet, lo cual está lejos de ser neutral. Esto es señalado por el equipo desarrollador de openAI en su propia tarjeta de modelo:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Autores
El modelo fue entrenado y evaluado por [Josué Obregon](https://www.linkedin.com/in/josue-obregon/) y [Berny Carrera](https://www.linkedin.com/in/bernycarrera/), fundadores de [Datificate](https://datificate.com), un espacio para aprender Machine Learning en Español.
El entrenamiento fue posible gracias al poder computacional de varias GPUs (GPU NVIDIA GTX1080-Ti) del Laboratorio de Inteligencia Artificial Industrial [IAI Lab](http://iai.khu.ac.kr/) (Universidad de Kyung Hee) al cual Josué pertenece como investigador postdoctoral en Inteligencia Artificial Industrial.
Como fue mencionado anteriormente, este trabajo está basado en el trabajo de [Pierre GUILLOU](https://www.linkedin.com/in/pierreguillou/).
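A minimal generation sketch (assuming the standard `transformers` text-generation pipeline; the prompt mirrors the widget example in the metadata):
```python
# Sketch: generate Spanish text with the fine-tuned GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="datificate/gpt2-small-spanish")
print(generator("La inteligencia artificial en Latinoamérica se ha desarrollado ",
                max_length=50, num_return_sequences=1))
```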
|
{"language": "es", "license": "apache-2.0", "datasets": ["wikipedia"], "widget": [{"text": "La inteligencia artificial en lationoam\u00e9rica se ha desarrollado "}]}
|
datificate/gpt2-small-spanish
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"es",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"es"
] |
TAGS
#transformers #pytorch #tf #jax #gpt2 #text-generation #es #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
La descripción en Español se encuentra después de la descripción en Inglés.
# (English) GPT2-small-spanish: a Language Model for Spanish text generation (and more NLP tasks...)
GPT2-small-spanish is a state-of-the-art language model for Spanish based on the GPT-2 small model.
It was trained on Spanish Wikipedia using Transfer Learning and Fine-tuning techniques. The training took around 70 hours on four NVIDIA GTX 1080-Ti GPUs with 11GB of DDR5 each, using around 3GB of (processed) training data.
It was fine-tuned from the English pre-trained GPT-2 small using the Hugging Face libraries (Transformers and Tokenizers) wrapped into the fastai v2 Deep Learning framework. All the fine-tuning fastai v2 techniques were used.
The training is purely based on the GPorTuguese-2 model developed by Pierre Guillou. The training details are in this article: "Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)".
This preliminary version is now available on Hugging Face.
## Limitations and bias
(Copied from original GPorTuguese-2 model)The training data used for this model come from Spanish Wikipedia. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their model card:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Authors
The model was trained and evaluated by Josué Obregon and Berny Carrera, founders of Datificate, a space for learning Machine Learning in Spanish.
The training was possible thanks to the computing power of several GPUs (NVIDIA GTX 1080-Ti) of the IAI Lab (Kyung Hee University), to which Josué is attached as a Postdoctoral Researcher in Industrial Artificial Intelligence.
As stated before, this work is mainly based on the work of Pierre GUILLOU.
# (Español) GPT2-small-spanish: un modelo de lenguaje para generación de texto en Español (y algunas otras tareas de NLP...)
GPT2-small-spanish es un modelo de lenguaje de vanguardia en Español basado en el modelo pequeño GPT-2.
Fue entrenado con la Wikipedia en Español usando técnicas de Aprendizaje por Transferencia y afinación de modelos. El entrenamiento del modelo tomó alrededor de 70 horas con cuatro GPUs NVIDIA GTX 1080-Ti con 11GB de DDR5 y con aproximadamente 3GB de datos de entrenamiento preprocesados.
Fue afinado del modelo en Inglés English pre-trained GPT-2 small utilizando las librerías de Hugging Face (Transformers y Tokenizers) integradas con el framework de Deep Learning fastai v2. Se usaron técnicas de afinamiento fino de fastai v2.
El entrenamiento está enteramente basado en el modelo en Portugués GPorTuguese-2 desarrollado por Pierre Guillou. Los detalles del entrenamiento se encuentran en este articulo: "Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)".
La versión preliminar del modelo se encuentra en Hugging Face.
## Limitaciones y sesgos
(Copiado del modelo original GPorTuguese-2 model)Los datos de entrenamiento provienen de la Wikipedia en Español. Se sabe que contiene bastante contenido no filtrado del internet, lo cual está lejos de ser neutral. Esto es señalado por el equipo desarrollador de openAI en su propia tarjeta de modelo:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Autores
El modelo fue entrenado y evaluado por Josué Obregon y Berny Carrera, fundadores de Datificate, un espacio para aprender Machine Learning en Español.
El entrenamiento fue posible gracias al poder computacional de varias GPUs (GPU NVIDIA GTX1080-Ti) del Laboratorio de Inteligencia Artificial Industrial IAI Lab (Universidad de Kyung Hee) al cual Josué pertenece como investigador postdoctoral en Inteligencia Artificial Industrial.
Como fue mencionado anteriormente, este trabajo está basado en el trabajo de Pierre GUILLOU.
|
[
"# (English) GPT2-small-spanish: a Language Model for Spanish text generation (and more NLP tasks...)\nGPT2-small-spanish is a state-of-the-art language model for Spanish based on the GPT-2 small model. \n\nIt was trained on Spanish Wikipedia using Transfer Learning and Fine-tuning techniques. The training took around 70 hours with four GPU NVIDIA GTX 1080-Ti with 11GB of DDR5 and with around 3GB of (processed) training data. \n\nIt was fine-tuned from the English pre-trained GPT-2 small using the Hugging Face libraries (Transformers and Tokenizers) wrapped into the fastai v2 Deep Learning framework. All the fine-tuning fastai v2 techniques were used.\n\nThe training is purely based on the GPorTuguese-2 model developed by Pierre Guillou. The training details are in this article: \"Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)\".\n\nThis preliminary version is now available on Hugging Face.",
"## Limitations and bias\n\n(Copied from original GPorTuguese-2 model)The training data used for this model come from Spanish Wikipedia. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their model card:\n\n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.",
"## Authors\n\nThe model was trained and evaluated by Josué Obregon and Berny Carrera, founders of Datificate, a space for learning Machine Learning in Spanish.\nThe training was possible thanks to the computing power of several GPUs (GPU NVIDIA GTX1080-Ti) of the IAI Lab (Kyung Hee University) from which Josué is attached as a Postdoctoral Researcher in Industrial Artificial Intelligence.\n\nAs stated before, this work is mainly based in the work of Pierre GUILLOU.",
"# (Español) GPT2-small-spanish: un modelo de lenguaje para generación de texto en Español (y algunas otras tareas de NLP...)\n\nGPT2-small-spanish es un modelo de lenguaje de vanguardia en Español basado en el modelo pequeño GPT-2. \n\nFué entrenado con la Wikipedia en Español usando técnicas de Aprendizaje por Transferencia y afinación de modelos. El entrenamiento del modelo tomó alrededor 70 horas con cuatro GPUs NVIDIA GTX 1080-Ti con 11GB de DDR5 y con aproximadamente 3GB de datos de entrenamiento preprocesados. \n\nFue afinado del modelo en Inglés English pre-trained GPT-2 small utilizando las librerías de Hugging Face (Transformers y Tokenizers) integradas con el framework de Deep Learning fastai v2. Se usaron técnicas de afinamiento fino de fastai v2.\n\nEl entrenamiento está enteramente basado en el modelo en Portugués GPorTuguese-2 desarrollado por Pierre Guillou. Los detalles del entrenamiento se encuentran en este articulo: \"Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)\".\n\nLa versión preliminar del modelo se encuentra en Hugging Face.",
"## Limitaciones y sesgos\n\n(Copiado del modelo original GPorTuguese-2 model)Los datos de entrenamiento provienen de la Wikipedia en Español. Se sabe que contiene bastante contenido no filtrado del internet, lo cual está lejos de ser neutral. Esto es señalado por el equipo desarrollador de openAI en su propia tarjeta de modelo:\n\n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.",
"## Autores\n\nEl modelo fue entreando y evaluado por Josué Obregon y Berny Carrera, fundadores de Datificate, un espacio para aprender Machine Learning en Español.\n\nEl entrenamiento fue posible gracias al poder computacional de varias GPUs (GPU NVIDIA GTX1080-Ti) del Laboratorio de Inteligencia Artificial Industrial IAI Lab (Universidad de Kyung Hee) al cual Josué pertenece como investigador postdoctoral en Inteligencia Artificial Industrial.\n\nComo fue mencionado anteriormente, este trabajo está basado en el trabajo de Pierre GUILLOU."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #es #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# (English) GPT2-small-spanish: a Language Model for Spanish text generation (and more NLP tasks...)\nGPT2-small-spanish is a state-of-the-art language model for Spanish based on the GPT-2 small model. \n\nIt was trained on Spanish Wikipedia using Transfer Learning and Fine-tuning techniques. The training took around 70 hours with four GPU NVIDIA GTX 1080-Ti with 11GB of DDR5 and with around 3GB of (processed) training data. \n\nIt was fine-tuned from the English pre-trained GPT-2 small using the Hugging Face libraries (Transformers and Tokenizers) wrapped into the fastai v2 Deep Learning framework. All the fine-tuning fastai v2 techniques were used.\n\nThe training is purely based on the GPorTuguese-2 model developed by Pierre Guillou. The training details are in this article: \"Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)\".\n\nThis preliminary version is now available on Hugging Face.",
"## Limitations and bias\n\n(Copied from original GPorTuguese-2 model)The training data used for this model come from Spanish Wikipedia. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their model card:\n\n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.",
"## Authors\n\nThe model was trained and evaluated by Josué Obregon and Berny Carrera, founders of Datificate, a space for learning Machine Learning in Spanish.\nThe training was possible thanks to the computing power of several GPUs (GPU NVIDIA GTX1080-Ti) of the IAI Lab (Kyung Hee University) from which Josué is attached as a Postdoctoral Researcher in Industrial Artificial Intelligence.\n\nAs stated before, this work is mainly based in the work of Pierre GUILLOU.",
"# (Español) GPT2-small-spanish: un modelo de lenguaje para generación de texto en Español (y algunas otras tareas de NLP...)\n\nGPT2-small-spanish es un modelo de lenguaje de vanguardia en Español basado en el modelo pequeño GPT-2. \n\nFué entrenado con la Wikipedia en Español usando técnicas de Aprendizaje por Transferencia y afinación de modelos. El entrenamiento del modelo tomó alrededor 70 horas con cuatro GPUs NVIDIA GTX 1080-Ti con 11GB de DDR5 y con aproximadamente 3GB de datos de entrenamiento preprocesados. \n\nFue afinado del modelo en Inglés English pre-trained GPT-2 small utilizando las librerías de Hugging Face (Transformers y Tokenizers) integradas con el framework de Deep Learning fastai v2. Se usaron técnicas de afinamiento fino de fastai v2.\n\nEl entrenamiento está enteramente basado en el modelo en Portugués GPorTuguese-2 desarrollado por Pierre Guillou. Los detalles del entrenamiento se encuentran en este articulo: \"Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)\".\n\nLa versión preliminar del modelo se encuentra en Hugging Face.",
"## Limitaciones y sesgos\n\n(Copiado del modelo original GPorTuguese-2 model)Los datos de entrenamiento provienen de la Wikipedia en Español. Se sabe que contiene bastante contenido no filtrado del internet, lo cual está lejos de ser neutral. Esto es señalado por el equipo desarrollador de openAI en su propia tarjeta de modelo:\n\n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.",
"## Autores\n\nEl modelo fue entreando y evaluado por Josué Obregon y Berny Carrera, fundadores de Datificate, un espacio para aprender Machine Learning en Español.\n\nEl entrenamiento fue posible gracias al poder computacional de varias GPUs (GPU NVIDIA GTX1080-Ti) del Laboratorio de Inteligencia Artificial Industrial IAI Lab (Universidad de Kyung Hee) al cual Josué pertenece como investigador postdoctoral en Inteligencia Artificial Industrial.\n\nComo fue mencionado anteriormente, este trabajo está basado en el trabajo de Pierre GUILLOU."
] |
fill-mask
|
transformers
|
# <a name="introduction"></a> PhoBERT: Pre-trained language models for Vietnamese
Pre-trained PhoBERT models are the state-of-the-art language models for Vietnamese ([Pho](https://en.wikipedia.org/wiki/Pho), i.e. "Phở", is a popular food in Vietnam):
- Two PhoBERT versions of "base" and "large" are the first public large-scale monolingual language models pre-trained for Vietnamese. PhoBERT pre-training approach is based on [RoBERTa](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.md) which optimizes the [BERT](https://github.com/google-research/bert) pre-training procedure for more robust performance.
- PhoBERT outperforms previous monolingual and multilingual approaches, obtaining new state-of-the-art performances on four downstream Vietnamese NLP tasks of Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference.
The general architecture and experimental results of PhoBERT can be found in our EMNLP-2020 Findings [paper](https://arxiv.org/abs/2003.00744):
@article{phobert,
title = {{PhoBERT: Pre-trained language models for Vietnamese}},
author = {Dat Quoc Nguyen and Anh Tuan Nguyen},
journal = {Findings of EMNLP},
year = {2020}
}
**Please CITE** our paper when PhoBERT is used to help produce published results or is incorporated into other software.
For further information or requests, please go to [PhoBERT's homepage](https://github.com/VinAIResearch/PhoBERT)!
### Installation <a name="install2"></a>
- Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
- Install `transformers`:
- `git clone https://github.com/huggingface/transformers.git`
- `cd transformers`
- `pip3 install --upgrade .`
### Pre-trained models <a name="models2"></a>
Model | #params | Arch. | Pre-training data
---|---|---|---
`vinai/phobert-base` | 135M | base | 20GB of texts
`vinai/phobert-large` | 370M | large | 20GB of texts
### Example usage <a name="usage2"></a>
```python
import torch
from transformers import AutoModel, AutoTokenizer
phobert = AutoModel.from_pretrained("vinai/phobert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")
# INPUT TEXT MUST BE ALREADY WORD-SEGMENTED!
line = "Tôi là sinh_viên trường đại_học Công_nghệ ."
input_ids = torch.tensor([tokenizer.encode(line)])
with torch.no_grad():
features = phobert(input_ids) # Models outputs are now tuples
## With TensorFlow 2.0+:
# from transformers import TFAutoModel
# phobert = TFAutoModel.from_pretrained("vinai/phobert-base")
```
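PhoBERT can also be used for masked-token prediction. A minimal sketch (the input must again be word-segmented, the mask token is `<mask>`, and the example sentence is illustrative):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="vinai/phobert-base")
print(fill_mask("Hà_Nội là <mask> của Việt_Nam ."))  # top candidate tokens for the masked position
```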
|
{}
|
datnth1709/Phobert-classifier
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"arxiv:2003.00744",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2003.00744"
] |
[] |
TAGS
#transformers #pytorch #tf #jax #roberta #fill-mask #arxiv-2003.00744 #autotrain_compatible #endpoints_compatible #region-us
|
PhoBERT: Pre-trained language models for Vietnamese
====================================================
Pre-trained PhoBERT models are the state-of-the-art language models for Vietnamese (Pho, i.e. "Phở", is a popular food in Vietnam):
* Two PhoBERT versions of "base" and "large" are the first public large-scale monolingual language models pre-trained for Vietnamese. PhoBERT pre-training approach is based on RoBERTa which optimizes the BERT pre-training procedure for more robust performance.
* PhoBERT outperforms previous monolingual and multilingual approaches, obtaining new state-of-the-art performances on four downstream Vietnamese NLP tasks of Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference.
The general architecture and experimental results of PhoBERT can be found in our EMNLP-2020 Findings paper:
```
@article{phobert,
title = {{PhoBERT: Pre-trained language models for Vietnamese}},
author = {Dat Quoc Nguyen and Anh Tuan Nguyen},
journal = {Findings of EMNLP},
year = {2020}
}
```
Please CITE our paper when PhoBERT is used to help produce published results or is incorporated into other software.
For further information or requests, please go to PhoBERT's homepage!
### Installation
* Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)
* Install 'transformers':
- 'git clone URL
- 'cd transformers'
- 'pip3 install --upgrade .'
### Pre-trained models
### Example usage
|
[
"### Installation\n\n\n* Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)\n* Install 'transformers':\n- 'git clone URL\n- 'cd transformers'\n- 'pip3 install --upgrade .'",
"### Pre-trained models",
"### Example usage"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roberta #fill-mask #arxiv-2003.00744 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Installation\n\n\n* Python 3.6+, and PyTorch 1.1.0+ (or TensorFlow 2.0+)\n* Install 'transformers':\n- 'git clone URL\n- 'cd transformers'\n- 'pip3 install --upgrade .'",
"### Pre-trained models",
"### Example usage"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
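A minimal single-turn chat sketch (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dats/DialoGPT-small-harrypotter")
model = AutoModelForCausalLM.from_pretrained("dats/DialoGPT-small-harrypotter")

# encode one user turn, terminated by the EOS token
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
# generate a reply and decode only the newly generated tokens
output_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```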
|
{"tags": ["conversational"]}
|
dats/DialoGPT-small-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Harry Potter DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Tony Stark DialoGPT model
Invite me to your discord server: https://discord.com/api/oauth2/authorize?client_id=885065886787063848&permissions=137439365184&scope=bot
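A minimal multi-turn chat sketch that keeps the conversation history (the prompts are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dattam/DialoGPT-medium-TonyStarkBot")
model = AutoModelForCausalLM.from_pretrained("dattam/DialoGPT-medium-TonyStarkBot")

chat_history_ids = None
for turn in ["Who are you?", "What do you think of Captain America?"]:
    new_input_ids = tokenizer.encode(turn + tokenizer.eos_token, return_tensors="pt")
    # append the new user turn to the running conversation
    bot_input_ids = new_input_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_input_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=500, pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(chat_history_ids[0, bot_input_ids.shape[-1]:], skip_special_tokens=True))
```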
|
{"tags": ["conversational"]}
|
dattam/DialoGPT-medium-TonyStarkBot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Tony Stark DialoGPT model
Invite me to your discord server : URL
|
[
"# Tony Stark DialoGPT model\n\nInvite me to your discord server : URL"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Tony Stark DialoGPT model\n\nInvite me to your discord server : URL"
] |
token-classification
|
transformers
|
BioBERT model fine-tuned on a NER task with the BC5CDR-diseases and NCBI-diseases corpora, along with selected PubTator annotations from the LitCOVID dataset.
It was fine-tuned for use in the datummd/bionlp system, which is available at: https://github.com/datummd/bionlp
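A minimal usage sketch with the Transformers token-classification pipeline (the example sentence is illustrative; entity labels come from the model's label map):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="datummd/NCBI_BC5CDR_disease",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("The patient was diagnosed with type 2 diabetes and chronic kidney disease."))
```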
|
{"language": ["en"], "license": "apache-2.0", "tags": ["BioBERT", "Diseases", "NER"], "datasets": ["ncbi_disease", "BC5CDR-diseases", "LitCOVID-pubtator"]}
|
datummd/NCBI_BC5CDR_disease
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"BioBERT",
"Diseases",
"NER",
"en",
"dataset:ncbi_disease",
"dataset:BC5CDR-diseases",
"dataset:LitCOVID-pubtator",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #token-classification #BioBERT #Diseases #NER #en #dataset-ncbi_disease #dataset-BC5CDR-diseases #dataset-LitCOVID-pubtator #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
BioBERT model fine-tuned in NER task with BC5CDR-diseases and NCBI-diseases corpus along with selected pubtator annotations from LitCOVID dataset
This was fine-tuned in order to use it in a datummd/bionlp system which is available at: URL
|
[] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #BioBERT #Diseases #NER #en #dataset-ncbi_disease #dataset-BC5CDR-diseases #dataset-LitCOVID-pubtator #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification
|
fastai
|
## Model description
This model is intended to predict, from the title of a book, whether it is 'fiction' or 'non-fiction'.
This model was trained on data created from the Digitised printed books (18th-19th Century) book collection. The datasets in this collection comprise, and are derived from, 49,455 digitised books (65,227 volumes), mainly from the 19th Century. This dataset is dominated by English-language books and includes books in several other languages in much smaller numbers.
This model was originally developed for use as part of the Living with Machines project to be able to 'segment' this large dataset of books into different categories based on a 'crude' classification of genre, i.e. whether the title was `fiction` or `non-fiction`.
The model's training data (discussed more below) primarily consists of 19th Century book titles from the British Library Digitised printed books (18th-19th century) collection. These books have been catalogued according to British Library cataloguing practices. The model is likely to perform worse on any book titles from earlier or later periods. While the model is multilingual, non-English book titles appear much less frequently in its training data.
## How to use
To use this within fastai, first [install](https://docs.fast.ai/#Installing) version 2 of the fastai library. You can load directly from the Hugging Face hub using the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) library.
```python
from fastai.text.all import load_learner  # load_learner is not importable from the top-level fastai package
from huggingface_hub import hf_hub_download
learn = load_learner(
hf_hub_download('davanstrien/bl-books-genre-fastai', filename="model.pkl")
)
learn.predict("Oliver Twist")
```
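`learn.predict` returns the decoded label together with its index and the class probabilities, so the result can be unpacked as follows (variable names are illustrative):

```python
label, label_idx, probs = learn.predict("Oliver Twist")
print(label, probs[label_idx].item())  # predicted genre and its probability
```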
## Limitations and bias
The model was developed based on data from the British Library's Digitised printed books (18th-19th Century) collection. This dataset is not representative of books from the period covered, with biases towards certain types (e.g. travel) and a likely absence of books that were difficult to digitise.
The formatting of titles in the British Library books corpus may differ from that of other collections, which can result in worse performance on those collections. It is recommended to evaluate the model's performance before applying it to your own data. This model is unlikely to perform well on contemporary book titles without further fine-tuning.
## Training data
The training data was created using the Zooniverse platform. British Library cataloguers carried out the majority of the annotations used as training data. More information on the process of creating the training data will be available soon.
### Training procedure
Model training was carried out using the fastai library version 2.5.2.
The notebook used for training the model is available at: https://github.com/Living-with-machines/genre-classification
## Eval result
The model was evaluated on a held out test set:
```
precision recall f1-score support
Fiction 0.91 0.88 0.90 296
Non-fiction 0.94 0.95 0.95 554
accuracy 0.93 850
macro avg 0.93 0.92 0.92 850
weighted avg 0.93 0.93 0.93 850
```
|
{"library_name": "fastai", "tags": ["text-classification", "fastai"], "datasets": ["blbooksgenre"], "widget": [{"text": "Poems on various subjects. Whereto is prefixed a short essay on the structure of English verse"}, {"text": "Two Centuries of Soho: its institutions, firms, and amusements. By the Clergy of St. Anne's, Soho, J. H. Cardwell ... H. B. Freeman ... G. C. Wilton ... assisted by other contributors, etc"}, {"text": "The Adventures of Oliver Twist. [With plates.]"}]}
|
TheBritishLibrary/bl-books-genre-fastai
| null |
[
"fastai",
"text-classification",
"dataset:blbooksgenre",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#fastai #text-classification #dataset-blbooksgenre #region-us
|
## Model description
This model is intended to predict, from the title of a book, whether it is 'fiction' or 'non-fiction'.
This model was trained on data created from the Digitised printed books (18th-19th Century) book collection. The datasets in this collection are comprised and derived from 49,455 digitised books (65,227 volumes), mainly from the 19th Century. This dataset is dominated by English language books and includes books in several other languages in much smaller numbers.
This model was originally developed for use as part of the Living with Machines project to be able to 'segment' this large dataset of books into different categories based on a 'crude' classification of genre i.e. whether the title was 'fiction' or 'non-fiction'.
The model's training data (discussed more below) primarily consists of 19th Century book titles from the British Library Digitised printed books (18th-19th century) collection. These books have been catalogued according to British Library cataloguing practices. The model is likely to perform worse on any book titles from earlier or later periods. While the model is multilingual, it has training data in non-English book titles; these appear much less frequently.
## How to use
To use this within fastai, first install version 2 of the fastai library. You can load directly from the Hugging Face hub using the 'huggingface_hub' library.
## Limitations and bias
The model was developed based on data from the British Library's Digitised printed books (18th-19th Century) collection. This dataset is not representative of books from the period covered with biases towards certain types (travel) and a likely absence of books that were difficult to digitise.
The formatting of the British Library books corpus titles may differ from other collections, resulting in worse performance on other collections. It is recommended to evaluate the performance of the model before applying it to your own data. Likely, this model won't perform well for contemporary book titles without further fine-tuning.
## Training data
The training data was created using the Zooniverse platform. British Library cataloguers carried out the majority of the annotations used as training data. More information on the process of creating the training data will be available soon.
### Training procedure
Model training was carried out using the fastai library version 2.5.2.
The notebook using for training the model is available at: URL
## Eval result
The model was evaluated on a held out test set:
|
[
"## Model description\n\nThis model is intended to predict, from the title of a book, whether it is 'fiction' or 'non-fiction'.\n\nThis model was trained on data created from the Digitised printed books (18th-19th Century) book collection. The datasets in this collection are comprised and derived from 49,455 digitised books (65,227 volumes), mainly from the 19th Century. This dataset is dominated by English language books and includes books in several other languages in much smaller numbers. \n\nThis model was originally developed for use as part of the Living with Machines project to be able to 'segment' this large dataset of books into different categories based on a 'crude' classification of genre i.e. whether the title was 'fiction' or 'non-fiction'.\n\nThe model's training data (discussed more below) primarily consists of 19th Century book titles from the British Library Digitised printed books (18th-19th century) collection. These books have been catalogued according to British Library cataloguing practices. The model is likely to perform worse on any book titles from earlier or later periods. While the model is multilingual, it has training data in non-English book titles; these appear much less frequently.",
"## How to use\n\nTo use this within fastai, first install version 2 of the fastai library. You can load directly from the Hugging Face hub using the 'huggingface_hub' library.",
"## Limitations and bias\n\nThe model was developed based on data from the British Library's Digitised printed books (18th-19th Century) collection. This dataset is not representative of books from the period covered with biases towards certain types (travel) and a likely absence of books that were difficult to digitise.\n\nThe formatting of the British Library books corpus titles may differ from other collections, resulting in worse performance on other collections. It is recommended to evaluate the performance of the model before applying it to your own data. Likely, this model won't perform well for contemporary book titles without further fine-tuning.",
"## Training data\n\nThe training data was created using the Zooniverse platform. British Library cataloguers carried out the majority of the annotations used as training data. More information on the process of creating the training data will be available soon.",
"### Training procedure\n\nModel training was carried out using the fastai library version 2.5.2. \n\nThe notebook using for training the model is available at: URL",
"## Eval result\n\nThe model was evaluated on a held out test set:"
] |
[
"TAGS\n#fastai #text-classification #dataset-blbooksgenre #region-us \n",
"## Model description\n\nThis model is intended to predict, from the title of a book, whether it is 'fiction' or 'non-fiction'.\n\nThis model was trained on data created from the Digitised printed books (18th-19th Century) book collection. The datasets in this collection are comprised and derived from 49,455 digitised books (65,227 volumes), mainly from the 19th Century. This dataset is dominated by English language books and includes books in several other languages in much smaller numbers. \n\nThis model was originally developed for use as part of the Living with Machines project to be able to 'segment' this large dataset of books into different categories based on a 'crude' classification of genre i.e. whether the title was 'fiction' or 'non-fiction'.\n\nThe model's training data (discussed more below) primarily consists of 19th Century book titles from the British Library Digitised printed books (18th-19th century) collection. These books have been catalogued according to British Library cataloguing practices. The model is likely to perform worse on any book titles from earlier or later periods. While the model is multilingual, it has training data in non-English book titles; these appear much less frequently.",
"## How to use\n\nTo use this within fastai, first install version 2 of the fastai library. You can load directly from the Hugging Face hub using the 'huggingface_hub' library.",
"## Limitations and bias\n\nThe model was developed based on data from the British Library's Digitised printed books (18th-19th Century) collection. This dataset is not representative of books from the period covered with biases towards certain types (travel) and a likely absence of books that were difficult to digitise.\n\nThe formatting of the British Library books corpus titles may differ from other collections, resulting in worse performance on other collections. It is recommended to evaluate the performance of the model before applying it to your own data. Likely, this model won't perform well for contemporary book titles without further fine-tuning.",
"## Training data\n\nThe training data was created using the Zooniverse platform. British Library cataloguers carried out the majority of the annotations used as training data. More information on the process of creating the training data will be available soon.",
"### Training procedure\n\nModel training was carried out using the fastai library version 2.5.2. \n\nThe notebook using for training the model is available at: URL",
"## Eval result\n\nThe model was evaluated on a held out test set:"
] |
null |
adapter-transformers
|
# Adapter `davanstrien/book-genre-classification` for bert-base-cased
An [adapter](https://adapterhub.ml) for the `bert-base-cased` model that was trained on the [text-classification](https://adapterhub.ml/explore/text-classification/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoModelWithHeads
model = AutoModelWithHeads.from_pretrained("bert-base-cased")
adapter_name = model.load_adapter("davanstrien/book-genre-classification", source="hf", set_active=True)
```
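A minimal inference sketch under the assumption that the default `bert-base-cased` tokenizer is used; the mapping from class index to label depends on the loaded head's configuration:

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
inputs = tokenizer("A Study in Scarlet", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)  # model with the adapter and classification head active
print(outputs.logits.argmax(dim=-1).item())
```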
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
{"tags": ["bert", "adapterhub:text-classification", "adapter-transformers"]}
|
davanstrien/book-genre-classification
| null |
[
"adapter-transformers",
"bert",
"adapterhub:text-classification",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#adapter-transformers #bert #adapterhub-text-classification #region-us
|
# Adapter 'davanstrien/book-genre-classification' for bert-base-cased
An adapter for the 'bert-base-cased' model that was trained on the text-classification dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_
Now, the adapter can be loaded and activated like this:
## Architecture & Training
## Evaluation results
|
[
"# Adapter 'davanstrien/book-genre-classification' for bert-base-cased\n\nAn adapter for the 'bert-base-cased' model that was trained on the text-classification dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
[
"TAGS\n#adapter-transformers #bert #adapterhub-text-classification #region-us \n",
"# Adapter 'davanstrien/book-genre-classification' for bert-base-cased\n\nAn adapter for the 'bert-base-cased' model that was trained on the text-classification dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. More_\n\nNow, the adapter can be loaded and activated like this:",
"## Architecture & Training",
"## Evaluation results"
] |
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext_flyswot
This model is a fine-tuned version of [facebook/convnext-base-224-22k](https://huggingface.co/facebook/convnext-base-224-22k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1441
- F1: 0.9592
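A minimal inference sketch (the image path is illustrative; class labels come from the model configuration):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="davanstrien/convnext_flyswot")
print(classifier("page_image.jpg"))  # top predicted labels with scores
```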
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 666
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 52 | 0.6833 | 0.7484 |
| No log | 2.0 | 104 | 0.3666 | 0.8750 |
| No log | 3.0 | 156 | 0.2090 | 0.9321 |
| No log | 4.0 | 208 | 0.1478 | 0.9449 |
| No log | 5.0 | 260 | 0.1002 | 0.9518 |
| No log | 6.0 | 312 | 0.1053 | 0.9506 |
| No log | 7.0 | 364 | 0.1182 | 0.9616 |
| No log | 8.0 | 416 | 0.1102 | 0.9592 |
| No log | 9.0 | 468 | 0.1262 | 0.9616 |
| 0.203 | 10.0 | 520 | 0.1286 | 0.9616 |
| 0.203 | 11.0 | 572 | 0.1355 | 0.9592 |
| 0.203 | 12.0 | 624 | 0.1299 | 0.9592 |
| 0.203 | 13.0 | 676 | 0.1154 | 0.9592 |
| 0.203 | 14.0 | 728 | 0.1385 | 0.9580 |
| 0.203 | 15.0 | 780 | 0.1330 | 0.9592 |
| 0.203 | 16.0 | 832 | 0.1390 | 0.9592 |
| 0.203 | 17.0 | 884 | 0.1386 | 0.9592 |
| 0.203 | 18.0 | 936 | 0.1390 | 0.9592 |
| 0.203 | 19.0 | 988 | 0.1409 | 0.9592 |
| 0.0006 | 20.0 | 1040 | 0.1411 | 0.9592 |
| 0.0006 | 21.0 | 1092 | 0.1413 | 0.9592 |
| 0.0006 | 22.0 | 1144 | 0.1415 | 0.9592 |
| 0.0006 | 23.0 | 1196 | 0.1426 | 0.9592 |
| 0.0006 | 24.0 | 1248 | 0.1435 | 0.9592 |
| 0.0006 | 25.0 | 1300 | 0.1438 | 0.9592 |
| 0.0006 | 26.0 | 1352 | 0.1434 | 0.9592 |
| 0.0006 | 27.0 | 1404 | 0.1437 | 0.9592 |
| 0.0006 | 28.0 | 1456 | 0.1441 | 0.9592 |
| 0.0002 | 29.0 | 1508 | 0.1440 | 0.9592 |
| 0.0002 | 30.0 | 1560 | 0.1441 | 0.9592 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["image_folder"], "metrics": ["f1"], "base_model": "facebook/convnext-base-224-22k", "model-index": [{"name": "convnext_flyswot", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "image_folder", "type": "image_folder", "args": "default"}, "metrics": [{"type": "f1", "value": 0.959245529738118, "name": "F1"}]}]}]}
|
davanstrien/convnext_flyswot
| null |
[
"transformers",
"pytorch",
"safetensors",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"base_model:facebook/convnext-base-224-22k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #convnext #image-classification #generated_from_trainer #dataset-image_folder #base_model-facebook/convnext-base-224-22k #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
convnext\_flyswot
=================
This model is a fine-tuned version of facebook/convnext-base-224-22k on the image\_folder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1441
* F1: 0.9592
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 666
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.6
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 666\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #safetensors #convnext #image-classification #generated_from_trainer #dataset-image_folder #base_model-facebook/convnext-base-224-22k #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 666\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext_manuscript_iiif
This model is a fine-tuned version of [facebook/convnext-base-224-22k](https://huggingface.co/facebook/convnext-base-224-22k) on the davanstrien/iiif_manuscripts_label_ge_50 dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5856
- F1: 0.0037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
- mixed_precision_training: Native AMP
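The hyperparameters above correspond roughly to the following `TrainingArguments`; this is a sketch, and `output_dir` as well as any argument not listed above is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="convnext_manuscript_iiif",  # illustrative output directory
    learning_rate=2e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=1337,
    lr_scheduler_type="linear",
    num_train_epochs=30.0,
    fp16=True,  # Native AMP
)
```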
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.5753 | 1.0 | 2038 | 6.4121 | 0.0016 |
| 5.9865 | 2.0 | 4076 | 5.9466 | 0.0021 |
| 5.6521 | 3.0 | 6114 | 5.7645 | 0.0029 |
| 5.3123 | 4.0 | 8152 | 5.6890 | 0.0033 |
| 5.0337 | 5.0 | 10190 | 5.6692 | 0.0034 |
| 4.743 | 6.0 | 12228 | 5.5856 | 0.0037 |
| 4.4387 | 7.0 | 14266 | 5.5969 | 0.0042 |
| 4.1422 | 8.0 | 16304 | 5.6711 | 0.0043 |
| 3.8372 | 9.0 | 18342 | 5.6761 | 0.0044 |
| 3.5244 | 10.0 | 20380 | 5.8469 | 0.0042 |
| 3.2321 | 11.0 | 22418 | 5.8774 | 0.0045 |
| 2.9004 | 12.0 | 24456 | 6.1186 | 0.0047 |
| 2.5937 | 13.0 | 26494 | 6.2398 | 0.0046 |
| 2.2983 | 14.0 | 28532 | 6.3732 | 0.0049 |
| 2.0611 | 15.0 | 30570 | 6.5024 | 0.0045 |
| 1.8153 | 16.0 | 32608 | 6.6585 | 0.0047 |
| 1.6075 | 17.0 | 34646 | 6.8333 | 0.0043 |
| 1.4342 | 18.0 | 36684 | 6.9529 | 0.0044 |
| 1.2614 | 19.0 | 38722 | 7.1129 | 0.0046 |
| 1.1463 | 20.0 | 40760 | 7.1977 | 0.0039 |
| 1.0387 | 21.0 | 42798 | 7.2700 | 0.0044 |
| 0.9635 | 22.0 | 44836 | 7.3375 | 0.0040 |
| 0.8872 | 23.0 | 46874 | 7.4003 | 0.0039 |
| 0.8156 | 24.0 | 48912 | 7.4884 | 0.0039 |
| 0.7544 | 25.0 | 50950 | 7.4764 | 0.0039 |
| 0.6893 | 26.0 | 52988 | 7.5153 | 0.0042 |
| 0.6767 | 27.0 | 55026 | 7.5427 | 0.0043 |
| 0.6098 | 28.0 | 57064 | 7.5547 | 0.0042 |
| 0.5871 | 29.0 | 59102 | 7.5533 | 0.0041 |
| 0.5696 | 30.0 | 61140 | 7.5595 | 0.0041 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "metrics": ["f1"], "base_model": "facebook/convnext-base-224-22k", "model-index": [{"name": "convnext_manuscript_iiif", "results": []}]}
|
davanstrien/convnext_manuscript_iiif
| null |
[
"transformers",
"pytorch",
"safetensors",
"convnext",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnext-base-224-22k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #convnext #image-classification #generated_from_trainer #base_model-facebook/convnext-base-224-22k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
convnext\_manuscript\_iiif
==========================
This model is a fine-tuned version of facebook/convnext-base-224-22k on the davanstrien/iiif\_manuscripts\_label\_ge\_50 dataset.
It achieves the following results on the evaluation set:
* Loss: 5.5856
* F1: 0.0037
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 1337
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 30.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.18.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.6
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 1337\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.18.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #safetensors #convnext #image-classification #generated_from_trainer #base_model-facebook/convnext-base-224-22k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 1337\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 30.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.18.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
object-detection
|
transformers
|
# detr_beyond_words (WIP)
[facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) fine-tuned on [Beyond Words](https://github.com/LibraryOfCongress/newspaper-navigator/tree/master/beyond_words_data).
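A minimal inference sketch (the image path is illustrative; labels come from the model configuration):

```python
from transformers import pipeline
from PIL import Image

detector = pipeline("object-detection", model="davanstrien/detr_beyond_words")
image = Image.open("newspaper_page.jpg")
for pred in detector(image):
    print(pred["label"], round(pred["score"], 3), pred["box"])
```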
|
{"license": "mit", "tags": ["object-detection"], "widget": [{"src": "https://huggingface.co/davanstrien/detr_beyond_words/resolve/main/19.jpg", "example_title": "page"}, {"src": "https://huggingface.co/davanstrien/detr_beyond_words/resolve/main/65.jpg", "example_title": "page2"}]}
|
davanstrien/detr_beyond_words
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #safetensors #detr #object-detection #license-mit #endpoints_compatible #region-us
|
# detr_beyond_words (WIP)
facebook/detr-resnet-50 fine tuned on Beyond Words.
|
[
"# detr_beyond_words (WIP) \n\nfacebook/detr-resnet-50 fine tuned on Beyond Words."
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #detr #object-detection #license-mit #endpoints_compatible #region-us \n",
"# detr_beyond_words (WIP) \n\nfacebook/detr-resnet-50 fine tuned on Beyond Words."
] |
null | null |
# flyswot
## Model description
In progress model for detecting 'fake' flysheets
## Intended uses & limitations
Not currently intended for public consumption...
#### Limitations and bias
Not currently intended for public consumption...
## Training data
TODO
## Eval results
|
{}
|
davanstrien/flyswot-test
| null |
[
"onnx",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#onnx #region-us
|
# flyswot
## Model description
In progress model for detecting 'fake' flysheets
## Intended uses & limitations
Not currently intended for public consumption...
#### Limitations and bias
Not currently intended for public consumption...
## Training data
TODO
## Eval results
|
[
"# flyswot",
"## Model description\n\nIn progress model for detecting 'fake' flysheets",
"## Intended uses & limitations\n\nNot currently intended for public consumption...",
"#### Limitations and bias\n\nNot currently intended for public consumption...",
"## Training data\n\nTODO",
"## Eval results"
] |
[
"TAGS\n#onnx #region-us \n",
"# flyswot",
"## Model description\n\nIn progress model for detecting 'fake' flysheets",
"## Intended uses & limitations\n\nNot currently intended for public consumption...",
"#### Limitations and bias\n\nNot currently intended for public consumption...",
"## Training data\n\nTODO",
"## Eval results"
] |
null | null |
TODO
## Model description
In progress model for detecting 'fake' flysheets
## Intended uses & limitations
Not currently intended for public consumption...
## Limitations and bias
Not currently intended for public consumption...
## Training data
## Eval results
|
{}
|
davanstrien/flyswot
| null |
[
"onnx",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#onnx #region-us
|
TODO
## Model description
In progress model for detecting 'fake' flysheets
## Intended uses & limitations
Not currently intended for public consumption...
## Limitations and bias
Not currently intended for public consumption...
## Training data
## Eval results
|
[
"## Model description\n\nIn progress model for detecting 'fake' flysheets",
"## Intended uses & limitations\n\nNot currently intended for public consumption...",
"## Limitations and bias\n\nNot currently intended for public consumption...",
"## Training data",
"## Eval results"
] |
[
"TAGS\n#onnx #region-us \n",
"## Model description\n\nIn progress model for detecting 'fake' flysheets",
"## Intended uses & limitations\n\nNot currently intended for public consumption...",
"## Limitations and bias\n\nNot currently intended for public consumption...",
"## Training data",
"## Eval results"
] |
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flyswot_iiif
This model is a fine-tuned version of [facebook/convnext-base-224-22k](https://huggingface.co/facebook/convnext-base-224-22k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1280
- F1: 0.0034
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 666
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 8.5184 | 0.26 | 500 | 7.9280 | 0.0005 |
| 7.7409 | 0.52 | 1000 | 7.5824 | 0.0007 |
| 7.4649 | 0.78 | 1500 | 7.3841 | 0.0010 |
| 7.3285 | 1.04 | 2000 | 7.2652 | 0.0012 |
| 7.1404 | 1.3 | 2500 | 7.1559 | 0.0014 |
| 7.0322 | 1.56 | 3000 | 7.0551 | 0.0016 |
| 6.9197 | 1.82 | 3500 | 6.9449 | 0.0019 |
| 6.7822 | 2.09 | 4000 | 6.8773 | 0.0018 |
| 6.6506 | 2.35 | 4500 | 6.7980 | 0.0020 |
| 6.5811 | 2.61 | 5000 | 6.7382 | 0.0022 |
| 6.538 | 2.87 | 5500 | 6.6582 | 0.0022 |
| 6.4136 | 3.13 | 6000 | 6.6013 | 0.0024 |
| 6.3325 | 3.39 | 6500 | 6.5369 | 0.0024 |
| 6.2566 | 3.65 | 7000 | 6.4875 | 0.0025 |
| 6.2285 | 3.91 | 7500 | 6.4342 | 0.0027 |
| 6.1281 | 4.17 | 8000 | 6.4066 | 0.0027 |
| 6.0762 | 4.43 | 8500 | 6.3674 | 0.0027 |
| 6.0309 | 4.69 | 9000 | 6.3336 | 0.0027 |
| 6.0123 | 4.95 | 9500 | 6.2932 | 0.0030 |
| 5.9089 | 5.21 | 10000 | 6.2835 | 0.0029 |
| 5.8901 | 5.47 | 10500 | 6.2481 | 0.0030 |
| 5.86 | 5.74 | 11000 | 6.2295 | 0.0030 |
| 5.8586 | 6.0 | 11500 | 6.2068 | 0.0033 |
| 5.7768 | 6.26 | 12000 | 6.1937 | 0.0031 |
| 5.7591 | 6.52 | 12500 | 6.1916 | 0.0032 |
| 5.7443 | 6.78 | 13000 | 6.1579 | 0.0033 |
| 5.7125 | 7.04 | 13500 | 6.1478 | 0.0033 |
| 5.6751 | 7.3 | 14000 | 6.1379 | 0.0035 |
| 5.6648 | 7.56 | 14500 | 6.1304 | 0.0035 |
| 5.6644 | 7.82 | 15000 | 6.1280 | 0.0034 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "facebook/convnext-base-224-22k", "model-index": [{"name": "flyswot_iiif", "results": []}]}
|
davanstrien/flyswot_iiif
| null |
[
"transformers",
"pytorch",
"convnext",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnext-base-224-22k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #convnext #image-classification #generated_from_trainer #base_model-facebook/convnext-base-224-22k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
flyswot\_iiif
=============
This model is a fine-tuned version of facebook/convnext-base-224-22k on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 6.1280
* F1: 0.0034
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 666
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8
* mixed\_precision\_training: Native AMP
* label\_smoothing\_factor: 0.1
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.6
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 666\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8\n* mixed\\_precision\\_training: Native AMP\n* label\\_smoothing\\_factor: 0.1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #convnext #image-classification #generated_from_trainer #base_model-facebook/convnext-base-224-22k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 666\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8\n* mixed\\_precision\\_training: Native AMP\n* label\\_smoothing\\_factor: 0.1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flyswot_test
This model is a fine-tuned version of [facebook/convnext-base-224-22k](https://huggingface.co/facebook/convnext-base-224-22k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1518
- eval_f1: 0.9595
- eval_runtime: 5.9337
- eval_samples_per_second: 69.603
- eval_steps_per_second: 2.191
- epoch: 7.0
- step: 364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 666
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["image_folder"], "base_model": "facebook/convnext-base-224-22k", "model-index": [{"name": "flyswot_test", "results": []}]}
|
davanstrien/flyswot_test
| null |
[
"transformers",
"pytorch",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"base_model:facebook/convnext-base-224-22k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #convnext #image-classification #generated_from_trainer #dataset-image_folder #base_model-facebook/convnext-base-224-22k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# flyswot_test
This model is a fine-tuned version of facebook/convnext-base-224-22k on the image_folder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1518
- eval_f1: 0.9595
- eval_runtime: 5.9337
- eval_samples_per_second: 69.603
- eval_steps_per_second: 2.191
- epoch: 7.0
- step: 364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 666
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
[
"# flyswot_test\n\nThis model is a fine-tuned version of facebook/convnext-base-224-22k on the image_folder dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.1518\n- eval_f1: 0.9595\n- eval_runtime: 5.9337\n- eval_samples_per_second: 69.603\n- eval_steps_per_second: 2.191\n- epoch: 7.0\n- step: 364",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 666\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 40\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #convnext #image-classification #generated_from_trainer #dataset-image_folder #base_model-facebook/convnext-base-224-22k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# flyswot_test\n\nThis model is a fine-tuned version of facebook/convnext-base-224-22k on the image_folder dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.1518\n- eval_f1: 0.9595\n- eval_runtime: 5.9337\n- eval_samples_per_second: 69.603\n- eval_steps_per_second: 2.191\n- epoch: 7.0\n- step: 364",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 666\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 40\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.6"
] |
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iiif_manuscript_vit
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5684
- F1: 0.5996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.5639 | 1.0 | 2269 | 0.5822 | 0.5516 |
| 0.5834 | 2.0 | 4538 | 0.5825 | 0.5346 |
| 0.5778 | 3.0 | 6807 | 0.5794 | 0.6034 |
| 0.5735 | 4.0 | 9076 | 0.5742 | 0.5713 |
| 0.5731 | 5.0 | 11345 | 0.5745 | 0.6008 |
| 0.5701 | 6.0 | 13614 | 0.5729 | 0.5499 |
| 0.5696 | 7.0 | 15883 | 0.5717 | 0.5952 |
| 0.5683 | 8.0 | 18152 | 0.5680 | 0.6005 |
| 0.5648 | 9.0 | 20421 | 0.5679 | 0.5967 |
| 0.564 | 10.0 | 22690 | 0.5684 | 0.5996 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
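
The training script itself is not part of this card; a plausible `compute_metrics` sketch that would produce the F1 values in the table above (the choice of `weighted` averaging is an assumption, since only a single aggregate F1 is reported) is shown below.

```python
import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels) for each evaluation pass
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # "weighted" averaging is an assumption; the card reports one aggregate F1
    return {"f1": f1_score(labels, predictions, average="weighted")}
```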
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "iiif_manuscript_vit", "results": []}]}
|
davanstrien/iiif_manuscript_vit
| null |
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
iiif\_manuscript\_vit
=====================
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5684
* F1: 0.5996
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
* label\_smoothing\_factor: 0.1
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP\n* label\\_smoothing\\_factor: 0.1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP\n* label\\_smoothing\\_factor: 0.1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
null |
generic
|
# TODO
-
-
-
-
|
{"library_name": "generic", "tags": ["chemistry"]}
|
davanstrien/test
| null |
[
"generic",
"chemistry",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#generic #chemistry #region-us
|
# TODO
-
-
-
-
|
[
"# TODO\n-\n-\n-\n-"
] |
[
"TAGS\n#generic #chemistry #region-us \n",
"# TODO\n-\n-\n-\n-"
] |
null |
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-manuscripts
This model is a fine-tuned version of [facebook/vit-mae-base](https://huggingface.co/facebook/vit-mae-base) on the davanstrien/manuscript_iiif_test dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5177
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5303 | 1.0 | 34 | 0.5134 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
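
Since this is a masked-autoencoder pretraining checkpoint rather than a classifier, the most direct way to exercise it is to compute the reconstruction loss on an image. A minimal sketch (the image path is a placeholder, and it is assumed the repository ships an image preprocessor config) could be:

```python
from PIL import Image
from transformers import AutoFeatureExtractor, ViTMAEForPreTraining

model_id = "davanstrien/vit-manuscripts"  # repo id of this card
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = ViTMAEForPreTraining.from_pretrained(model_id)

# "manuscript_page.jpg" is a placeholder path to a local image
image = Image.open("manuscript_page.jpg").convert("RGB")
inputs = feature_extractor(images=image, return_tensors="pt")

outputs = model(**inputs)
print(outputs.loss)  # reconstruction loss over the masked patches
```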
|
{"license": "apache-2.0", "tags": ["masked-auto-encoding", "generated_from_trainer"], "base_model": "facebook/vit-mae-base", "model-index": [{"name": "vit-manuscripts", "results": []}]}
|
davanstrien/vit-manuscripts
| null |
[
"transformers",
"pytorch",
"tensorboard",
"vit_mae",
"pretraining",
"masked-auto-encoding",
"generated_from_trainer",
"base_model:facebook/vit-mae-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #vit_mae #pretraining #masked-auto-encoding #generated_from_trainer #base_model-facebook/vit-mae-base #license-apache-2.0 #endpoints_compatible #region-us
|
vit-manuscripts
===============
This model is a fine-tuned version of facebook/vit-mae-base on the davanstrien/manuscript\_iiif\_test dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5177
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 1337
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.05
* num\_epochs: 1.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.0+cu111
* Datasets 1.18.2
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 1337\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 1.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #vit_mae #pretraining #masked-auto-encoding #generated_from_trainer #base_model-facebook/vit-mae-base #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 1337\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 1.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_flyswot_test
This model is a fine-tuned version of [](https://huggingface.co/) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4777
- F1: 0.8492
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 666
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
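
These values map directly onto `transformers.TrainingArguments`; a minimal sketch of that configuration (the output directory is a placeholder, the listed batch sizes are treated as per-device values, and the Adam settings are simply the defaults spelled out) could look like the following.

```python
from transformers import TrainingArguments

# Sketch of the configuration listed above; "./outputs" is a placeholder directory
training_args = TrainingArguments(
    output_dir="./outputs",
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=666,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed precision
)
```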
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 52 | 1.2007 | 0.3533 |
| No log | 2.0 | 104 | 1.0037 | 0.5525 |
| No log | 3.0 | 156 | 0.8301 | 0.6318 |
| No log | 4.0 | 208 | 0.7224 | 0.6946 |
| No log | 5.0 | 260 | 0.7298 | 0.7145 |
| No log | 6.0 | 312 | 0.6328 | 0.7729 |
| No log | 7.0 | 364 | 0.6010 | 0.7992 |
| No log | 8.0 | 416 | 0.5174 | 0.8364 |
| No log | 9.0 | 468 | 0.5084 | 0.8479 |
| 0.6372 | 10.0 | 520 | 0.4777 | 0.8492 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"tags": ["generated_from_trainer"], "datasets": ["image_folder"], "metrics": ["f1"], "model-index": [{"name": "vit_flyswot_test", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "image_folder", "type": "image_folder", "args": "default"}, "metrics": [{"type": "f1", "value": 0.849172221610369, "name": "F1"}]}]}]}
|
davanstrien/vit_flyswot_test
| null |
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #vit #image-classification #generated_from_trainer #dataset-image_folder #model-index #autotrain_compatible #endpoints_compatible #region-us
|
vit\_flyswot\_test
==================
This model is a fine-tuned version of [](URL on the image\_folder dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4777
* F1: 0.8492
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 666
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.6
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 666\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #vit #image-classification #generated_from_trainer #dataset-image_folder #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 666\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9199
- Mae: 0.4756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1705 | 1.0 | 235 | 0.9985 | 0.5854 |
| 0.9721 | 2.0 | 470 | 0.9199 | 0.4756 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
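
As a minimal usage sketch (the review text is invented, and the labels are assumed to correspond to the 1 to 5 star ratings of the amazon_reviews_multi task, which is consistent with the MAE metric reported above):

```python
from transformers import pipeline

# "daveccampbell/xlm-roberta-base-finetuned-marc-en" is the repo id of this card
classifier = pipeline(
    "text-classification",
    model="daveccampbell/xlm-roberta-base-finetuned-marc-en",
)

# Invented example review; the predicted label should correspond to a star rating
print(classifier("The kettle looks nice but stopped working after a week."))
```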
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc-en", "results": []}]}
|
daveccampbell/xlm-roberta-base-finetuned-marc-en
| null |
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
xlm-roberta-base-finetuned-marc-en
==================================
This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9199
* Mae: 0.4756
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
**Note**: This model & model card are based on the [finetuned XLM-T for Sentiment Analysis](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment)
# twitter-XLM-roBERTa-base for Emotion Analysis
This is an XLM-roBERTa-base model trained on ~198M tweets and fine-tuned for emotion analysis in Spanish. The model was presented at the EmoEvalEs competition, part of the [IberLEF 2021 Conference](https://sites.google.com/view/iberlef2021/), where the proposed task was the classification of Spanish tweets into seven classes: *anger*, *disgust*, *fear*, *joy*, *sadness*, *surprise*, and *other*. We achieved first position in the competition with a macro-averaged F1 score of 71.70%.
- [Our code for EmoEvalEs submission](https://github.com/gsi-upm/emoevales-iberlef2021).
- [EmoEvalEs Dataset](https://github.com/pendrag/EmoEvalEs)
## Example Pipeline with a [Tweet from @JaSantaolalla](https://twitter.com/JaSantaolalla/status/1398383243645177860)
```python
from transformers import pipeline
model_path = "daveni/twitter-xlm-roberta-emotion-es"
emotion_analysis = pipeline("text-classification", framework="pt", model=model_path, tokenizer=model_path)
emotion_analysis("Einstein dijo: Solo hay dos cosas infinitas, el universo y los pinches anuncios de bitcoin en Twitter. Paren ya carajo aaaaaaghhgggghhh me quiero murir")
```
```
[{'label': 'anger', 'score': 0.48307016491889954}]
```
## Full classification example
```python
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoConfig
import numpy as np
from scipy.special import softmax
# Preprocess text (username and link placeholders)
def preprocess(text):
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)

model_path = "daveni/twitter-xlm-roberta-emotion-es"
tokenizer = AutoTokenizer.from_pretrained(model_path)
config = AutoConfig.from_pretrained(model_path)
# PT
model = AutoModelForSequenceClassification.from_pretrained(model_path)
text = "Se ha quedao bonito día para publicar vídeo, ¿no? Hoy del tema más diferente que hemos tocado en el canal."
text = preprocess(text)
print(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# Print labels and scores
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
    l = config.id2label[ranking[i]]
    s = scores[ranking[i]]
    print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
Se ha quedao bonito día para publicar vídeo, ¿no? Hoy del tema más diferente que hemos tocado en el canal.
1) joy 0.7887
2) others 0.1679
3) surprise 0.0152
4) sadness 0.0145
5) anger 0.0077
6) disgust 0.0033
7) fear 0.0027
```
#### Limitations and bias
- The dataset we used for fine-tuning was unbalanced: almost half of the records belonged to the *other* class, so there may be a bias towards this class.
## Training data
Pretrained weights were left identical to the original model released by [cardiffnlp](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base). We used the [EmoEvalEs Dataset](https://github.com/pendrag/EmoEvalEs) for finetuning.
### BibTeX entry and citation info
```bibtex
@inproceedings{vera2021gsi,
title={GSI-UPM at IberLEF2021: Emotion Analysis of Spanish Tweets by Fine-tuning the XLM-RoBERTa Language Model},
author={Vera, D and Araque, O and Iglesias, CA},
booktitle={Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2021). CEUR Workshop Proceedings, CEUR-WS, M{\'a}laga, Spain},
year={2021}
}
```
|
{"language": ["es"], "tags": ["Emotion Analysis"]}
|
daveni/twitter-xlm-roberta-emotion-es
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"Emotion Analysis",
"es",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"es"
] |
TAGS
#transformers #pytorch #xlm-roberta #text-classification #Emotion Analysis #es #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Note: This model & model card are based on the finetuned XLM-T for Sentiment Analysis
# twitter-XLM-roBERTa-base for Emotion Analysis
This is an XLM-roBERTa-base model trained on ~198M tweets and fine-tuned for emotion analysis in Spanish. The model was presented at the EmoEvalEs competition, part of the IberLEF 2021 Conference, where the proposed task was the classification of Spanish tweets into seven classes: *anger*, *disgust*, *fear*, *joy*, *sadness*, *surprise*, and *other*. We achieved first position in the competition with a macro-averaged F1 score of 71.70%.
- Our code for EmoEvalEs submission.
- EmoEvalEs Dataset
## Example Pipeline with a Tweet from @JaSantaolalla
## Full classification example
Output:
#### Limitations and bias
- The dataset we used for fine-tuning was unbalanced: almost half of the records belonged to the *other* class, so there may be a bias towards this class.
## Training data
Pretrained weights were left identical to the original model released by cardiffnlp. We used the EmoEvalEs Dataset for finetuning.
### BibTeX entry and citation info
|
[
"# twitter-XLM-roBERTa-base for Emotion Analysis\nThis is a XLM-roBERTa-base model trained on ~198M tweets and finetuned for emotion analysis on Spanish language. This model was presented to EmoEvalEs competition, part of IberLEF 2021 Conference, where the proposed task was the classification of Spanish tweets between seven different classes: *anger*, *disgust*, *fear*, *joy*, *sadness*, *surprise*, and *other*. We achieved the first position in the competition with a macro-averaged F1 score of 71.70%. \n- Our code for EmoEvalEs submission.\n- EmoEvalEs Dataset",
"## Example Pipeline with a Tweet from @JaSantaolalla",
"## Full classification example\n\nOutput:",
"#### Limitations and bias\n\n- The dataset we used for finetuning was unbalanced, where almost half of the records belonged to the *other* class so there might be bias towards this class.",
"## Training data\n\nPretrained weights were left identical to the original model released by cardiffnlp. We used the EmoEvalEs Dataset for finetuning.",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #xlm-roberta #text-classification #Emotion Analysis #es #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# twitter-XLM-roBERTa-base for Emotion Analysis\nThis is a XLM-roBERTa-base model trained on ~198M tweets and finetuned for emotion analysis on Spanish language. This model was presented to EmoEvalEs competition, part of IberLEF 2021 Conference, where the proposed task was the classification of Spanish tweets between seven different classes: *anger*, *disgust*, *fear*, *joy*, *sadness*, *surprise*, and *other*. We achieved the first position in the competition with a macro-averaged F1 score of 71.70%. \n- Our code for EmoEvalEs submission.\n- EmoEvalEs Dataset",
"## Example Pipeline with a Tweet from @JaSantaolalla",
"## Full classification example\n\nOutput:",
"#### Limitations and bias\n\n- The dataset we used for finetuning was unbalanced, where almost half of the records belonged to the *other* class so there might be bias towards this class.",
"## Training data\n\nPretrained weights were left identical to the original model released by cardiffnlp. We used the EmoEvalEs Dataset for finetuning.",
"### BibTeX entry and citation info"
] |
null | null |
Relevance prediction model
|
{}
|
davinan/relevance_prediction
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
Relevance prediction model
|
[] |
[
"TAGS\n#region-us \n"
] |
text-generation
|
transformers
|
A small French language model for French text generation (and possibly more NLP tasks...)
**Introduction**
This French GPT-2 model is based on the OpenAI GPT-2 small model.
It was trained on a <b>very small (190 MB) dataset</b> from French Wikipedia, using transfer learning and fine-tuning techniques, in just over a day on a single Colab Pro GPU (16 GB).
It was created by applying the recipe of <b>Pierre Guillou</b>.
See https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787
It is a proof of concept showing that it is possible to obtain a language model in any language with low resources.
It was fine-tuned from the English pre-trained GPT-2 small using the Hugging Face libraries (Transformers and Tokenizers) wrapped into the fastai v2 deep learning framework. All the fastai v2 fine-tuning techniques were used.
It is now available on Hugging Face. For further information or requests, please go to "Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)".
The model might be improved by using a larger dataset and a more powerful training infrastructure. At the very least it can be used for small fine-tuning experiments (e.g. with aitextgen).
PS: I've lost the metrics, but the model speaks French with some minor grammar issues, and text coherence is somewhat limited.
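As a minimal generation sketch (the prompt and sampling settings are illustrative only):

```python
from transformers import pipeline

# "dbddv01/gpt2-french-small" is the repo id of this card
generator = pipeline("text-generation", model="dbddv01/gpt2-french-small")

# French prompt; sampling parameters are illustrative, not tuned
outputs = generator("La France est un pays", max_length=50, do_sample=True, top_k=50)
print(outputs[0]["generated_text"])
```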
|
{"language": "fr", "tags": ["french", "gpt2", "model"]}
|
dbddv01/gpt2-french-small
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"french",
"model",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #pytorch #jax #safetensors #gpt2 #text-generation #french #model #fr #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
A small French language model for French text generation (and possibly more NLP tasks...)
Introduction
This French GPT-2 model is based on the OpenAI GPT-2 small model.
It was trained on a <b>very small (190 MB) dataset</b> from French Wikipedia, using transfer learning and fine-tuning techniques, in just over a day on a single Colab Pro GPU (16 GB).
It was created by applying the recipe of <b>Pierre Guillou</b>.
See URL
It is a proof of concept showing that it is possible to obtain a language model in any language with low resources.
It was fine-tuned from the English pre-trained GPT-2 small using the Hugging Face libraries (Transformers and Tokenizers) wrapped into the fastai v2 deep learning framework. All the fastai v2 fine-tuning techniques were used.
It is now available on Hugging Face. For further information or requests, please go to "Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)".
The model might be improved by using a larger dataset and a more powerful training infrastructure. At the very least it can be used for small fine-tuning experiments (e.g. with aitextgen).
PS: I've lost the metrics, but the model speaks French with some minor grammar issues, and text coherence is somewhat limited.
|
[] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #gpt2 #text-generation #french #model #fr #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-italian-robust
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the Common Voice 7 & Libri Speech datasets.
It achieves the following results on the evaluation set:
- Loss: 0.2428
- Wer: 0.2960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.07 | 400 | 1.0053 | 0.8058 |
| 1.5087 | 0.13 | 800 | 0.9127 | 0.8104 |
| 0.9552 | 0.2 | 1200 | 1.0360 | 0.8836 |
| 0.9555 | 0.27 | 1600 | 0.9980 | 0.8577 |
| 1.0259 | 0.34 | 2000 | 1.0103 | 0.8842 |
| 1.0259 | 0.4 | 2400 | 0.9119 | 0.8466 |
| 1.0365 | 0.47 | 2800 | 0.9000 | 0.8281 |
| 1.0069 | 0.54 | 3200 | 0.7976 | 0.7875 |
| 0.9688 | 0.61 | 3600 | 0.8126 | 0.8051 |
| 0.9638 | 0.67 | 4000 | 0.7921 | 0.7903 |
| 0.9638 | 0.74 | 4400 | 0.7703 | 0.7783 |
| 0.9327 | 0.81 | 4800 | 0.7253 | 0.7463 |
| 0.8992 | 0.88 | 5200 | 0.6841 | 0.7171 |
| 0.8693 | 0.94 | 5600 | 0.6867 | 0.7250 |
| 0.8433 | 1.01 | 6000 | 0.7077 | 0.7302 |
| 0.8433 | 1.08 | 6400 | 0.6685 | 0.7091 |
| 0.8499 | 1.14 | 6800 | 0.6355 | 0.6825 |
| 0.8159 | 1.21 | 7200 | 0.6283 | 0.6800 |
| 0.8001 | 1.28 | 7600 | 0.6288 | 0.6743 |
| 0.7883 | 1.35 | 8000 | 0.5995 | 0.6633 |
| 0.7883 | 1.41 | 8400 | 0.6195 | 0.6726 |
| 0.7863 | 1.48 | 8800 | 0.6039 | 0.6588 |
| 0.7713 | 1.55 | 9200 | 0.5842 | 0.6490 |
| 0.7572 | 1.62 | 9600 | 0.5975 | 0.6533 |
| 0.7442 | 1.68 | 10000 | 0.5508 | 0.6233 |
| 0.7442 | 1.75 | 10400 | 0.5521 | 0.6209 |
| 0.7296 | 1.82 | 10800 | 0.5760 | 0.6245 |
| 0.7205 | 1.89 | 11200 | 0.5593 | 0.6144 |
| 0.7106 | 1.95 | 11600 | 0.5672 | 0.6220 |
| 0.7146 | 2.02 | 12000 | 0.5134 | 0.5911 |
| 0.7146 | 2.09 | 12400 | 0.5069 | 0.5811 |
| 0.6944 | 2.15 | 12800 | 0.5022 | 0.5962 |
| 0.6817 | 2.22 | 13200 | 0.4989 | 0.5813 |
| 0.6721 | 2.29 | 13600 | 0.4941 | 0.5742 |
| 0.6774 | 2.36 | 14000 | 0.4775 | 0.5676 |
| 0.6774 | 2.42 | 14400 | 0.4694 | 0.5525 |
| 0.6621 | 2.49 | 14800 | 0.4720 | 0.5514 |
| 0.6599 | 2.56 | 15200 | 0.4714 | 0.5553 |
| 0.6591 | 2.63 | 15600 | 0.4578 | 0.5397 |
| 0.645 | 2.69 | 16000 | 0.4619 | 0.5452 |
| 0.645 | 2.76 | 16400 | 0.4578 | 0.5343 |
| 0.6431 | 2.83 | 16800 | 0.4514 | 0.5328 |
| 0.636 | 2.9 | 17200 | 0.4526 | 0.5325 |
| 0.6433 | 2.96 | 17600 | 0.4561 | 0.5325 |
| 0.6356 | 3.03 | 18000 | 0.4386 | 0.5191 |
| 0.6356 | 3.1 | 18400 | 0.4291 | 0.5065 |
| 0.6175 | 3.16 | 18800 | 0.4306 | 0.5170 |
| 0.6187 | 3.23 | 19200 | 0.4256 | 0.5036 |
| 0.607 | 3.3 | 19600 | 0.4198 | 0.5027 |
| 0.6004 | 3.37 | 20000 | 0.4149 | 0.4906 |
| 0.6004 | 3.43 | 20400 | 0.4114 | 0.4902 |
| 0.6002 | 3.5 | 20800 | 0.4116 | 0.4967 |
| 0.5926 | 3.57 | 21200 | 0.4066 | 0.4843 |
| 0.5836 | 3.64 | 21600 | 0.3956 | 0.4791 |
| 0.588 | 3.7 | 22000 | 0.3941 | 0.4729 |
| 0.588 | 3.77 | 22400 | 0.3972 | 0.4799 |
| 0.5739 | 3.84 | 22800 | 0.4018 | 0.4790 |
| 0.5778 | 3.91 | 23200 | 0.3936 | 0.4750 |
| 0.5768 | 3.97 | 23600 | 0.3936 | 0.4751 |
| 0.5651 | 4.04 | 24000 | 0.3953 | 0.4706 |
| 0.5651 | 4.11 | 24400 | 0.3906 | 0.4659 |
| 0.5704 | 4.17 | 24800 | 0.3807 | 0.4557 |
| 0.5594 | 4.24 | 25200 | 0.3817 | 0.4610 |
| 0.5509 | 4.31 | 25600 | 0.3755 | 0.4553 |
| 0.5439 | 4.38 | 26000 | 0.3705 | 0.4471 |
| 0.5439 | 4.44 | 26400 | 0.3744 | 0.4487 |
| 0.5426 | 4.51 | 26800 | 0.3716 | 0.4483 |
| 0.5393 | 4.58 | 27200 | 0.3600 | 0.4356 |
| 0.5408 | 4.65 | 27600 | 0.3573 | 0.4307 |
| 0.5327 | 4.71 | 28000 | 0.3638 | 0.4382 |
| 0.5327 | 4.78 | 28400 | 0.3587 | 0.4316 |
| 0.5324 | 4.85 | 28800 | 0.3598 | 0.4290 |
| 0.5378 | 4.91 | 29200 | 0.3508 | 0.4243 |
| 0.5246 | 4.98 | 29600 | 0.3522 | 0.4260 |
| 0.5284 | 5.05 | 30000 | 0.3520 | 0.4268 |
| 0.5284 | 5.12 | 30400 | 0.3506 | 0.4224 |
| 0.5154 | 5.18 | 30800 | 0.3556 | 0.4223 |
| 0.5138 | 5.25 | 31200 | 0.3526 | 0.4276 |
| 0.51 | 5.32 | 31600 | 0.3440 | 0.4220 |
| 0.5065 | 5.39 | 32000 | 0.3367 | 0.4120 |
| 0.5065 | 5.45 | 32400 | 0.3406 | 0.4136 |
| 0.5087 | 5.52 | 32800 | 0.3370 | 0.4125 |
| 0.503 | 5.59 | 33200 | 0.3387 | 0.4134 |
| 0.5085 | 5.66 | 33600 | 0.3346 | 0.4068 |
| 0.5044 | 5.72 | 34000 | 0.3325 | 0.4057 |
| 0.5044 | 5.79 | 34400 | 0.3304 | 0.4026 |
| 0.4879 | 5.86 | 34800 | 0.3274 | 0.4002 |
| 0.4924 | 5.92 | 35200 | 0.3286 | 0.3980 |
| 0.4991 | 5.99 | 35600 | 0.3231 | 0.3952 |
| 0.487 | 6.06 | 36000 | 0.3324 | 0.4005 |
| 0.487 | 6.13 | 36400 | 0.3264 | 0.3952 |
| 0.4754 | 6.19 | 36800 | 0.3234 | 0.3905 |
| 0.4683 | 6.26 | 37200 | 0.3149 | 0.3840 |
| 0.4653 | 6.33 | 37600 | 0.3122 | 0.3824 |
| 0.4667 | 6.4 | 38000 | 0.3151 | 0.3855 |
| 0.4667 | 6.46 | 38400 | 0.3217 | 0.3859 |
| 0.4628 | 6.53 | 38800 | 0.3085 | 0.3831 |
| 0.4644 | 6.6 | 39200 | 0.3121 | 0.3791 |
| 0.4612 | 6.67 | 39600 | 0.3093 | 0.3790 |
| 0.4552 | 6.73 | 40000 | 0.3087 | 0.3749 |
| 0.4552 | 6.8 | 40400 | 0.3027 | 0.3679 |
| 0.4544 | 6.87 | 40800 | 0.3048 | 0.3672 |
| 0.4507 | 6.93 | 41200 | 0.2963 | 0.3614 |
| 0.4489 | 7.0 | 41600 | 0.3086 | 0.3718 |
| 0.4367 | 7.07 | 42000 | 0.3100 | 0.3754 |
| 0.4367 | 7.14 | 42400 | 0.3057 | 0.3701 |
| 0.4376 | 7.2 | 42800 | 0.2930 | 0.3614 |
| 0.428 | 7.27 | 43200 | 0.2907 | 0.3516 |
| 0.4241 | 7.34 | 43600 | 0.2916 | 0.3590 |
| 0.4312 | 7.41 | 44000 | 0.2904 | 0.3523 |
| 0.4312 | 7.47 | 44400 | 0.2908 | 0.3476 |
| 0.4292 | 7.54 | 44800 | 0.2858 | 0.3467 |
| 0.426 | 7.61 | 45200 | 0.2864 | 0.3484 |
| 0.4225 | 7.68 | 45600 | 0.2820 | 0.3441 |
| 0.422 | 7.74 | 46000 | 0.2834 | 0.3441 |
| 0.422 | 7.81 | 46400 | 0.2784 | 0.3420 |
| 0.4158 | 7.88 | 46800 | 0.2814 | 0.3390 |
| 0.4139 | 7.94 | 47200 | 0.2777 | 0.3384 |
| 0.4076 | 8.01 | 47600 | 0.2741 | 0.3381 |
| 0.3997 | 8.08 | 48000 | 0.2738 | 0.3320 |
| 0.3997 | 8.15 | 48400 | 0.2720 | 0.3303 |
| 0.4009 | 8.21 | 48800 | 0.2705 | 0.3357 |
| 0.3928 | 8.28 | 49200 | 0.2708 | 0.3265 |
| 0.3923 | 8.35 | 49600 | 0.2678 | 0.3283 |
| 0.3897 | 8.42 | 50000 | 0.2649 | 0.3241 |
| 0.3897 | 8.48 | 50400 | 0.2640 | 0.3218 |
| 0.3879 | 8.55 | 50800 | 0.2616 | 0.3197 |
| 0.3805 | 8.62 | 51200 | 0.2599 | 0.3170 |
| 0.3874 | 8.69 | 51600 | 0.2592 | 0.3168 |
| 0.3799 | 8.75 | 52000 | 0.2589 | 0.3157 |
| 0.3799 | 8.82 | 52400 | 0.2566 | 0.3137 |
| 0.3834 | 8.89 | 52800 | 0.2552 | 0.3141 |
| 0.3811 | 8.95 | 53200 | 0.2523 | 0.3108 |
| 0.3821 | 9.02 | 53600 | 0.2539 | 0.3112 |
| 0.3636 | 9.09 | 54000 | 0.2529 | 0.3070 |
| 0.3636 | 9.16 | 54400 | 0.2500 | 0.3078 |
| 0.3706 | 9.22 | 54800 | 0.2510 | 0.3067 |
| 0.367 | 9.29 | 55200 | 0.2497 | 0.3069 |
| 0.3618 | 9.36 | 55600 | 0.2493 | 0.3043 |
| 0.3624 | 9.43 | 56000 | 0.2491 | 0.3040 |
| 0.3624 | 9.49 | 56400 | 0.2466 | 0.3016 |
| 0.3557 | 9.56 | 56800 | 0.2460 | 0.3014 |
| 0.3536 | 9.63 | 57200 | 0.2470 | 0.2997 |
| 0.3584 | 9.7 | 57600 | 0.2441 | 0.2989 |
| 0.3563 | 9.76 | 58000 | 0.2442 | 0.2970 |
| 0.3563 | 9.83 | 58400 | 0.2436 | 0.2966 |
| 0.3492 | 9.9 | 58800 | 0.2431 | 0.2967 |
| 0.3483 | 9.96 | 59200 | 0.2428 | 0.2960 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
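
As a minimal transcription sketch (the audio path is a placeholder, ffmpeg is assumed to be available for decoding, and the recording should be 16 kHz mono to match the XLS-R setup):

```python
from transformers import pipeline

# "dbdmg/wav2vec2-xls-r-1b-italian-robust" is the repo id of this card
asr = pipeline(
    "automatic-speech-recognition",
    model="dbdmg/wav2vec2-xls-r-1b-italian-robust",
)

# "sample_it.wav" is a placeholder path to an Italian recording
print(asr("sample_it.wav")["text"])
```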
|
{"language": ["it"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-1b - Italian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "it"}, "metrics": [{"type": "wer", "value": 32.74, "name": "Test WER"}, {"type": "cer", "value": 7.83, "name": "Test CER"}, {"type": "wer", "value": 19.55, "name": "Test WER (+LM)"}, {"type": "cer", "value": 5.59, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "it"}, "metrics": [{"type": "wer", "value": 43.23, "name": "Test WER"}, {"type": "cer", "value": 13.37, "name": "Test CER"}, {"type": "wer", "value": 27.51, "name": "Test WER (+LM)"}, {"type": "cer", "value": 10.69, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "it"}, "metrics": [{"type": "wer", "value": 51.12, "name": "Test WER"}]}]}]}
|
dbdmg/wav2vec2-xls-r-1b-italian-robust
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"it"
] |
TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #it #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
wav2vec2-xls-r-1b-italian-robust
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the Common Voice 7 & Libri Speech datasets.
It achieves the following results on the evaluation set:
* Loss: 0.2428
* Wer: 0.2960
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 32
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 10.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #it #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-italian-robust
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Italian splits of the following datasets:
- Mozilla Foundation Common Voice V7 dataset
- [LibriSpeech multilingual](http://www.openslr.org/94)
- [TED multilingual](https://www.openslr.org/100/)
- [Voxforge](http://www.voxforge.org/it/Downloads)
- [M-AILABS Speech Dataset](https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/)
- [EuroParl-ST](https://www.mllp.upv.es/europarl-st/)
- [EMOVO](http://voice.fub.it/activities/corpora/emovo/index.html)
- [MSPKA](http://www.mspkacorpus.it/)
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.06 | 400 | 0.7508 | 0.7354 |
| 2.3127 | 0.11 | 800 | 0.5888 | 0.5882 |
| 0.7256 | 0.17 | 1200 | 0.5121 | 0.5247 |
| 0.6692 | 0.22 | 1600 | 0.4774 | 0.5028 |
| 0.6384 | 0.28 | 2000 | 0.4832 | 0.4885 |
| 0.6384 | 0.33 | 2400 | 0.4410 | 0.4581 |
| 0.6199 | 0.39 | 2800 | 0.4160 | 0.4331 |
| 0.5972 | 0.44 | 3200 | 0.4136 | 0.4275 |
| 0.6048 | 0.5 | 3600 | 0.4362 | 0.4538 |
| 0.5627 | 0.55 | 4000 | 0.4313 | 0.4469 |
| 0.5627 | 0.61 | 4400 | 0.4425 | 0.4579 |
| 0.5855 | 0.66 | 4800 | 0.3859 | 0.4133 |
| 0.5702 | 0.72 | 5200 | 0.3974 | 0.4097 |
| 0.55 | 0.77 | 5600 | 0.3931 | 0.4134 |
| 0.5624 | 0.83 | 6000 | 0.3900 | 0.4126 |
| 0.5624 | 0.88 | 6400 | 0.3622 | 0.3899 |
| 0.5615 | 0.94 | 6800 | 0.3755 | 0.4067 |
| 0.5472 | 0.99 | 7200 | 0.3980 | 0.4284 |
| 0.5663 | 1.05 | 7600 | 0.3553 | 0.3782 |
| 0.5189 | 1.1 | 8000 | 0.3538 | 0.3726 |
| 0.5189 | 1.16 | 8400 | 0.3425 | 0.3624 |
| 0.518 | 1.21 | 8800 | 0.3431 | 0.3651 |
| 0.5399 | 1.27 | 9200 | 0.3442 | 0.3573 |
| 0.5303 | 1.32 | 9600 | 0.3241 | 0.3404 |
| 0.5043 | 1.38 | 10000 | 0.3175 | 0.3378 |
| 0.5043 | 1.43 | 10400 | 0.3265 | 0.3501 |
| 0.4968 | 1.49 | 10800 | 0.3539 | 0.3703 |
| 0.5102 | 1.54 | 11200 | 0.3323 | 0.3506 |
| 0.5008 | 1.6 | 11600 | 0.3188 | 0.3433 |
| 0.4996 | 1.65 | 12000 | 0.3162 | 0.3388 |
| 0.4996 | 1.71 | 12400 | 0.3353 | 0.3552 |
| 0.5007 | 1.76 | 12800 | 0.3152 | 0.3317 |
| 0.4956 | 1.82 | 13200 | 0.3207 | 0.3430 |
| 0.5205 | 1.87 | 13600 | 0.3239 | 0.3430 |
| 0.4829 | 1.93 | 14000 | 0.3134 | 0.3266 |
| 0.4829 | 1.98 | 14400 | 0.3039 | 0.3291 |
| 0.5251 | 2.04 | 14800 | 0.2944 | 0.3169 |
| 0.4872 | 2.09 | 15200 | 0.3061 | 0.3228 |
| 0.4805 | 2.15 | 15600 | 0.3034 | 0.3152 |
| 0.4949 | 2.2 | 16000 | 0.2896 | 0.3066 |
| 0.4949 | 2.26 | 16400 | 0.3059 | 0.3344 |
| 0.468 | 2.31 | 16800 | 0.2932 | 0.3111 |
| 0.4637 | 2.37 | 17200 | 0.2890 | 0.3074 |
| 0.4638 | 2.42 | 17600 | 0.2893 | 0.3112 |
| 0.4728 | 2.48 | 18000 | 0.2832 | 0.3013 |
| 0.4728 | 2.54 | 18400 | 0.2921 | 0.3065 |
| 0.456 | 2.59 | 18800 | 0.2961 | 0.3104 |
| 0.4628 | 2.65 | 19200 | 0.2886 | 0.3109 |
| 0.4534 | 2.7 | 19600 | 0.2828 | 0.3020 |
| 0.4578 | 2.76 | 20000 | 0.2805 | 0.3026 |
| 0.4578 | 2.81 | 20400 | 0.2796 | 0.2987 |
| 0.4702 | 2.87 | 20800 | 0.2748 | 0.2906 |
| 0.4487 | 2.92 | 21200 | 0.2819 | 0.3008 |
| 0.4411 | 2.98 | 21600 | 0.2722 | 0.2868 |
| 0.4631 | 3.03 | 22000 | 0.2814 | 0.2974 |
| 0.4631 | 3.09 | 22400 | 0.2762 | 0.2894 |
| 0.4591 | 3.14 | 22800 | 0.2802 | 0.2980 |
| 0.4349 | 3.2 | 23200 | 0.2748 | 0.2951 |
| 0.4339 | 3.25 | 23600 | 0.2792 | 0.2927 |
| 0.4254 | 3.31 | 24000 | 0.2712 | 0.2911 |
| 0.4254 | 3.36 | 24400 | 0.2719 | 0.2892 |
| 0.4317 | 3.42 | 24800 | 0.2686 | 0.2861 |
| 0.4282 | 3.47 | 25200 | 0.2632 | 0.2861 |
| 0.4262 | 3.53 | 25600 | 0.2633 | 0.2817 |
| 0.4162 | 3.58 | 26000 | 0.2561 | 0.2765 |
| 0.4162 | 3.64 | 26400 | 0.2613 | 0.2847 |
| 0.414 | 3.69 | 26800 | 0.2679 | 0.2824 |
| 0.4132 | 3.75 | 27200 | 0.2569 | 0.2813 |
| 0.405 | 3.8 | 27600 | 0.2589 | 0.2785 |
| 0.4128 | 3.86 | 28000 | 0.2611 | 0.2714 |
| 0.4128 | 3.91 | 28400 | 0.2548 | 0.2731 |
| 0.4174 | 3.97 | 28800 | 0.2574 | 0.2716 |
| 0.421 | 4.02 | 29200 | 0.2529 | 0.2700 |
| 0.4109 | 4.08 | 29600 | 0.2547 | 0.2682 |
| 0.4027 | 4.13 | 30000 | 0.2578 | 0.2758 |
| 0.4027 | 4.19 | 30400 | 0.2511 | 0.2715 |
| 0.4075 | 4.24 | 30800 | 0.2507 | 0.2601 |
| 0.3947 | 4.3 | 31200 | 0.2552 | 0.2711 |
| 0.4042 | 4.35 | 31600 | 0.2530 | 0.2695 |
| 0.3907 | 4.41 | 32000 | 0.2543 | 0.2738 |
| 0.3907 | 4.46 | 32400 | 0.2491 | 0.2629 |
| 0.3895 | 4.52 | 32800 | 0.2471 | 0.2611 |
| 0.3901 | 4.57 | 33200 | 0.2404 | 0.2559 |
| 0.3818 | 4.63 | 33600 | 0.2378 | 0.2583 |
| 0.3831 | 4.68 | 34000 | 0.2341 | 0.2499 |
| 0.3831 | 4.74 | 34400 | 0.2379 | 0.2560 |
| 0.3808 | 4.79 | 34800 | 0.2418 | 0.2553 |
| 0.4015 | 4.85 | 35200 | 0.2378 | 0.2565 |
| 0.407 | 4.9 | 35600 | 0.2375 | 0.2535 |
| 0.38 | 4.96 | 36000 | 0.2329 | 0.2451 |
| 0.38 | 5.02 | 36400 | 0.2541 | 0.2737 |
| 0.3753 | 5.07 | 36800 | 0.2475 | 0.2580 |
| 0.3701 | 5.13 | 37200 | 0.2356 | 0.2484 |
| 0.3627 | 5.18 | 37600 | 0.2422 | 0.2552 |
| 0.3652 | 5.24 | 38000 | 0.2353 | 0.2518 |
| 0.3652 | 5.29 | 38400 | 0.2328 | 0.2452 |
| 0.3667 | 5.35 | 38800 | 0.2358 | 0.2478 |
| 0.3711 | 5.4 | 39200 | 0.2340 | 0.2463 |
| 0.361 | 5.46 | 39600 | 0.2375 | 0.2452 |
| 0.3655 | 5.51 | 40000 | 0.2292 | 0.2387 |
| 0.3655 | 5.57 | 40400 | 0.2330 | 0.2432 |
| 0.3637 | 5.62 | 40800 | 0.2242 | 0.2396 |
| 0.3516 | 5.68 | 41200 | 0.2284 | 0.2394 |
| 0.3498 | 5.73 | 41600 | 0.2254 | 0.2343 |
| 0.3626 | 5.79 | 42000 | 0.2191 | 0.2318 |
| 0.3626 | 5.84 | 42400 | 0.2261 | 0.2399 |
| 0.3719 | 5.9 | 42800 | 0.2261 | 0.2411 |
| 0.3563 | 5.95 | 43200 | 0.2259 | 0.2416 |
| 0.3574 | 6.01 | 43600 | 0.2148 | 0.2249 |
| 0.3339 | 6.06 | 44000 | 0.2173 | 0.2237 |
| 0.3339 | 6.12 | 44400 | 0.2133 | 0.2238 |
| 0.3303 | 6.17 | 44800 | 0.2193 | 0.2297 |
| 0.331 | 6.23 | 45200 | 0.2122 | 0.2205 |
| 0.3372 | 6.28 | 45600 | 0.2083 | 0.2215 |
| 0.3427 | 6.34 | 46000 | 0.2079 | 0.2163 |
| 0.3427 | 6.39 | 46400 | 0.2072 | 0.2154 |
| 0.3215 | 6.45 | 46800 | 0.2067 | 0.2170 |
| 0.3246 | 6.5 | 47200 | 0.2089 | 0.2183 |
| 0.3217 | 6.56 | 47600 | 0.2030 | 0.2130 |
| 0.3309 | 6.61 | 48000 | 0.2020 | 0.2123 |
| 0.3309 | 6.67 | 48400 | 0.2054 | 0.2133 |
| 0.3343 | 6.72 | 48800 | 0.2013 | 0.2128 |
| 0.3213 | 6.78 | 49200 | 0.1971 | 0.2064 |
| 0.3145 | 6.83 | 49600 | 0.2029 | 0.2107 |
| 0.3274 | 6.89 | 50000 | 0.2038 | 0.2136 |
| 0.3274 | 6.94 | 50400 | 0.1991 | 0.2064 |
| 0.3202 | 7.0 | 50800 | 0.1970 | 0.2083 |
| 0.314 | 7.05 | 51200 | 0.1970 | 0.2035 |
| 0.3031 | 7.11 | 51600 | 0.1943 | 0.2053 |
| 0.3004 | 7.16 | 52000 | 0.1942 | 0.1985 |
| 0.3004 | 7.22 | 52400 | 0.1941 | 0.2003 |
| 0.3029 | 7.27 | 52800 | 0.1936 | 0.2008 |
| 0.2915 | 7.33 | 53200 | 0.1935 | 0.1995 |
| 0.3005 | 7.38 | 53600 | 0.1943 | 0.2032 |
| 0.2984 | 7.44 | 54000 | 0.1913 | 0.1978 |
| 0.2984 | 7.5 | 54400 | 0.1907 | 0.1965 |
| 0.2978 | 7.55 | 54800 | 0.1881 | 0.1958 |
| 0.2944 | 7.61 | 55200 | 0.1887 | 0.1966 |
| 0.3004 | 7.66 | 55600 | 0.1870 | 0.1930 |
| 0.3099 | 7.72 | 56000 | 0.1906 | 0.1976 |
| 0.3099 | 7.77 | 56400 | 0.1856 | 0.1939 |
| 0.2917 | 7.83 | 56800 | 0.1883 | 0.1961 |
| 0.2924 | 7.88 | 57200 | 0.1864 | 0.1930 |
| 0.3061 | 7.94 | 57600 | 0.1831 | 0.1872 |
| 0.2834 | 7.99 | 58000 | 0.1835 | 0.1896 |
| 0.2834 | 8.05 | 58400 | 0.1828 | 0.1875 |
| 0.2807 | 8.1 | 58800 | 0.1820 | 0.1874 |
| 0.2765 | 8.16 | 59200 | 0.1807 | 0.1869 |
| 0.2737 | 8.21 | 59600 | 0.1810 | 0.1848 |
| 0.2722 | 8.27 | 60000 | 0.1795 | 0.1829 |
| 0.2722 | 8.32 | 60400 | 0.1785 | 0.1826 |
| 0.272 | 8.38 | 60800 | 0.1802 | 0.1836 |
| 0.268 | 8.43 | 61200 | 0.1771 | 0.1813 |
| 0.2695 | 8.49 | 61600 | 0.1773 | 0.1821 |
| 0.2686 | 8.54 | 62000 | 0.1756 | 0.1814 |
| 0.2686 | 8.6 | 62400 | 0.1740 | 0.1770 |
| 0.2687 | 8.65 | 62800 | 0.1748 | 0.1769 |
| 0.2686 | 8.71 | 63200 | 0.1734 | 0.1766 |
| 0.2683 | 8.76 | 63600 | 0.1722 | 0.1759 |
| 0.2686 | 8.82 | 64000 | 0.1719 | 0.1760 |
| 0.2686 | 8.87 | 64400 | 0.1720 | 0.1743 |
| 0.2626 | 8.93 | 64800 | 0.1696 | 0.1742 |
| 0.2587 | 8.98 | 65200 | 0.1690 | 0.1718 |
| 0.2554 | 9.04 | 65600 | 0.1704 | 0.1722 |
| 0.2537 | 9.09 | 66000 | 0.1702 | 0.1721 |
| 0.2537 | 9.15 | 66400 | 0.1696 | 0.1717 |
| 0.2511 | 9.2 | 66800 | 0.1685 | 0.1701 |
| 0.2473 | 9.26 | 67200 | 0.1696 | 0.1704 |
| 0.2458 | 9.31 | 67600 | 0.1686 | 0.1698 |
| 0.2476 | 9.37 | 68000 | 0.1675 | 0.1687 |
| 0.2476 | 9.42 | 68400 | 0.1659 | 0.1673 |
| 0.2463 | 9.48 | 68800 | 0.1664 | 0.1674 |
| 0.2481 | 9.53 | 69200 | 0.1661 | 0.1670 |
| 0.2411 | 9.59 | 69600 | 0.1658 | 0.1663 |
| 0.2445 | 9.64 | 70000 | 0.1652 | 0.1660 |
| 0.2445 | 9.7 | 70400 | 0.1646 | 0.1654 |
| 0.2407 | 9.75 | 70800 | 0.1646 | 0.1641 |
| 0.2483 | 9.81 | 71200 | 0.1641 | 0.1641 |
| 0.245 | 9.86 | 71600 | 0.1635 | 0.1643 |
| 0.2402 | 9.92 | 72000 | 0.1638 | 0.1634 |
| 0.2402 | 9.98 | 72400 | 0.1633 | 0.1636 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
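
A lower-level sketch of greedy CTC decoding with the processor and model (the audio path is a placeholder; `librosa` is only used here to load and resample the file to the 16 kHz rate the model expects, and a CTC head is assumed, as the ASR tag suggests):

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "dbdmg/wav2vec2-xls-r-300m-italian-robust"  # repo id of this card
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample_it.wav" is a placeholder path; resample to 16 kHz mono on load
speech, _ = librosa.load("sample_it.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```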
|
{"language": "it", "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "XLS-R-300m - Italian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "it"}, "metrics": [{"type": "wer", "value": 17.17, "name": "Test WER"}, {"type": "cer", "value": 4.27, "name": "Test CER"}, {"type": "wer", "value": 12.07, "name": "Test WER (+LM)"}, {"type": "cer", "value": 3.52, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "it"}, "metrics": [{"type": "wer", "value": 24.29, "name": "Test WER"}, {"type": "cer", "value": 8.1, "name": "Test CER"}, {"type": "wer", "value": 17.36, "name": "Test WER (+LM)"}, {"type": "cer", "value": 7.94, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "it"}, "metrics": [{"type": "wer", "value": 33.66, "name": "Test WER"}]}]}]}
|
dbdmg/wav2vec2-xls-r-300m-italian-robust
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"it"
] |
TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #it #dataset-mozilla-foundation/common_voice_7_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
wav2vec2-xls-r-300m-italian-robust
==================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the Italian splits of the following datasets:
* Mozilla Foundation Common Voice V7 dataset
* LibriSpeech multilingual
* TED multilingual
* Voxforge
* M-AILABS Speech Dataset
* EuroParl-ST
* EMOVO
* MSPKA
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 10.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #it #dataset-mozilla-foundation/common_voice_7_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-italian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - IT dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.1710
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5.0
- mixed_precision_training: Native AMP
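For readers who want to reproduce a similar run, the list above maps naturally onto Hugging Face `TrainingArguments`. The sketch below is illustrative only: the output directory is a placeholder, the batch size is per device, and the Adam betas/epsilon and linear scheduler listed above are the `Trainer` defaults.

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above; not the exact training script.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-italian",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5.0,
    fp16=True,  # Native AMP
)
```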
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.04 | 100 | inf | 1.0 |
| No log | 0.09 | 200 | inf | 0.9983 |
| No log | 0.13 | 300 | inf | 0.7672 |
| No log | 0.18 | 400 | inf | 0.6919 |
| 2.9929 | 0.22 | 500 | inf | 0.6266 |
| 2.9929 | 0.26 | 600 | inf | 0.5513 |
| 2.9929 | 0.31 | 700 | inf | 0.5081 |
| 2.9929 | 0.35 | 800 | inf | 0.4945 |
| 2.9929 | 0.39 | 900 | inf | 0.4720 |
| 0.5311 | 0.44 | 1000 | inf | 0.4387 |
| 0.5311 | 0.48 | 1100 | inf | 0.4411 |
| 0.5311 | 0.53 | 1200 | inf | 0.4429 |
| 0.5311 | 0.57 | 1300 | inf | 0.4322 |
| 0.5311 | 0.61 | 1400 | inf | 0.4532 |
| 0.4654 | 0.66 | 1500 | inf | 0.4492 |
| 0.4654 | 0.7 | 1600 | inf | 0.3879 |
| 0.4654 | 0.75 | 1700 | inf | 0.3836 |
| 0.4654 | 0.79 | 1800 | inf | 0.3743 |
| 0.4654 | 0.83 | 1900 | inf | 0.3687 |
| 0.4254 | 0.88 | 2000 | inf | 0.3793 |
| 0.4254 | 0.92 | 2100 | inf | 0.3766 |
| 0.4254 | 0.97 | 2200 | inf | 0.3705 |
| 0.4254 | 1.01 | 2300 | inf | 0.3272 |
| 0.4254 | 1.05 | 2400 | inf | 0.3185 |
| 0.3997 | 1.1 | 2500 | inf | 0.3244 |
| 0.3997 | 1.14 | 2600 | inf | 0.3082 |
| 0.3997 | 1.18 | 2700 | inf | 0.3040 |
| 0.3997 | 1.23 | 2800 | inf | 0.3028 |
| 0.3997 | 1.27 | 2900 | inf | 0.3112 |
| 0.3668 | 1.32 | 3000 | inf | 0.3110 |
| 0.3668 | 1.36 | 3100 | inf | 0.3067 |
| 0.3668 | 1.4 | 3200 | inf | 0.2961 |
| 0.3668 | 1.45 | 3300 | inf | 0.3081 |
| 0.3668 | 1.49 | 3400 | inf | 0.2936 |
| 0.3645 | 1.54 | 3500 | inf | 0.3037 |
| 0.3645 | 1.58 | 3600 | inf | 0.2974 |
| 0.3645 | 1.62 | 3700 | inf | 0.3010 |
| 0.3645 | 1.67 | 3800 | inf | 0.2985 |
| 0.3645 | 1.71 | 3900 | inf | 0.2976 |
| 0.3624 | 1.76 | 4000 | inf | 0.2928 |
| 0.3624 | 1.8 | 4100 | inf | 0.2860 |
| 0.3624 | 1.84 | 4200 | inf | 0.2922 |
| 0.3624 | 1.89 | 4300 | inf | 0.2866 |
| 0.3624 | 1.93 | 4400 | inf | 0.2776 |
| 0.3527 | 1.97 | 4500 | inf | 0.2792 |
| 0.3527 | 2.02 | 4600 | inf | 0.2858 |
| 0.3527 | 2.06 | 4700 | inf | 0.2767 |
| 0.3527 | 2.11 | 4800 | inf | 0.2824 |
| 0.3527 | 2.15 | 4900 | inf | 0.2799 |
| 0.3162 | 2.19 | 5000 | inf | 0.2673 |
| 0.3162 | 2.24 | 5100 | inf | 0.2962 |
| 0.3162 | 2.28 | 5200 | inf | 0.2736 |
| 0.3162 | 2.33 | 5300 | inf | 0.2652 |
| 0.3162 | 2.37 | 5400 | inf | 0.2551 |
| 0.3063 | 2.41 | 5500 | inf | 0.2680 |
| 0.3063 | 2.46 | 5600 | inf | 0.2558 |
| 0.3063 | 2.5 | 5700 | inf | 0.2598 |
| 0.3063 | 2.54 | 5800 | inf | 0.2518 |
| 0.3063 | 2.59 | 5900 | inf | 0.2541 |
| 0.2913 | 2.63 | 6000 | inf | 0.2507 |
| 0.2913 | 2.68 | 6100 | inf | 0.2500 |
| 0.2913 | 2.72 | 6200 | inf | 0.2435 |
| 0.2913 | 2.76 | 6300 | inf | 0.2376 |
| 0.2913 | 2.81 | 6400 | inf | 0.2348 |
| 0.2797 | 2.85 | 6500 | inf | 0.2512 |
| 0.2797 | 2.9 | 6600 | inf | 0.2382 |
| 0.2797 | 2.94 | 6700 | inf | 0.2523 |
| 0.2797 | 2.98 | 6800 | inf | 0.2522 |
| 0.2797 | 3.03 | 6900 | inf | 0.2409 |
| 0.2766 | 3.07 | 7000 | inf | 0.2453 |
| 0.2766 | 3.12 | 7100 | inf | 0.2326 |
| 0.2766 | 3.16 | 7200 | inf | 0.2286 |
| 0.2766 | 3.2 | 7300 | inf | 0.2342 |
| 0.2766 | 3.25 | 7400 | inf | 0.2305 |
| 0.2468 | 3.29 | 7500 | inf | 0.2238 |
| 0.2468 | 3.33 | 7600 | inf | 0.2321 |
| 0.2468 | 3.38 | 7700 | inf | 0.2305 |
| 0.2468 | 3.42 | 7800 | inf | 0.2174 |
| 0.2468 | 3.47 | 7900 | inf | 0.2201 |
| 0.2439 | 3.51 | 8000 | inf | 0.2133 |
| 0.2439 | 3.55 | 8100 | inf | 0.2217 |
| 0.2439 | 3.6 | 8200 | inf | 0.2189 |
| 0.2439 | 3.64 | 8300 | inf | 0.2105 |
| 0.2439 | 3.69 | 8400 | inf | 0.2118 |
| 0.2357 | 3.73 | 8500 | inf | 0.2093 |
| 0.2357 | 3.77 | 8600 | inf | 0.2103 |
| 0.2357 | 3.82 | 8700 | inf | 0.2035 |
| 0.2357 | 3.86 | 8800 | inf | 0.2019 |
| 0.2357 | 3.91 | 8900 | inf | 0.2032 |
| 0.2217 | 3.95 | 9000 | inf | 0.2056 |
| 0.2217 | 3.99 | 9100 | inf | 0.2022 |
| 0.2217 | 4.04 | 9200 | inf | 0.1932 |
| 0.2217 | 4.08 | 9300 | inf | 0.1935 |
| 0.2217 | 4.12 | 9400 | inf | 0.1906 |
| 0.2025 | 4.17 | 9500 | inf | 0.1879 |
| 0.2025 | 4.21 | 9600 | inf | 0.1882 |
| 0.2025 | 4.26 | 9700 | inf | 0.1854 |
| 0.2025 | 4.3 | 9800 | inf | 0.1865 |
| 0.2025 | 4.34 | 9900 | inf | 0.1844 |
| 0.1869 | 4.39 | 10000 | inf | 0.1822 |
| 0.1869 | 4.43 | 10100 | inf | 0.1815 |
| 0.1869 | 4.48 | 10200 | inf | 0.1812 |
| 0.1869 | 4.52 | 10300 | inf | 0.1792 |
| 0.1869 | 4.56 | 10400 | inf | 0.1797 |
| 0.1863 | 4.61 | 10500 | inf | 0.1774 |
| 0.1863 | 4.65 | 10600 | inf | 0.1767 |
| 0.1863 | 4.7 | 10700 | inf | 0.1765 |
| 0.1863 | 4.74 | 10800 | inf | 0.1753 |
| 0.1863 | 4.78 | 10900 | inf | 0.1731 |
| 0.178 | 4.83 | 11000 | inf | 0.1727 |
| 0.178 | 4.87 | 11100 | inf | 0.1724 |
| 0.178 | 4.91 | 11200 | inf | 0.1722 |
| 0.178 | 4.96 | 11300 | inf | 0.1712 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
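A hedged sketch of how the reported WER could be spot-checked on a slice of the evaluation data follows. It assumes access to the gated Common Voice 7 dataset and uses plain lower-casing; the exact text normalisation behind the numbers above is not documented here.

```python
from datasets import Audio, load_dataset
from transformers import pipeline
import evaluate

asr = pipeline("automatic-speech-recognition", model="dbdmg/wav2vec2-xls-r-300m-italian")
wer_metric = evaluate.load("wer")

# Small slice of the Italian Common Voice 7 test split, resampled to the model's 16 kHz rate.
ds = load_dataset("mozilla-foundation/common_voice_7_0", "it", split="test[:100]")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

predictions = [asr(ex["audio"]["array"])["text"].lower() for ex in ds]
references = [ex["sentence"].lower() for ex in ds]
print(wer_metric.compute(predictions=predictions, references=references))
```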
|
{"language": ["it"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "XLS-R-300m - Italian", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "it"}, "metrics": [{"type": "wer", "value": 19.44, "name": "Test WER"}, {"type": "cer", "value": 4.47, "name": "Test CER"}, {"type": "wer", "value": 14.08, "name": "Test WER (+LM)"}, {"type": "cer", "value": 3.67, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "it"}, "metrics": [{"type": "wer", "value": 31.01, "name": "Test WER"}, {"type": "cer", "value": 9.27, "name": "Test CER"}, {"type": "wer", "value": 22.09, "name": "Test WER (+LM)"}, {"type": "cer", "value": 7.9, "name": "Test CER (+LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "it"}, "metrics": [{"type": "wer", "value": 38.07, "name": "Test WER"}]}]}]}
|
dbdmg/wav2vec2-xls-r-300m-italian
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_7_0",
"robust-speech-event",
"it",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"it"
] |
TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #it #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
wav2vec2-xls-r-300m-italian
===========================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - IT dataset.
It achieves the following results on the evaluation set:
* Loss: inf
* Wer: 0.1710
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 64
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 5.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #it #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
text2text-generation
|
transformers
|
# algebra_linear_1d
---
language: en
datasets:
- algebra_linear_1d
---
This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on [math_dataset/algebra_linear_1d](https://www.tensorflow.org/datasets/catalog/math_dataset#mathdatasetalgebra_linear_1d_default_config) for the task of solving **1d linear algebra equations**.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/algebra_linear_1d")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/algebra_linear_1d")
```
You can then use this model to solve 1d linear equations and get the numeric answer.
```python
query = "Solve 0 = 1026*x - 2474 + 46592 for x"
input_text = f"{query} </s>"
features = tokenizer([input_text], return_tensors='pt')
model.to('cuda')
output = model.generate(input_ids=features['input_ids'].cuda(),
attention_mask=features['attention_mask'].cuda())
tokenizer.decode(output[0])
# <pad> -41</s>
```
More examples:
+ Solve 1112*r + 1418*r - 5220 = 587*r - 28536 for r.
+ Answer: -12 Pred: -12
----
+ Solve -119*k + 6*k - 117 - 352 = 322 for k.
+ Answer: -7 Pred: -7
----
+ Solve -547 = -62*t + 437 - 798 for t.
+ Answer: 3 Pred: 3
----
+ Solve 3*j - 3*j + 0*j - 4802 = 98*j for j.
+ Answer: -49 Pred: -49
----
+ Solve 3047*n - 6130*n - 1700 = -3049*n for n.
+ Answer: -50 Pred: -50
----
+ Solve 121*i + 1690 = 76*i - 128*i + 133 for i.
+ Answer: -9 Pred: -9
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/MathLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
{}
|
dbernsohn/algebra_linear_1d
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# algebra_linear_1d
---
language: en
datasets:
- algebra_linear_1d
---
This is a t5-small fine-tuned version on the math_dataset/algebra_linear_1d for solving algebra 1d equations mission.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
You can then use this model to solve algebra 1d equations into numbers.
Another examples:
+ Solve 1112*r + 1418*r - 5220 = 587*r - 28536 for r.
+ Answer: -12 Pred: -12
----
+ Solve -119*k + 6*k - 117 - 352 = 322 for k.
+ Answer: -7 Pred: -7
----
+ Solve -547 = -62*t + 437 - 798 for t.
+ Answer: 3 Pred: 3
----
+ Solve 3*j - 3*j + 0*j - 4802 = 98*j for j.
+ Answer: -49 Pred: -49
----
+ Solve 3047*n - 6130*n - 1700 = -3049*n for n.
+ Answer: -50 Pred: -50
----
+ Solve 121*i + 1690 = 76*i - 128*i + 133 for i.
+ Answer: -9 Pred: -9
The whole training process and hyperparameters are in my GitHub repo
> Created by Dor Bernsohn
|
[
"# algebra_linear_1d\n---\nlanguage: en\ndatasets:\n- algebra_linear_1d\n---\n\nThis is a t5-small fine-tuned version on the math_dataset/algebra_linear_1d for solving algebra 1d equations mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to solve algebra 1d equations into numbers.\n\n\n\nAnother examples:\n\n+ Solve 1112*r + 1418*r - 5220 = 587*r - 28536 for r. \n+ Answer: -12 Pred: -12\n----\n+ Solve -119*k + 6*k - 117 - 352 = 322 for k. \n+ Answer: -7 Pred: -7\n----\n+ Solve -547 = -62*t + 437 - 798 for t. \n+ Answer: 3 Pred: 3\n----\n+ Solve 3*j - 3*j + 0*j - 4802 = 98*j for j. \n+ Answer: -49 Pred: -49\n----\n+ Solve 3047*n - 6130*n - 1700 = -3049*n for n. \n+ Answer: -50 Pred: -50\n----\n+ Solve 121*i + 1690 = 76*i - 128*i + 133 for i. \n+ Answer: -9 Pred: -9\n\nThe whole training process and hyperparameters are in my GitHub repo\n> Created by Dor Bernsohn"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# algebra_linear_1d\n---\nlanguage: en\ndatasets:\n- algebra_linear_1d\n---\n\nThis is a t5-small fine-tuned version on the math_dataset/algebra_linear_1d for solving algebra 1d equations mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to solve algebra 1d equations into numbers.\n\n\n\nAnother examples:\n\n+ Solve 1112*r + 1418*r - 5220 = 587*r - 28536 for r. \n+ Answer: -12 Pred: -12\n----\n+ Solve -119*k + 6*k - 117 - 352 = 322 for k. \n+ Answer: -7 Pred: -7\n----\n+ Solve -547 = -62*t + 437 - 798 for t. \n+ Answer: 3 Pred: 3\n----\n+ Solve 3*j - 3*j + 0*j - 4802 = 98*j for j. \n+ Answer: -49 Pred: -49\n----\n+ Solve 3047*n - 6130*n - 1700 = -3049*n for n. \n+ Answer: -50 Pred: -50\n----\n+ Solve 121*i + 1690 = 76*i - 128*i + 133 for i. \n+ Answer: -9 Pred: -9\n\nThe whole training process and hyperparameters are in my GitHub repo\n> Created by Dor Bernsohn"
] |
text2text-generation
|
transformers
|
# algebra_linear_1d_composed
---
language: en
datasets:
- algebra_linear_1d_composed
---
This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on [math_dataset/algebra_linear_1d_composed](https://www.tensorflow.org/datasets/catalog/math_dataset#mathdatasetalgebra_linear_1d_composed) for the task of solving **composed 1d linear algebra equations**.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/algebra_linear_1d_composed")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/algebra_linear_1d_composed")
```
You can then use this model to solve composed 1d linear equations and get the numeric answer.
```python
query = "Suppose -d = 5 - 16. Let b = -579 + 584. Solve -b*c + 36 = d for c."
input_text = f"{query} </s>"
features = tokenizer([input_text], return_tensors='pt')
model.to('cuda')
output = model.generate(input_ids=features['input_ids'].cuda(),
attention_mask=features['attention_mask'].cuda())
tokenizer.decode(output[0])
# <pad> 5</s>
```
More examples:
+ Suppose -d = 5 - 16. Let b = -579 + 584. Solve -b*c + 36 = d for c.
+ Answer: 5 Pred: 5
----
+ Suppose 3*v - l + 9 = 4*v, 0 = -5*v + 5*l - 5. Let f(s) = 3*s**2 + 1. Let g be f(-1). Suppose 63 = g*x - x. Solve -5*i + v + x = 0 for i.
+ Answer: 5 Pred: 5
----
+ Let w be 2 - (0 - 0)/(-2). Let f = -110 - -110. Suppose f*m - 4*m + 3*m = 0. Solve m*v = -w*v for v.
+ Answer: 0 Pred: 0
----
+ Let a(h) = -34*h**3 - 15 + 3*h + 36*h**3 + 8*h**2 + 5*h**2. Let r be a(-6). Solve 2*z = r*z for z.
+ Answer: 0 Pred: 0
----
+ Suppose -3*p + 24 = -3*c, 0*c + 6 = -2*c. Suppose -67 = 4*i + 289. Let t = i + 94. Solve t = 2*y - p for y.
+ Answer: 5 Pred: 5
----
+ Let b = -36 + 53. Suppose -7*u - b = -73. Solve j + 3*j = -u for j.
+ Answer: -2 Pred: -2
----
+ Let h be 8*((-2)/2 + 14)*1. Let y = -101 + h. Solve y*p = -p for p.
+ Answer: 0 Pred: 0
----
+ Let b = 178 - 79. Let s be 9/(-1 - 2 - b/(-22)). Solve s = -k - k for k.
+ Answer: -3 Pred: -3
----
+ Suppose 31 = -4*z + 11, -3*k - 5*z - 22 = 0. Solve 23 = -11*p + k for p.
+ Answer: -2 Pred: -2
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/MathLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
{}
|
dbernsohn/algebra_linear_1d_composed
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# algebra_linear_1d_composed
---
language: en
datasets:
- algebra_linear_1d_composed
---
This is a t5-small fine-tuned version on the math_dataset/algebra_linear_1d_composed for solving algebra linear 1d composed equations mission.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
You can then use this model to solve algebra 1d equations into numbers.
Another examples:
+ Suppose -d = 5 - 16. Let b = -579 + 584. Solve -b*c + 36 = d for c.
+ Answer: 5 Pred: 5
----
+ Suppose 3*v - l + 9 = 4*v, 0 = -5*v + 5*l - 5. Let f(s) = 3*s2 + 1. Let g be f(-1). Suppose 63 = g*x - x. Solve -5*i + v + x = 0 for i.
+ Answer: 5 Pred: 5
----
+ Let w be 2 - (0 - 0)/(-2). Let f = -110 - -110. Suppose f*m - 4*m + 3*m = 0. Solve m*v = -w*v for v.
+ Answer: 0 Pred: 0
----
+ Let a(h) = -34*h3 - 15 + 3*h + 36*h3 + 8*h2 + 5*h2. Let r be a(-6). Solve 2*z = r*z for z.
+ Answer: 0 Pred: 0
----
+ Suppose -3*p + 24 = -3*c, 0*c + 6 = -2*c. Suppose -67 = 4*i + 289. Let t = i + 94. Solve t = 2*y - p for y.
+ Answer: 5 Pred: 5
----
+ Let b = -36 + 53. Suppose -7*u - b = -73. Solve j + 3*j = -u for j.
+ Answer: -2 Pred: -2
----
+ Let h be 8*((-2)/2 + 14)*1. Let y = -101 + h. Solve y*p = -p for p.
+ Answer: 0 Pred: 0
----
+ Let b = 178 - 79. Let s be 9/(-1 - 2 - b/(-22)). Solve s = -k - k for k.
+ Answer: -3 Pred: -3
----
+ Suppose 31 = -4*z + 11, -3*k - 5*z - 22 = 0. Solve 23 = -11*p + k for p.
+ Answer: -2 Pred: -2
The whole training process and hyperparameters are in my GitHub repo
> Created by Dor Bernsohn
|
[
"# algebra_linear_1d_composed\n---\nlanguage: en\ndatasets:\n- algebra_linear_1d_composed\n---\n\nThis is a t5-small fine-tuned version on the math_dataset/algebra_linear_1d_composed for solving algebra linear 1d composed equations mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to solve algebra 1d equations into numbers.\n\n\n\nAnother examples:\n\n+ Suppose -d = 5 - 16. Let b = -579 + 584. Solve -b*c + 36 = d for c.\n+ Answer: 5 Pred: 5\n----\n+ Suppose 3*v - l + 9 = 4*v, 0 = -5*v + 5*l - 5. Let f(s) = 3*s2 + 1. Let g be f(-1). Suppose 63 = g*x - x. Solve -5*i + v + x = 0 for i.\n+ Answer: 5 Pred: 5\n----\n+ Let w be 2 - (0 - 0)/(-2). Let f = -110 - -110. Suppose f*m - 4*m + 3*m = 0. Solve m*v = -w*v for v.\n+ Answer: 0 Pred: 0\n----\n+ Let a(h) = -34*h3 - 15 + 3*h + 36*h3 + 8*h2 + 5*h2. Let r be a(-6). Solve 2*z = r*z for z.\n+ Answer: 0 Pred: 0\n----\n+ Suppose -3*p + 24 = -3*c, 0*c + 6 = -2*c. Suppose -67 = 4*i + 289. Let t = i + 94. Solve t = 2*y - p for y.\n+ Answer: 5 Pred: 5\n----\n+ Let b = -36 + 53. Suppose -7*u - b = -73. Solve j + 3*j = -u for j.\n+ Answer: -2 Pred: -2\n----\n+ Let h be 8*((-2)/2 + 14)*1. Let y = -101 + h. Solve y*p = -p for p.\n+ Answer: 0 Pred: 0\n----\n+ Let b = 178 - 79. Let s be 9/(-1 - 2 - b/(-22)). Solve s = -k - k for k.\n+ Answer: -3 Pred: -3\n----\n+ Suppose 31 = -4*z + 11, -3*k - 5*z - 22 = 0. Solve 23 = -11*p + k for p.\n+ Answer: -2 Pred: -2\n\nThe whole training process and hyperparameters are in my GitHub repo\n> Created by Dor Bernsohn"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# algebra_linear_1d_composed\n---\nlanguage: en\ndatasets:\n- algebra_linear_1d_composed\n---\n\nThis is a t5-small fine-tuned version on the math_dataset/algebra_linear_1d_composed for solving algebra linear 1d composed equations mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to solve algebra 1d equations into numbers.\n\n\n\nAnother examples:\n\n+ Suppose -d = 5 - 16. Let b = -579 + 584. Solve -b*c + 36 = d for c.\n+ Answer: 5 Pred: 5\n----\n+ Suppose 3*v - l + 9 = 4*v, 0 = -5*v + 5*l - 5. Let f(s) = 3*s2 + 1. Let g be f(-1). Suppose 63 = g*x - x. Solve -5*i + v + x = 0 for i.\n+ Answer: 5 Pred: 5\n----\n+ Let w be 2 - (0 - 0)/(-2). Let f = -110 - -110. Suppose f*m - 4*m + 3*m = 0. Solve m*v = -w*v for v.\n+ Answer: 0 Pred: 0\n----\n+ Let a(h) = -34*h3 - 15 + 3*h + 36*h3 + 8*h2 + 5*h2. Let r be a(-6). Solve 2*z = r*z for z.\n+ Answer: 0 Pred: 0\n----\n+ Suppose -3*p + 24 = -3*c, 0*c + 6 = -2*c. Suppose -67 = 4*i + 289. Let t = i + 94. Solve t = 2*y - p for y.\n+ Answer: 5 Pred: 5\n----\n+ Let b = -36 + 53. Suppose -7*u - b = -73. Solve j + 3*j = -u for j.\n+ Answer: -2 Pred: -2\n----\n+ Let h be 8*((-2)/2 + 14)*1. Let y = -101 + h. Solve y*p = -p for p.\n+ Answer: 0 Pred: 0\n----\n+ Let b = 178 - 79. Let s be 9/(-1 - 2 - b/(-22)). Solve s = -k - k for k.\n+ Answer: -3 Pred: -3\n----\n+ Suppose 31 = -4*z + 11, -3*k - 5*z - 22 = 0. Solve 23 = -11*p + k for p.\n+ Answer: -2 Pred: -2\n\nThe whole training process and hyperparameters are in my GitHub repo\n> Created by Dor Bernsohn"
] |
fill-mask
|
transformers
|
# roberta-go
---
language: Go
datasets:
- code_search_net
---
This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **Golang** masked language modeling task.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-go")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-go")
fill_mask = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
```
You can then use this model to fill masked words in Go code.
```python
code = """
package main
import (
"fmt"
"runtime"
)
func main() {
fmt.Print("Go runs on ")
switch os := runtime.<mask>; os {
case "darwin":
fmt.Println("OS X.")
case "linux":
fmt.Println("Linux.")
default:
// freebsd, openbsd,
// plan9, windows...
fmt.Printf("%s.\n", os)
}
}
""".lstrip()
pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)}
sorted(pred.items(), key=lambda kv: kv[1], reverse=True)
# [('GOOS', 0.11810332536697388),
# ('FileInfo', 0.04276798665523529),
# ('Stdout', 0.03572738170623779),
# ('Getenv', 0.025064032524824142),
# ('FileMode', 0.01462600938975811)]
```
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
{}
|
dbernsohn/roberta-go
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[] |
TAGS
#transformers #pytorch #jax #roberta #fill-mask #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-go
---
language: Go
datasets:
- code_search_net
---
This is a roberta pre-trained version on the CodeSearchNet dataset for Golang Mask Language Model mission.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
You can then use this model to fill masked words in Go code.
The whole training process and hyperparameters are in my GitHub repo
> Created by Dor Bernsohn
|
[
"# roberta-go\n---\nlanguage: Go\ndatasets:\n- code_search_net\n---\n\nThis is a roberta pre-trained version on the CodeSearchNet dataset for Golang Mask Language Model mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to fill masked words in a Java code.\n\n\n\nThe whole training process and hyperparameters are in my GitHub repo\n\n> Created by Dor Bernsohn"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #fill-mask #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-go\n---\nlanguage: Go\ndatasets:\n- code_search_net\n---\n\nThis is a roberta pre-trained version on the CodeSearchNet dataset for Golang Mask Language Model mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to fill masked words in a Java code.\n\n\n\nThe whole training process and hyperparameters are in my GitHub repo\n\n> Created by Dor Bernsohn"
] |
fill-mask
|
transformers
|
# roberta-java
---
language: Java
datasets:
- code_search_net
---
This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **Java** masked language modeling task.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-java")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-java")
fill_mask = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
```
You can then use this model to fill masked words in Java code.
```python
code = """
String[] cars = {"Volvo", "BMW", "Ford", "Mazda"};
for (String i : cars) {
System.out.<mask>(i);
}
""".lstrip()
pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)}
sorted(pred.items(), key=lambda kv: kv[1], reverse=True)
# [('println', 0.32571351528167725),
# ('get', 0.2897663116455078),
# ('remove', 0.0637081190943718),
# ('exit', 0.058875661343336105),
# ('print', 0.034190207719802856)]
```
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
{}
|
dbernsohn/roberta-java
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[] |
TAGS
#transformers #pytorch #jax #roberta #fill-mask #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-java
---
language: Java
datasets:
- code_search_net
---
This is a roberta pre-trained version on the CodeSearchNet dataset for Java Mask Language Model mission.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
You can then use this model to fill masked words in a Java code.
The whole training process and hyperparameters are in my GitHub repo
> Created by Dor Bernsohn
|
[
"# roberta-java\n---\nlanguage: Java\ndatasets:\n- code_search_net\n---\n\nThis is a roberta pre-trained version on the CodeSearchNet dataset for Java Mask Language Model mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to fill masked words in a Java code.\n\n\n\nThe whole training process and hyperparameters are in my GitHub repo\n\n> Created by Dor Bernsohn"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #fill-mask #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-java\n---\nlanguage: Java\ndatasets:\n- code_search_net\n---\n\nThis is a roberta pre-trained version on the CodeSearchNet dataset for Java Mask Language Model mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to fill masked words in a Java code.\n\n\n\nThe whole training process and hyperparameters are in my GitHub repo\n\n> Created by Dor Bernsohn"
] |
fill-mask
|
transformers
|
# roberta-javascript
---
language: javascript
datasets:
- code_search_net
---
This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **JavaScript** masked language modeling task.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-javascript")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-javascript")
fill_mask = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
```
You can then use this model to fill masked words in JavaScript code.
```python
code = """
var i;
for (i = 0; i < cars.<mask>; i++) {
text += cars[i] + "<br>";
}
""".lstrip()
pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)}
sorted(pred.items(), key=lambda kv: kv[1], reverse=True)
# [('length', 0.9959614872932434),
# ('i', 0.00027875584783032537),
# ('len', 0.0002283261710545048),
# ('nodeType', 0.00013731322542298585),
# ('index', 7.5289819505997e-05)]
```
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
{}
|
dbernsohn/roberta-javascript
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[] |
TAGS
#transformers #pytorch #jax #roberta #fill-mask #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-javascript
---
language: javascript
datasets:
- code_search_net
---
This is a roberta pre-trained version on the CodeSearchNet dataset for javascript Mask Language Model mission.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
You can then use this model to fill masked words in JavaScript code.
The whole training process and hyperparameters are in my GitHub repo
> Created by Dor Bernsohn
|
[
"# roberta-javascript\n---\nlanguage: javascript\ndatasets:\n- code_search_net\n---\n\nThis is a roberta pre-trained version on the CodeSearchNet dataset for javascript Mask Language Model mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to fill masked words in a Java code.\n\n\n\nThe whole training process and hyperparameters are in my GitHub repo\n\n> Created by Dor Bernsohn"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #fill-mask #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-javascript\n---\nlanguage: javascript\ndatasets:\n- code_search_net\n---\n\nThis is a roberta pre-trained version on the CodeSearchNet dataset for javascript Mask Language Model mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to fill masked words in a Java code.\n\n\n\nThe whole training process and hyperparameters are in my GitHub repo\n\n> Created by Dor Bernsohn"
] |
fill-mask
|
transformers
|
# roberta-php
---
language: php
datasets:
- code_search_net
---
This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **PHP** masked language modeling task.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-php")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-php")
fill_mask = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
```
You can then use this model to fill masked words in PHP code.
```python
code = """
$people = array(
array('name' => 'Kalle', 'salt' => 856412),
array('name' => 'Pierre', 'salt' => 215863)
);
for($i = 0; $i < count($<mask>); ++$i) {
$people[$i]['salt'] = mt_rand(000000, 999999);
}
""".lstrip()
pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)}
sorted(pred.items(), key=lambda kv: kv[1], reverse=True)
# [('people', 0.785636842250824),
# ('parts', 0.006270722020417452),
# ('id', 0.0035842324141412973),
# ('data', 0.0025512021966278553),
# ('config', 0.002258970635011792)]
```
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
{}
|
dbernsohn/roberta-php
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[] |
TAGS
#transformers #pytorch #jax #roberta #fill-mask #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# roberta-php
---
language: php
datasets:
- code_search_net
---
This is a roberta pre-trained version on the CodeSearchNet dataset for php Mask Language Model mission.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
You can then use this model to fill masked words in PHP code.
The whole training process and hyperparameters are in my GitHub repo
> Created by Dor Bernsohn
|
[
"# roberta-php\n---\nlanguage: php\ndatasets:\n- code_search_net\n---\n\nThis is a roberta pre-trained version on the CodeSearchNet dataset for php Mask Language Model mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to fill masked words in a Java code.\n\n\n\nThe whole training process and hyperparameters are in my GitHub repo\n\n> Created by Dor Bernsohn"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #fill-mask #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# roberta-php\n---\nlanguage: php\ndatasets:\n- code_search_net\n---\n\nThis is a roberta pre-trained version on the CodeSearchNet dataset for php Mask Language Model mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to fill masked words in a Java code.\n\n\n\nThe whole training process and hyperparameters are in my GitHub repo\n\n> Created by Dor Bernsohn"
] |
fill-mask
|
transformers
|
# roberta-python
---
language: python
datasets:
- code_search_net
---
This is a [roberta](https://arxiv.org/pdf/1907.11692.pdf) model pre-trained on the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for the **Python** masked language modeling task.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/roberta-python")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/roberta-python")
fill_mask = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
```
You can then use this model to fill masked words in Python code.
```python
code = """
new_dict = {}
for k, v in my_dict.<mask>():
new_dict[k] = v**2
""".lstrip()
pred = {x["token_str"].replace("Ġ", ""): x["score"] for x in fill_mask(code)}
sorted(pred.items(), key=lambda kv: kv[1], reverse=True)
# [('items', 0.7376779913902283),
# ('keys', 0.16238391399383545),
# ('values', 0.03965481370687485),
# ('iteritems', 0.03346433863043785),
# ('splitlines', 0.0032723243348300457)]
```
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/CodeMLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
{}
|
dbernsohn/roberta-python
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"arxiv:1907.11692",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[] |
TAGS
#transformers #pytorch #jax #roberta #fill-mask #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-python
---
language: python
datasets:
- code_search_net
---
This is a roberta pre-trained version on the CodeSearchNet dataset for Python Mask Language Model mission.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
You can then use this model to fill masked words in a Python code.
The whole training process and hyperparameters are in my GitHub repo
> Created by Dor Bernsohn
|
[
"# roberta-python\n---\nlanguage: python\ndatasets:\n- code_search_net\n---\n\nThis is a roberta pre-trained version on the CodeSearchNet dataset for Python Mask Language Model mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to fill masked words in a Python code.\n\n\n\nThe whole training process and hyperparameters are in my GitHub repo\n\n> Created by Dor Bernsohn"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #fill-mask #arxiv-1907.11692 #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-python\n---\nlanguage: python\ndatasets:\n- code_search_net\n---\n\nThis is a roberta pre-trained version on the CodeSearchNet dataset for Python Mask Language Model mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to fill masked words in a Python code.\n\n\n\nThe whole training process and hyperparameters are in my GitHub repo\n\n> Created by Dor Bernsohn"
] |
text2text-generation
|
transformers
|
# measurement_time
---
language: en
datasets:
- measurement_time
---
This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on [math_dataset/measurement_time](https://www.tensorflow.org/datasets/catalog/math_dataset#mathdatasetmeasurement_time) for the task of solving **measurement time** questions.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/t5_measurement_time")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/t5_measurement_time")
```
You can then use this model to answer measurement-of-time questions (elapsed minutes and clock times).
```python
query = "How many minutes are there between 2:09 PM and 2:27 PM?"
input_text = f"{query} </s>"
features = tokenizer([input_text], return_tensors='pt')
model.to('cuda')
output = model.generate(input_ids=features['input_ids'].cuda(),
attention_mask=features['attention_mask'].cuda())
tokenizer.decode(output[0])
# <pad> 18</s>
```
More examples:
+ How many minutes are there between 2:09 PM and 2:27 PM?
+ Answer: 18 Pred: 18
----
+ What is 116 minutes after 10:06 AM?
+ Answer: 12:02 PM Pred: 12:02 PM
----
+ What is 608 minutes after 3:14 PM?
+ Answer: 1:22 AM Pred: 1:22 AM
----
+ What is 64 minutes before 9:16 AM?
+ Answer: 8:12 AM Pred: 8:12 AM
----
+ What is 427 minutes before 4:27 AM?
+ Answer: 9:20 PM Pred: 9:20 PM
----
+ How many minutes are there between 6:36 PM and 12:15 AM?
+ Answer: 339 Pred: 339
----
+ What is 554 minutes before 5:24 PM?
+ Answer: 8:10 AM Pred: 8:10 AM
----
+ What is 307 minutes after 5:15 AM?
+ Answer: 10:22 AM Pred: 10:22 AM
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/MathLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
{}
|
dbernsohn/t5_measurement_time
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# measurement_time
---
language: en
datasets:
- measurement_time
---
This is a t5-small fine-tuned version on the math_dataset/measurement_time for solving measurement time equations mission.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
You can then use this model to answer measurement-of-time questions (elapsed minutes and clock times).
Another examples:
+ How many minutes are there between 2:09 PM and 2:27 PM?
+ Answer: 18 Pred: 18
----
+ What is 116 minutes after 10:06 AM?
+ Answer: 12:02 PM Pred: 12:02 PM
----
+ What is 608 minutes after 3:14 PM?
+ Answer: 1:22 AM Pred: 1:22 AM
----
+ What is 64 minutes before 9:16 AM?
+ Answer: 8:12 AM Pred: 8:12 AM
----
+ What is 427 minutes before 4:27 AM?
+ Answer: 9:20 PM Pred: 9:20 PM
----
+ How many minutes are there between 6:36 PM and 12:15 AM?
+ Answer: 339 Pred: 339
----
+ What is 554 minutes before 5:24 PM?
+ Answer: 8:10 AM Pred: 8:10 AM
----
+ What is 307 minutes after 5:15 AM?
+ Answer: 10:22 AM Pred: 10:22 AM
The whole training process and hyperparameters are in my GitHub repo
> Created by Dor Bernsohn
|
[
"# measurement_time\n---\nlanguage: en\ndatasets:\n- measurement_time\n---\n\nThis is a t5-small fine-tuned version on the math_dataset/measurement_time for solving measurement time equations mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to solve algebra 1d equations into numbers.\n\n\n\nAnother examples:\n\n+ How many minutes are there between 2:09 PM and 2:27 PM?\n+ Answer: 18 Pred: 18\n----\n+ What is 116 minutes after 10:06 AM?\n+ Answer: 12:02 PM Pred: 12:02 PM\n----\n+ What is 608 minutes after 3:14 PM?\n+ Answer: 1:22 AM Pred: 1:22 AM\n----\n+ What is 64 minutes before 9:16 AM?\n+ Answer: 8:12 AM Pred: 8:12 AM\n----\n+ What is 427 minutes before 4:27 AM?\n+ Answer: 9:20 PM Pred: 9:20 PM\n----\n+ How many minutes are there between 6:36 PM and 12:15 AM?\n+ Answer: 339 Pred: 339\n----\n+ What is 554 minutes before 5:24 PM?\n+ Answer: 8:10 AM Pred: 8:10 AM\n----\n+ What is 307 minutes after 5:15 AM?\n+ Answer: 10:22 AM Pred: 10:22 AM\n\nThe whole training process and hyperparameters are in my GitHub repo\n> Created by Dor Bernsohn"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# measurement_time\n---\nlanguage: en\ndatasets:\n- measurement_time\n---\n\nThis is a t5-small fine-tuned version on the math_dataset/measurement_time for solving measurement time equations mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to solve algebra 1d equations into numbers.\n\n\n\nAnother examples:\n\n+ How many minutes are there between 2:09 PM and 2:27 PM?\n+ Answer: 18 Pred: 18\n----\n+ What is 116 minutes after 10:06 AM?\n+ Answer: 12:02 PM Pred: 12:02 PM\n----\n+ What is 608 minutes after 3:14 PM?\n+ Answer: 1:22 AM Pred: 1:22 AM\n----\n+ What is 64 minutes before 9:16 AM?\n+ Answer: 8:12 AM Pred: 8:12 AM\n----\n+ What is 427 minutes before 4:27 AM?\n+ Answer: 9:20 PM Pred: 9:20 PM\n----\n+ How many minutes are there between 6:36 PM and 12:15 AM?\n+ Answer: 339 Pred: 339\n----\n+ What is 554 minutes before 5:24 PM?\n+ Answer: 8:10 AM Pred: 8:10 AM\n----\n+ What is 307 minutes after 5:15 AM?\n+ Answer: 10:22 AM Pred: 10:22 AM\n\nThe whole training process and hyperparameters are in my GitHub repo\n> Created by Dor Bernsohn"
] |
text2text-generation
|
transformers
|
# numbers_gcd
---
language: en
datasets:
- numbers_gcd
---
This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on [math_dataset/numbers_gcd](https://www.tensorflow.org/datasets/catalog/math_dataset#mathdatasetnumbers_gcd) for the task of computing the **greatest common divisor**.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/t5_numbers_gcd")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/t5_numbers_gcd")
```
You can then use this model to compute the greatest common divisor of two numbers.
```python
query = "What is the highest common factor of 4210884 and 72?"
input_text = f"{query} </s>"
features = tokenizer([input_text], return_tensors='pt')
model.to('cuda')
output = model.generate(input_ids=features['input_ids'].cuda(),
attention_mask=features['attention_mask'].cuda())
tokenizer.decode(output[0])
# <pad> 36</s>
```
More examples:
+ Calculate the greatest common factor of 3470 and 97090.
+ Answer: 10 Pred: 10
----
+ Calculate the highest common factor of 3480 and 775431.
+ Answer: 87 Pred: 87
----
+ What is the highest common divisor of 26 and 88049?
+ Answer: 13 Pred: 13
----
+ Calculate the highest common factor of 1416 and 24203688.
+ Answer: 1416 Pred: 1416
----
+ Calculate the highest common divisor of 124 and 69445828.
+ Answer: 124 Pred: 124
----
+ What is the greatest common factor of 657906 and 470?
+ Answer: 94 Pred: 94
----
+ What is the highest common factor of 4210884 and 72?
+ Answer: 36 Pred: 36
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/MathLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
{}
|
dbernsohn/t5_numbers_gcd
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# numbers_gcd
---
language: en
datasets:
- numbers_gcd
---
This is a t5-small fine-tuned version on the math_dataset/numbers_gcd for solving greatest common divisor mission.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
You can then use this model to compute the greatest common divisor of two numbers.
Another examples:
+ Calculate the greatest common factor of 3470 and 97090.
+ Answer: 10 Pred: 10
----
+ Calculate the highest common factor of 3480 and 775431.
+ Answer: 87 Pred: 87
----
+ What is the highest common divisor of 26 and 88049?
+ Answer: 13 Pred: 13
----
+ Calculate the highest common factor of 1416 and 24203688.
+ Answer: 1416 Pred: 1416
----
+ Calculate the highest common divisor of 124 and 69445828.
+ Answer: 124 Pred: 124
----
+ What is the greatest common factor of 657906 and 470?
+ Answer: 94 Pred: 94
----
+ What is the highest common factor of 4210884 and 72?
+ Answer: 36 Pred: 36
The whole training process and hyperparameters are in my GitHub repo
> Created by Dor Bernsohn
|
[
"# numbers_gcd\n---\nlanguage: en\ndatasets:\n- numbers_gcd\n---\n\nThis is a t5-small fine-tuned version on the math_dataset/numbers_gcd for solving greatest common divisor mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to solve algebra 1d equations into numbers.\n\n\n\nAnother examples:\n\n+ Calculate the greatest common factor of 3470 and 97090. \n+ Answer: 10 Pred: 10\n----\n+ Calculate the highest common factor of 3480 and 775431.\n+ Answer: 87 Pred: 87\n----\n+ What is the highest common divisor of 26 and 88049? \n+ Answer: 13 Pred: 13\n----\n+ Calculate the highest common factor of 1416 and 24203688.\n+ Answer: 1416 Pred: 1416\n----\n+ Calculate the highest common divisor of 124 and 69445828. \n+ Answer: 124 Pred: 124\n----\n+ What is the greatest common factor of 657906 and 470?\n+ Answer: 94 Pred: 94\n----\n+ What is the highest common factor of 4210884 and 72?\n+ Answer: 36 Pred: 36\n\nThe whole training process and hyperparameters are in my GitHub repo\n> Created by Dor Bernsohn"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# numbers_gcd\n---\nlanguage: en\ndatasets:\n- numbers_gcd\n---\n\nThis is a t5-small fine-tuned version on the math_dataset/numbers_gcd for solving greatest common divisor mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to solve algebra 1d equations into numbers.\n\n\n\nAnother examples:\n\n+ Calculate the greatest common factor of 3470 and 97090. \n+ Answer: 10 Pred: 10\n----\n+ Calculate the highest common factor of 3480 and 775431.\n+ Answer: 87 Pred: 87\n----\n+ What is the highest common divisor of 26 and 88049? \n+ Answer: 13 Pred: 13\n----\n+ Calculate the highest common factor of 1416 and 24203688.\n+ Answer: 1416 Pred: 1416\n----\n+ Calculate the highest common divisor of 124 and 69445828. \n+ Answer: 124 Pred: 124\n----\n+ What is the greatest common factor of 657906 and 470?\n+ Answer: 94 Pred: 94\n----\n+ What is the highest common factor of 4210884 and 72?\n+ Answer: 36 Pred: 36\n\nThe whole training process and hyperparameters are in my GitHub repo\n> Created by Dor Bernsohn"
] |
text2text-generation
|
transformers
|
# t5_wikisql_SQL2en
---
language: en
datasets:
- wikisql
---
This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on the [wikisql dataset](https://huggingface.co/datasets/wikisql) for the **SQL** to **English** **translation** text2text task.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/t5_wikisql_SQL2en")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/t5_wikisql_SQL2en")
```
You can then use this model to translate SQL queries into plain English.
```python
query = "SELECT people FROM peoples where age > 10"
input_text = f"translate SQL to English: {query} </s>"
features = tokenizer([input_text], return_tensors='pt')
model.to('cuda')  # the tensors below are moved to the GPU, so the model must live there too
output = model.generate(input_ids=features['input_ids'].cuda(),
attention_mask=features['attention_mask'].cuda())
tokenizer.decode(output[0])
# Output: "What people are older than 10?"
```
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/SQLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
{}
|
dbernsohn/t5_wikisql_SQL2en
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# t5_wikisql_SQL2en
---
language: en
datasets:
- wikisql
---
This is a t5-small fine-tuned version on the wikisql dataset for SQL to English translation text2text mission.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
You can then use this model to translate SQL queries into plain english.
The whole training process and hyperparameters are in my GitHub repo
> Created by Dor Bernsohn
|
[
"# t5_wikisql_SQL2en\n---\nlanguage: en\ndatasets:\n- wikisql\n---\n\nThis is a t5-small fine-tuned version on the wikisql dataset for SQL to English translation text2text mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to translate SQL queries into plain english.\n\n\n\nThe whole training process and hyperparameters are in my GitHub repo\n\n> Created by Dor Bernsohn"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# t5_wikisql_SQL2en\n---\nlanguage: en\ndatasets:\n- wikisql\n---\n\nThis is a t5-small fine-tuned version on the wikisql dataset for SQL to English translation text2text mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to translate SQL queries into plain english.\n\n\n\nThe whole training process and hyperparameters are in my GitHub repo\n\n> Created by Dor Bernsohn"
] |
text2text-generation
|
transformers
|
# t5_wikisql_en2SQL
---
language: en
datasets:
- wikisql
---
This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on the [wikisql dataset](https://huggingface.co/datasets/wikisql) for the **English** to **SQL** **translation** text2text task.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("dbernsohn/t5_wikisql_en2SQL")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/t5_wikisql_en2SQL")
```
You can then use this model to translate plain-English questions into SQL queries.
```python
query = "what are the names of all the people in the USA?"
input_text = f"translate English to Sql: {query} </s>"
features = tokenizer([input_text], return_tensors='pt')
model.to('cuda')  # the tensors below are moved to the GPU, so the model must live there too
output = model.generate(input_ids=features['input_ids'].cuda(),
attention_mask=features['attention_mask'].cuda())
tokenizer.decode(output[0])
# Output: "SELECT Name FROM table WHERE Country = USA"
```
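For several questions at once, a small batch helper can be built on top of the snippet above. This is only a sketch that reuses the already loaded `tokenizer` and `model`; the helper name, the padding and the device handling are illustrative assumptions, not part of the original card:
```python
def english_to_sql(questions):
    # Illustrative batch helper, reusing the tokenizer/model loaded above
    prompts = [f"translate English to Sql: {q} </s>" for q in questions]
    features = tokenizer(prompts, return_tensors='pt', padding=True).to(model.device)
    outputs = model.generate(input_ids=features['input_ids'],
                             attention_mask=features['attention_mask'])
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```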
The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/CodeLM/tree/main/SQLM)
> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
|
{}
|
dbernsohn/t5_wikisql_en2SQL
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# t5_wikisql_en2SQL
---
language: en
datasets:
- wikisql
---
This is a t5-small model fine-tuned on the wikisql dataset for the English to SQL translation text2text task.
To load the model:
(necessary packages: !pip install transformers sentencepiece)
You can then use this model to translate plain English questions into SQL queries.
The whole training process and hyperparameters are in my GitHub repo
> Created by Dor Bernsohn
|
[
"# t5_wikisql_en2SQL\n---\nlanguage: en\ndatasets:\n- wikisql\n---\n\nThis is a t5-small fine-tuned version on the wikisql dataset for English to SQL translation text2text mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to translate SQL queries into plain english.\n\n\n\nThe whole training process and hyperparameters are in my GitHub repo\n\n> Created by Dor Bernsohn"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# t5_wikisql_en2SQL\n---\nlanguage: en\ndatasets:\n- wikisql\n---\n\nThis is a t5-small fine-tuned version on the wikisql dataset for English to SQL translation text2text mission.\n\nTo load the model:\n(necessary packages: !pip install transformers sentencepiece)\n\n\nYou can then use this model to translate SQL queries into plain english.\n\n\n\nThe whole training process and hyperparameters are in my GitHub repo\n\n> Created by Dor Bernsohn"
] |
feature-extraction
|
generic
|
# Feature Extraction repository template
This is a template repository for feature extraction to support generic inference with the Hugging Face Hub generic Inference API. There are two required steps:
1. Specify the requirements by defining a `requirements.txt` file.
2. Implement the `pipeline.py` `__init__` and `__call__` methods. These methods are called by the Inference API. The `__init__` method should load the model and preload all the elements needed for inference (model, processors, tokenizers, etc.). This is only called once. The `__call__` method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work.
Example repos
* https://huggingface.co/osanseviero/fasttext_english
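A minimal `pipeline.py` sketch for a fastText-style feature extractor (similar to the example repo above) could look like the following; the model file name (`model.bin`), the fastText calls and the exact output shape are illustrative assumptions, so always check them against the specifications in the template:
```python
# pipeline.py -- minimal sketch; requirements.txt would list e.g. "fasttext"
import os
from typing import List

import fasttext


class PreTrainedPipeline:
    def __init__(self, path: str = ""):
        # Called once by the Inference API: load the model and everything
        # needed for inference. "model.bin" is an assumed file name.
        self.model = fasttext.load_model(os.path.join(path, "model.bin"))

    def __call__(self, inputs: str) -> List[float]:
        # Called per request: return one embedding vector for the input text.
        return self.model.get_sentence_vector(inputs).tolist()
```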
## How to start
First create a repo in https://hf.co/new.
Then clone this template and push it to your repo.
```
git clone https://huggingface.co/templates/feature-extraction
cd feature-extraction
git remote set-url origin https://huggingface.co/$YOUR_USER/$YOUR_REPO_NAME
git push --force
```
|
{"library_name": "generic", "tags": ["feature-extraction"]}
|
dbguilherme/teste
| null |
[
"generic",
"feature-extraction",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#generic #feature-extraction #region-us
|
# Feature Extraction repository template
This is a template repository for feature extraction to support generic inference with Hugging Face Hub generic Inference API. There are two required steps
1. Specify the requirements by defining a 'URL' file.
2. Implement the 'URL' '__init__' and '__call__' methods. These methods are called by the Inference API. The '__init__' method should load the model and preload all the elements needed for inference (model, processors, tokenizers, etc.). This is only called once. The '__call__' method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work.
Example repos
* URL
## How to start
First create a repo in URL
Then clone this template and push it to your repo.
|
[
"# Feature Extraction repository template\n\nThis is a template repository for feature extraction to support generic inference with Hugging Face Hub generic Inference API. There are two required steps\n\n1. Specify the requirements by defining a 'URL' file.\n2. Implement the 'URL' '__init__' and '__call__' methods. These methods are called by the Inference API. The '__init__' method should load the model and preload all the elements needed for inference (model, processors, tokenizers, etc.). This is only called once. The '__call__' method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work.\n\nExample repos\n* URL",
"## How to start\nFirst create a repo in URL \nThen clone this template and push it to your repo."
] |
[
"TAGS\n#generic #feature-extraction #region-us \n",
"# Feature Extraction repository template\n\nThis is a template repository for feature extraction to support generic inference with Hugging Face Hub generic Inference API. There are two required steps\n\n1. Specify the requirements by defining a 'URL' file.\n2. Implement the 'URL' '__init__' and '__call__' methods. These methods are called by the Inference API. The '__init__' method should load the model and preload all the elements needed for inference (model, processors, tokenizers, etc.). This is only called once. The '__call__' method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work.\n\nExample repos\n* URL",
"## How to start\nFirst create a repo in URL \nThen clone this template and push it to your repo."
] |
fill-mask
|
transformers
|
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
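For a quick test, any of the models above can be used with the `fill-mask` pipeline. The following sketch uses the Finnish Europeana model together with its widget example sentence:
```python
from transformers import pipeline

# Illustrative: the other model identifiers from the table work the same way
fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-finnish-europeana-cased")
print(fill_mask("Täkäläinen sanomalehdistö [MASK] erit - täin"))
```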
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different thresholds of OCR confidence, in order to shrink the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows the tokens-per-year distribution:

## French Europeana Corpus
Like German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows the tokens-per-year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year-filtered variant. The following plot shows the tokens-per-year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows the tokens-per-year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows the tokens-per-year distribution:

## All Corpora
The following plot shows the tokens-per-year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes that are used for generating the 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
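The subword fertility rate reported above is the number of subwords produced per token, and the unknown portion is the share of subwords mapped to `[UNK]`. The following is a rough sketch of that computation; splitting sentences on whitespace is our simplification of how the NER corpora are tokenized:
```python
from transformers import AutoTokenizer

def fertility_and_unk(sentences, tokenizer_name="dbmdz/bert-base-historic-multilingual-cased"):
    # Rough sketch: subwords per whitespace token and [UNK] share
    tok = AutoTokenizer.from_pretrained(tokenizer_name)
    n_words = n_subwords = n_unk = 0
    for sentence in sentences:
        n_words += len(sentence.split())
        pieces = tok.tokenize(sentence)
        n_subwords += len(pieces)
        n_unk += sum(piece == tok.unk_token for piece in pieces)
    return n_subwords / max(n_words, 1), n_unk / max(n_subwords, 1)
```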
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Pretraining
## Multilingual model
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

## English model
The English BERT model - with texts from British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Finnish model
The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Swedish model
The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "finnish", "license": "mit", "widget": [{"text": "T\u00e4k\u00e4l\u00e4inen sanomalehdist\u00f6 [MASK] erit - t\u00e4in"}]}
|
dbmdz/bert-base-finnish-europeana-cased
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"bert",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"finnish"
] |
TAGS
#transformers #pytorch #jax #tensorboard #bert #fill-mask #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Historic Language Models (HLMs)
===============================
Languages
---------
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
Language: German, Training data: Europeana, Size: 13-28GB (filtered)
Language: French, Training data: Europeana, Size: 11-31GB (filtered)
Language: English, Training data: British Library, Size: 24GB (year filtered)
Language: Finnish, Training data: Europeana, Size: 1.2GB
Language: Swedish, Training data: Europeana, Size: 1.1GB
Models
------
At the moment, the following models are available on the model hub:
Corpora Stats
=============
German Europeana Corpus
-----------------------
We provide some statistics using different thresholds of ocr confidences, in order to shrink down the corpus size
and use less-noisier data:
For the final corpus we use a OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:
!German Europeana Corpus Stats
French Europeana Corpus
-----------------------
Like German, we use different ocr confidence thresholds:
For the final corpus we use a OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:
!French Europeana Corpus Stats
British Library Corpus
----------------------
Metadata is taken from here. Stats incl. year filtering:
We use the year filtered variant. The following plot shows a tokens per year distribution:
!British Library Corpus Stats
Finnish Europeana Corpus
------------------------
The following plot shows a tokens per year distribution:
!Finnish Europeana Corpus Stats
Swedish Europeana Corpus
------------------------
The following plot shows a tokens per year distribution:
!Swedish Europeana Corpus Stats
All Corpora
-----------
The following plot shows a tokens per year distribution of the complete training corpus:
!All Corpora Stats
Multilingual Vocab generation
=============================
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following tables shows the exact size that is used for generating a 32k and 64k subword vocabs:
We then calculate the subword fertility rate and portion of '[UNK]'s over the following NER corpora:
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
Language: German, Subword fertility: 1.43, Unknown portion: 0.0004
Language: French, Subword fertility: 1.25, Unknown portion: 0.0001
Language: English, Subword fertility: 1.25, Unknown portion: 0.0
Language: Finnish, Subword fertility: 1.69, Unknown portion: 0.0007
Language: Swedish, Subword fertility: 1.43, Unknown portion: 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
Language: German, Subword fertility: 1.31, Unknown portion: 0.0004
Language: French, Subword fertility: 1.16, Unknown portion: 0.0001
Language: English, Subword fertility: 1.17, Unknown portion: 0.0
Language: Finnish, Subword fertility: 1.54, Unknown portion: 0.0007
Language: Swedish, Subword fertility: 1.32, Unknown portion: 0.0
Final pretraining corpora
=========================
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
Total size is 130GB.
Pretraining
===========
Multilingual model
------------------
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
The following plot shows the pretraining loss curve:
!Training loss curve
English model
-------------
The English BERT model - with texts from British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Finnish model
-------------
The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Swedish model
-------------
The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #bert #fill-mask #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null |
transformers
|
# 🤗 + 📚 dbmdz BERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources French Europeana BERT models 🎉
# French Europeana BERT
We extracted all French texts using the `language` metadata attribute from the Europeana corpus.
The resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens.
Based on the metadata information, texts from the 18th - 20th century are mainly included in the
training corpus.
Detailed information about the data and pretraining steps can be found in
[this repository](https://github.com/stefan-it/europeana-bert).
## Model weights
BERT model weights for PyTorch and TensorFlow are available.
* French Europeana BERT: `dbmdz/bert-base-french-europeana-cased` - [model hub page](https://huggingface.co/dbmdz/bert-base-french-europeana-cased/tree/main)
## Results
For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert).
## Usage
With Transformers >= 2.3 our French Europeana BERT model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-french-europeana-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-french-europeana-cased")
```
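Building on that, contextual embeddings can be extracted as in the following sketch; the example sentence is ours and `last_hidden_state` assumes a reasonably recent Transformers version:
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-french-europeana-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-french-europeana-cased")

# Illustrative: mean-pooled contextual embedding for one sentence
inputs = tokenizer("Le journal de la semaine dernière.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
sentence_embedding = outputs.last_hidden_state.mean(dim=1)  # shape: (1, hidden_size)
```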
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT model just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download our model from their S3 storage 🤗
|
{"language": "fr", "license": "mit", "tags": ["historic french"]}
|
dbmdz/bert-base-french-europeana-cased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"historic french",
"fr",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #pytorch #tf #jax #bert #historic french #fr #license-mit #endpoints_compatible #region-us
|
# + dbmdz BERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources French Europeana BERT models
# French Europeana BERT
We extracted all French texts using the 'language' metadata attribute from the Europeana corpus.
The resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens.
Based on the metadata information, texts from the 18th - 20th century are mainly included in the
training corpus.
Detailed information about the data and pretraining steps can be found in
this repository.
## Model weights
BERT model weights for PyTorch and TensorFlow are available.
* French Europeana BERT: 'dbmdz/bert-base-french-europeana-cased' - model hub page
## Results
For results on Historic NER, please refer to this repository.
## Usage
With Transformers >= 2.3 our French Europeana BERT model can be loaded like:
# Huggingface model hub
All models are available on the Huggingface model hub.
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT model just open an issue
here
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download our model from their S3 storage
|
[
"# + dbmdz BERT model\n\nIn this repository the MDZ Digital Library team (dbmdz) at the Bavarian State\nLibrary open sources French Europeana BERT models",
"# French Europeana BERT\n\nWe extracted all French texts using the 'language' metadata attribute from the Europeana corpus.\n\nThe resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens.\n\nBased on the metadata information, texts from the 18th - 20th century are mainly included in the\ntraining corpus.\n\nDetailed information about the data and pretraining steps can be found in\nthis repository.",
"## Model weights\n\nBERT model weights for PyTorch and TensorFlow are available.\n\n* French Europeana BERT: 'dbmdz/bert-base-french-europeana-cased' - model hub page",
"## Results\n\nFor results on Historic NER, please refer to this repository.",
"## Usage\n\nWith Transformers >= 2.3 our French Europeana BERT model can be loaded like:",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our BERT model just open an issue\nhere",
"# Acknowledgments\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download our model from their S3 storage"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #historic french #fr #license-mit #endpoints_compatible #region-us \n",
"# + dbmdz BERT model\n\nIn this repository the MDZ Digital Library team (dbmdz) at the Bavarian State\nLibrary open sources French Europeana BERT models",
"# French Europeana BERT\n\nWe extracted all French texts using the 'language' metadata attribute from the Europeana corpus.\n\nThe resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens.\n\nBased on the metadata information, texts from the 18th - 20th century are mainly included in the\ntraining corpus.\n\nDetailed information about the data and pretraining steps can be found in\nthis repository.",
"## Model weights\n\nBERT model weights for PyTorch and TensorFlow are available.\n\n* French Europeana BERT: 'dbmdz/bert-base-french-europeana-cased' - model hub page",
"## Results\n\nFor results on Historic NER, please refer to this repository.",
"## Usage\n\nWith Transformers >= 2.3 our French Europeana BERT model can be loaded like:",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our BERT model just open an issue\nhere",
"# Acknowledgments\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download our model from their S3 storage"
] |
fill-mask
|
transformers
|
# 🤗 + 📚 dbmdz German BERT models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources further German BERT models 🎉
# German BERT
## Stats
In addition to the recently released [German BERT](https://deepset.ai/german-bert)
model by [deepset](https://deepset.ai/) we provide another German-language model.
The source data for the model consists of a recent Wikipedia dump, EU Bookshop corpus,
Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. This results in a dataset with
a size of 16GB and 2,350,234,427 tokens.
For sentence splitting, we use [spacy](https://spacy.io/). Our preprocessing steps
(sentence piece model for vocab generation) follow those used for training
[SciBERT](https://github.com/allenai/scibert). The model is trained with an initial
sequence length of 512 subwords for 1.5M steps.
This release includes both cased and uncased models.
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| -------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `bert-base-german-dbmdz-cased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-vocab.txt)
| `bert-base-german-dbmdz-uncased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-vocab.txt)
## Usage
With Transformers >= 2.3 our German BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased")
```
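As a quick sanity check, the cased model can also be used through the `fill-mask` pipeline; the example sentence below is purely illustrative:
```python
from transformers import pipeline

# Illustrative masked-LM check with the cased model
fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-german-cased")
print(fill_mask("Die Hauptstadt von Bayern ist [MASK]."))
```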
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/fine-tuned-berts-seq).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "de", "license": "mit"}
|
dbmdz/bert-base-german-cased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
+ dbmdz German BERT models
==========================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources another German BERT models
German BERT
===========
Stats
-----
In addition to the recently released German BERT
model by deepset we provide another German-language model.
The source data for the model consists of a recent Wikipedia dump, EU Bookshop corpus,
Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. This results in a dataset with
a size of 16GB and 2,350,234,427 tokens.
For sentence splitting, we use spacy. Our preprocessing steps
(sentence piece model for vocab generation) follow those used for training
SciBERT. The model is trained with an initial
sequence length of 512 subwords and was performed for 1.5M steps.
This release includes both cased and uncased models.
Model weights
-------------
Currently only PyTorch-Transformers
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
Usage
-----
With Transformers >= 2.3 our German BERT models can be loaded like:
Results
-------
For results on downstream tasks like NER or PoS tagging, please refer to
this repository.
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our BERT models just open an issue
here
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
null |
transformers
|
# 🤗 + 📚 dbmdz BERT models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources German Europeana BERT models 🎉
# German Europeana BERT
We use the open source [Europeana newspapers](http://www.europeana-newspapers.eu/)
that were provided by *The European Library*. The final
training corpus has a size of 51GB and consists of 8,035,986,369 tokens.
Detailed information about the data and pretraining steps can be found in
[this repository](https://github.com/stefan-it/europeana-bert).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ------------------------------------------ | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-german-europeana-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-cased/vocab.txt)
## Results
For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert).
## Usage
With Transformers >= 2.3 our German Europeana BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-europeana-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-europeana-cased")
```
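Building on that, a contextual sentence representation can be extracted as in the following sketch; the example sentence is ours and `last_hidden_state` assumes a reasonably recent Transformers version:
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-europeana-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-europeana-cased")

# Illustrative: embedding of the [CLS] token for one historic sentence
inputs = tokenizer("Ein Zeitungsartikel aus dem 19. Jahrhundert.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0]  # shape: (1, hidden_size)
```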
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "de", "license": "mit", "tags": ["historic german"]}
|
dbmdz/bert-base-german-europeana-cased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"historic german",
"de",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #jax #bert #historic german #de #license-mit #endpoints_compatible #region-us
|
+ dbmdz BERT models
===================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources German Europeana BERT models
German Europeana BERT
=====================
We use the open source Europeana newspapers
that were provided by *The European Library*. The final
training corpus has a size of 51GB and consists of 8,035,986,369 tokens.
Detailed information about the data and pretraining steps can be found in
this repository.
Model weights
-------------
Currently only PyTorch-Transformers
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
Results
-------
For results on Historic NER, please refer to this repository.
Usage
-----
With Transformers >= 2.3 our German Europeana BERT models can be loaded like:
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our BERT models just open an issue
here
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #historic german #de #license-mit #endpoints_compatible #region-us \n"
] |
null |
transformers
|
# 🤗 + 📚 dbmdz BERT models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources German Europeana BERT models 🎉
# German Europeana BERT
We use the open source [Europeana newspapers](http://www.europeana-newspapers.eu/)
that were provided by *The European Library*. The final
training corpus has a size of 51GB and consists of 8,035,986,369 tokens.
Detailed information about the data and pretraining steps can be found in
[this repository](https://github.com/stefan-it/europeana-bert).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ------------------------------------------ | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-german-europeana-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-german-europeana-uncased/vocab.txt)
## Results
For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert).
## Usage
With Transformers >= 2.3 our German Europeana BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-europeana-uncased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-europeana-uncased")
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "de", "license": "mit", "tags": ["historic german"]}
|
dbmdz/bert-base-german-europeana-uncased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"historic german",
"de",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #jax #bert #historic german #de #license-mit #endpoints_compatible #region-us
|
+ dbmdz BERT models
===================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources German Europeana BERT models
German Europeana BERT
=====================
We use the open source Europeana newspapers
that were provided by *The European Library*. The final
training corpus has a size of 51GB and consists of 8,035,986,369 tokens.
Detailed information about the data and pretraining steps can be found in
this repository.
Model weights
-------------
Currently only PyTorch-Transformers
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
Results
-------
For results on Historic NER, please refer to this repository.
Usage
-----
With Transformers >= 2.3 our German Europeana BERT models can be loaded like:
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our BERT models just open an issue
here
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #historic german #de #license-mit #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
# 🤗 + 📚 dbmdz German BERT models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources further German BERT models 🎉
# German BERT
## Stats
In addition to the recently released [German BERT](https://deepset.ai/german-bert)
model by [deepset](https://deepset.ai/) we provide another German-language model.
The source data for the model consists of a recent Wikipedia dump, EU Bookshop corpus,
Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. This results in a dataset with
a size of 16GB and 2,350,234,427 tokens.
For sentence splitting, we use [spacy](https://spacy.io/). Our preprocessing steps
(sentence piece model for vocab generation) follow those used for training
[SciBERT](https://github.com/allenai/scibert). The model is trained with an initial
sequence length of 512 subwords for 1.5M steps.
This release includes both cased and uncased models.
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| -------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `bert-base-german-dbmdz-cased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-vocab.txt)
| `bert-base-german-dbmdz-uncased` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-config.json) • [`pytorch_model.bin`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-pytorch_model.bin) • [`vocab.txt`](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-vocab.txt)
## Usage
With Transformers >= 2.3 our German BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased")
```
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/fine-tuned-berts-seq).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "de", "license": "mit"}
|
dbmdz/bert-base-german-uncased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
+ dbmdz German BERT models
==========================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources another German BERT models
German BERT
===========
Stats
-----
In addition to the recently released German BERT
model by deepset we provide another German-language model.
The source data for the model consists of a recent Wikipedia dump, EU Bookshop corpus,
Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. This results in a dataset with
a size of 16GB and 2,350,234,427 tokens.
For sentence splitting, we use spacy. Our preprocessing steps
(sentence piece model for vocab generation) follow those used for training
SciBERT. The model is trained with an initial
sequence length of 512 subwords and was performed for 1.5M steps.
This release includes both cased and uncased models.
Model weights
-------------
Currently only PyTorch-Transformers
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
Usage
-----
With Transformers >= 2.3 our German BERT models can be loaded like:
Results
-------
For results on downstream tasks like NER or PoS tagging, please refer to
this repository.
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our BERT models just open an issue
here
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
fill-mask
|
transformers
|
# Language Model for Historic Dutch
In this repository we open source a language model for Historic Dutch, trained on the
[Delpher Corpus](https://www.delpher.nl/over-delpher/delpher-open-krantenarchief/download-teksten-kranten-1618-1879),
which includes digitized texts from Dutch newspapers, ranging from 1618 to 1879.
# Changelog
* 13.12.2021: Initial version of this repository.
# Model Zoo
The following models for Historic Dutch are available on the Hugging Face Model Hub:
| Model identifier | Model Hub link
| -------------------------------------- | -------------------------------------------------------------------
| `dbmdz/bert-base-historic-dutch-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-dutch-cased)
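As a minimal usage sketch, the model can be queried through the `fill-mask` pipeline; the example sentence is taken from the model card widget:
```python
from transformers import pipeline

# Minimal usage sketch for the Historic Dutch model
fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-historic-dutch-cased")
print(fill_mask("de [MASK] vau Financien, in hec vorige jaar, da inkomswi"))
```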
# Stats
The download URLs for all archives can be found [here](delpher-corpus.urls).
We then used the awesome `alto-tools` from [this](https://github.com/cneud/alto-tools)
repository to extract plain text. The following table shows the size overview per year range:
| Period | Extracted plain text size
| --------- | -------------------------:
| 1618-1699 | 170MB
| 1700-1709 | 103MB
| 1710-1719 | 65MB
| 1720-1729 | 137MB
| 1730-1739 | 144MB
| 1740-1749 | 188MB
| 1750-1759 | 171MB
| 1760-1769 | 235MB
| 1770-1779 | 271MB
| 1780-1789 | 414MB
| 1790-1799 | 614MB
| 1800-1809 | 734MB
| 1810-1819 | 807MB
| 1820-1829 | 987MB
| 1830-1839 | 1.7GB
| 1840-1849 | 2.2GB
| 1850-1854 | 1.3GB
| 1855-1859 | 1.7GB
| 1860-1864 | 2.0GB
| 1865-1869 | 2.3GB
| 1870-1874 | 1.9GB
| 1875-1876 | 867MB
| 1877-1879 | 1.9GB
The total training corpus consists of 427,181,269 sentences and 3,509,581,683 tokens (counted via `wc`),
resulting in a total corpus size of 21GB.
The following figure shows an overview of the number of characters per year:

# Language Model Pretraining
We use the official [BERT](https://github.com/google-research/bert) implementation using the following command
to train the model:
```bash
python3 run_pretraining.py --input_file gs://delpher-bert/tfrecords/*.tfrecord \
--output_dir gs://delpher-bert/bert-base-historic-dutch-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
We train the model for 3M steps using a total batch size of 128 on a v3-32 TPU. The pretraining loss curve can be seen
in the next figure:

# Evaluation
We evaluate our model on the preprocessed Europeana NER dataset for Dutch, that was presented in the
["Data Centric Domain Adaptation for Historical Text with OCR Errors"](https://github.com/stefan-it/historic-domain-adaptation-icdar) paper.
The data is available in their repository. We perform a hyper-parameter search for:
* Batch sizes: `[4, 8]`
* Learning rates: `[3e-5, 5e-5]`
* Number of epochs: `[5, 10]`
and report the averaged F1-Score over 5 runs with different seeds. We also include [hmBERT](https://github.com/stefan-it/clef-hipe/blob/main/hlms.md) as a baseline model.
Results:
| Model | F1-Score (Dev / Test)
| ------------------- | ---------------------
| hmBERT | (82.73) / 81.34
| Maerz et al. (2021) | - / 84.2
| Ours | (89.73) / 87.45
# License
All models are licensed under [MIT](LICENSE).
# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
We thank [Clemens Neudecker](https://github.com/cneud) for maintaining the amazing
[ALTO tools](https://github.com/cneud/alto-tools) that were used for parsing the Delpher Corpus XML files.
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "dutch", "license": "mit", "widget": [{"text": "de [MASK] vau Financien, in hec vorige jaar, da inkomswi"}]}
|
dbmdz/bert-base-historic-dutch-cased
| null |
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"bert",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"dutch"
] |
TAGS
#transformers #pytorch #tf #tensorboard #bert #fill-mask #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Language Model for Historic Dutch
=================================
In this repository we open source a language model for Historic Dutch, trained on the
Delpher Corpus,
that include digitized texts from Dutch newspapers, ranging from 1618 to 1879.
Changelog
=========
* 13.12.2021: Initial version of this repository.
Model Zoo
=========
The following models for Historic Dutch are available on the Hugging Face Model Hub:
Stats
=====
The download urls for all archives can be found here.
We then used the awesome 'alto-tools' from this
repository to extract plain text. The following table shows the size overview per year range:
The total training corpus consists of 427,181,269 sentences and 3,509,581,683 tokens (counted via 'wc'),
resulting in a total corpus size of 21GB.
The following figure shows an overview of the number of chars per year distribution:
!Delpher Corpus Stats
Language Model Pretraining
==========================
We use the official BERT implementation using the following command
to train the model:
We train the model for 3M steps using a total batch size of 128 on a v3-32 TPU. The pretraining loss curve can be seen
in the next figure:
!Delpher Pretraining Loss Curve
Evaluation
==========
We evaluate our model on the preprocessed Europeana NER dataset for Dutch, that was presented in the
"Data Centric Domain Adaptation for Historical Text with OCR Errors" paper.
The data is available in their repository. We perform a hyper-parameter search for:
* Batch sizes: '[4, 8]'
* Learning rates: '[3e-5, 5e-5]'
* Number of epochs: '[5, 10]'
and report averaged F1-Score over 5 runs with different seeds. We also include hmBERT as baseline model.
Results:
License
=======
All models are licensed under MIT.
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ️
We thank Clemens Neudecker for maintaining the amazing
ALTO tools that were used for parsing the Delpher Corpus XML files.
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #tensorboard #bert #fill-mask #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
🚨 Notice: After re-checking this model, it turns out that it does not work very well: MLM predictions are very likely
to return the `[UNK]` token.
We will update this model soon. For now, please use [`bigscience-historical-texts/bert-base-blbooks-cased`](https://huggingface.co/bigscience-historical-texts/bert-base-blbooks-cased) instead, as it was pretrained on the same corpus.
|
{"language": "en", "license": "mit", "widget": [{"text": "and I cannot conceive the reafon why [MASK] hath"}]}
|
dbmdz/bert-base-historic-english-cased
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #tensorboard #safetensors #bert #fill-mask #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Notice: After re-checking this model again, it seems that the model is not working very well. E.g. MLM predictions are very likely to predict '[UNK]' token, which is
actually not good.
We will update this model soon. For now, please use the 'bigscience-historical-texts/bert-base-blbooks-cased' instead, as it was pretrained on the same corpus.
|
[] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #safetensors #bert #fill-mask #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
# hmBERT: Historical Multilingual Language Models for Named Entity Recognition
More information about our hmBERT model can be found in our new paper:
["hmBERT: Historical Multilingual Language Models for Named Entity Recognition"](https://arxiv.org/abs/2205.15575).
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Smaller Models
We have also released smaller models for the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased)
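As a minimal usage sketch, the base model (and, in the same way, the smaller variants above) can be queried through the `fill-mask` pipeline; the example sentence is taken from the model card widget:
```python
from transformers import pipeline

# Minimal usage sketch; the smaller variants load the same way
fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-historic-multilingual-cased")
print(fill_mask("and I cannot conceive the reafon why [MASK] hath"))
```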
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different thresholds of OCR confidence, in order to shrink the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows the tokens-per-year distribution:

## French Europeana Corpus
Like German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows the tokens-per-year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year-filtered variant. The following plot shows the tokens-per-year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows the tokens-per-year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows the tokens-per-year distribution:

## All Corpora
The following plot shows the tokens-per-year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes that are used for generating the 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
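The fertility and `[UNK]` numbers above can be reproduced with a short script. The following is only a rough sketch: the corpus file name is a placeholder, and the unknown portion is computed here as the share of `[UNK]` among all subwords, which may differ slightly from the exact counting used for the tables.
```python
from transformers import AutoTokenizer

# Placeholder tokenizer and corpus names, for illustration only.
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-historic-multilingual-cased")

def fertility_and_unk(lines):
    n_words = n_subwords = n_unk = 0
    for line in lines:
        for word in line.split():
            subwords = tokenizer.tokenize(word)
            n_words += 1
            n_subwords += len(subwords)
            n_unk += subwords.count(tokenizer.unk_token)
    # Fertility = average number of subwords per whitespace-separated token.
    return n_subwords / max(n_words, 1), n_unk / max(n_subwords, 1)

with open("ner_corpus.txt", encoding="utf-8") as f:
    fertility, unk_portion = fertility_and_unk(f)
print(f"fertility: {fertility:.2f}, unknown portion: {unk_portion:.4f}")
```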
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Pretraining
## Multilingual model
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "multilingual", "license": "mit", "widget": [{"text": "and I cannot conceive the reafon why [MASK] hath"}, {"text": "T\u00e4k\u00e4l\u00e4inen sanomalehdist\u00f6 [MASK] erit - t\u00e4in"}, {"text": "Det vore [MASK] h\u00e4ller n\u00f6dv\u00e4ndigt att be"}, {"text": "Comme, \u00e0 cette \u00e9poque [MASK] \u00e9tait celle de la"}, {"text": "In [MASK] an atmosph\u00e4rischen Nahrungsmitteln"}]}
|
dbmdz/bert-base-historic-multilingual-cased
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"arxiv:2205.15575",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2205.15575"
] |
[
"multilingual"
] |
TAGS
#transformers #pytorch #jax #tensorboard #safetensors #bert #fill-mask #multilingual #arxiv-2205.15575 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
hmBERT: Historical Multilingual Language Models for Named Entity Recognition
============================================================================
More information about our hmBERT model can be found in our new paper:
"hmBERT: Historical Multilingual Language Models for Named Entity Recognition".
Languages
---------
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
Language: German, Training data: Europeana, Size: 13-28GB (filtered)
Language: French, Training data: Europeana, Size: 11-31GB (filtered)
Language: English, Training data: British Library, Size: 24GB (year filtered)
Language: Finnish, Training data: Europeana, Size: 1.2GB
Language: Swedish, Training data: Europeana, Size: 1.1GB
Smaller Models
--------------
We have also released smaller models for the multilingual model:
Corpora Stats
=============
German Europeana Corpus
-----------------------
We provide some statistics using different OCR confidence thresholds, in order to shrink down the corpus size
and use less noisy data:
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:
!German Europeana Corpus Stats
French Europeana Corpus
-----------------------
Like German, we use different OCR confidence thresholds:
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:
!French Europeana Corpus Stats
British Library Corpus
----------------------
Metadata is taken from here. Stats incl. year filtering:
We use the year filtered variant. The following plot shows a tokens per year distribution:
!British Library Corpus Stats
Finnish Europeana Corpus
------------------------
The following plot shows a tokens per year distribution:
!Finnish Europeana Corpus Stats
Swedish Europeana Corpus
------------------------
The following plot shows a tokens per year distribution:
!Swedish Europeana Corpus Stats
All Corpora
-----------
The following plot shows a tokens per year distribution of the complete training corpus:
!All Corpora Stats
Multilingual Vocab generation
=============================
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
We then calculate the subword fertility rate and portion of '[UNK]'s over the following NER corpora:
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
Language: German, Subword fertility: 1.43, Unknown portion: 0.0004
Language: French, Subword fertility: 1.25, Unknown portion: 0.0001
Language: English, Subword fertility: 1.25, Unknown portion: 0.0
Language: Finnish, Subword fertility: 1.69, Unknown portion: 0.0007
Language: Swedish, Subword fertility: 1.43, Unknown portion: 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
Language: German, Subword fertility: 1.31, Unknown portion: 0.0004
Language: French, Subword fertility: 1.16, Unknown portion: 0.0001
Language: English, Subword fertility: 1.17, Unknown portion: 0.0
Language: Finnish, Subword fertility: 1.54, Unknown portion: 0.0007
Language: Swedish, Subword fertility: 1.32, Unknown portion: 0.0
Final pretraining corpora
=========================
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
Total size is 130GB.
Pretraining
===========
Multilingual model
------------------
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
The following plot shows the pretraining loss curve:
!Training loss curve
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #safetensors #bert #fill-mask #multilingual #arxiv-2205.15575 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
# 🤗 + 📚 dbmdz BERT and ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models 🎉
# Italian BERT
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in `config.json`. However, the model is working and all
evaluations were done under those circumstances.
See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We pretty much follow the ELECTRA training procedure as used for
[BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/italian-bertelectra).
## Usage
With Transformers >= 2.3 our Italian BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the (recommended) Italian XXL BERT models, just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the Italian XXL ELECTRA model (discriminator), just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
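For a quick sanity check, the models can also be used through the fill-mask pipeline; a minimal sketch (the example sentence is chosen for illustration only):
```python
from transformers import pipeline

# Minimal sketch: masked-token prediction with the recommended XXL model.
fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-italian-xxl-cased")

# Example sentence chosen for illustration only.
for prediction in fill_mask("Umberto Eco è stato un grande [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```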
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT/ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "it", "license": "mit", "datasets": ["wikipedia"]}
|
dbmdz/bert-base-italian-cased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"it",
"dataset:wikipedia",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"it"
] |
TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #it #dataset-wikipedia #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
+ dbmdz BERT and ELECTRA models
===============================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models
Italian BERT
============
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the OPUS corpora collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the OSCAR corpus.
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in 'URL'. However, the model is working and all
evaluations were done under those circumstances.
See this issue for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We pretty much follow the ELECTRA training procedure as used for
BERTurk.
Model weights
-------------
Currently only PyTorch-Transformers
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
Results
-------
For results on downstream tasks like NER or PoS tagging, please refer to
this repository.
Usage
-----
With Transformers >= 2.3 our Italian BERT models can be loaded like:
To load the (recommended) Italian XXL BERT models, just use:
To load the Italian XXL ELECTRA model (discriminator), just use:
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our BERT/ELECTRA models just open an issue
here
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #it #dataset-wikipedia #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
fill-mask
|
transformers
|
# 🤗 + 📚 dbmdz BERT and ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models 🎉
# Italian BERT
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in `config.json`. However, the model is working and all
evaluations were done under those circumstances.
See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We pretty much follow the ELECTRA training procedure as used for
[BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/italian-bertelectra).
## Usage
With Transformers >= 2.3 our Italian BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the (recommended) Italian XXL BERT models, just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the Italian XXL ELECTRA model (discriminator), just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
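The uncased variants are expected to lowercase their input during tokenization; a small sketch comparing the cased and uncased tokenizers (the example sentence is chosen for illustration only):
```python
from transformers import AutoTokenizer

cased = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-cased")
uncased = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-uncased")

# Example sentence chosen for illustration only.
sentence = "La Divina Commedia di Dante Alighieri"
print(cased.tokenize(sentence))    # subwords keep their original casing
print(uncased.tokenize(sentence))  # subwords are lowercased
```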
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT/ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "it", "license": "mit", "datasets": ["wikipedia"]}
|
dbmdz/bert-base-italian-uncased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"it",
"dataset:wikipedia",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"it"
] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #it #dataset-wikipedia #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
+ dbmdz BERT and ELECTRA models
===============================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models
Italian BERT
============
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the OPUS corpora collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the OSCAR corpus.
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in 'URL'. However, the model is working and all
evaluations were done under those circumstances.
See this issue for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We pretty much follow the ELECTRA training procedure as used for
BERTurk.
Model weights
-------------
Currently only PyTorch-Transformers
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
Results
-------
For results on downstream tasks like NER or PoS tagging, please refer to
this repository.
Usage
-----
With Transformers >= 2.3 our Italian BERT models can be loaded like:
To load the (recommended) Italian XXL BERT models, just use:
To load the Italian XXL ELECTRA model (discriminator), just use:
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our BERT/ELECTRA models just open an issue
here
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #it #dataset-wikipedia #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
fill-mask
|
transformers
|
# 🤗 + 📚 dbmdz BERT and ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models 🎉
# Italian BERT
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in `config.json`. However, the model is working and all
evaluations were done under those circumstances.
See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We pretty much follow the ELECTRA training procedure as used for
[BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/italian-bertelectra).
## Usage
With Transformers >= 2.3 our Italian BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the (recommended) Italian XXL BERT models, just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the Italian XXL ELECTRA model (discriminator), just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
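Beyond masked-token prediction, the loaded model can be used to extract contextual sentence embeddings. The following is a minimal sketch; mean pooling over the last hidden states is just one of several reasonable pooling choices, and the example sentence is chosen for illustration only:
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Example sentence chosen for illustration only.
inputs = tokenizer("Una frase di esempio.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the last hidden states into a single sentence vector.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```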
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT/ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "it", "license": "mit", "datasets": ["wikipedia"]}
|
dbmdz/bert-base-italian-xxl-cased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"it",
"dataset:wikipedia",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"it"
] |
TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #it #dataset-wikipedia #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
+ dbmdz BERT and ELECTRA models
===============================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models
Italian BERT
============
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the OPUS corpora collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the OSCAR corpus.
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in 'URL'. However, the model is working and all
evaluations were done under those circumstances.
See this issue for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We pretty much follow the ELECTRA training procedure as used for
BERTurk.
Model weights
-------------
Currently only PyTorch-Transformers
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
Results
-------
For results on downstream tasks like NER or PoS tagging, please refer to
this repository.
Usage
-----
With Transformers >= 2.3 our Italian BERT models can be loaded like:
To load the (recommended) Italian XXL BERT models, just use:
To load the Italian XXL ELECTRA model (discriminator), just use:
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our BERT/ELECTRA models just open an issue
here
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #it #dataset-wikipedia #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
fill-mask
|
transformers
|
# 🤗 + 📚 dbmdz BERT and ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models 🎉
# Italian BERT
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in `config.json`. However, the model is working and all
evaluations were done under those circumstances.
See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We pretty much follow the ELECTRA training procedure as used for
[BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/italian-bertelectra).
## Usage
With Transformers >= 2.3 our Italian BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the (recommended) Italian XXL BERT models, just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the Italian XXL ELECTRA model (discriminator), just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT/ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "it", "license": "mit", "datasets": ["wikipedia"]}
|
dbmdz/bert-base-italian-xxl-uncased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"it",
"dataset:wikipedia",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"it"
] |
TAGS
#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #it #dataset-wikipedia #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
+ dbmdz BERT and ELECTRA models
===============================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models
Italian BERT
============
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the OPUS corpora collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the OSCAR corpus.
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in 'URL'. However, the model is working and all
evaluations were done under those circumstances.
See this issue for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We pretty much follow the ELECTRA training procedure as used for
BERTurk.
Model weights
-------------
Currently only PyTorch-Transformers
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
Results
-------
For results on downstream tasks like NER or PoS tagging, please refer to
this repository.
Usage
-----
With Transformers >= 2.3 our Italian BERT models can be loaded like:
To load the (recommended) Italian XXL BERT models, just use:
To load the Italian XXL ELECTRA model (discriminator), just use:
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our BERT/ELECTRA models just open an issue
here
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #it #dataset-wikipedia #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
fill-mask
|
transformers
|
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
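The monolingual checkpoints can be used like any other BERT model; a minimal fill-mask sketch for the Swedish model, assuming a recent `transformers` release (the example sentence is taken from this card's widget examples):
```python
from transformers import pipeline

# Minimal sketch: masked-token prediction with the Swedish Europeana model.
fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-swedish-europeana-cased")

# Example sentence taken from the widget examples of this model card.
for prediction in fill_mask("Det vore [MASK] häller nödvändigt att be"):
    print(prediction["token_str"], round(prediction["score"], 4))
```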
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink down the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Pretraining
## Multilingual model
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

## English model
The English BERT model - with texts from the British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Finnish model
The BERT model - with texts from the Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Swedish model
The BERT model - with texts from the Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "swedish", "license": "mit", "widget": [{"text": "Det vore [MASK] h\u00e4ller n\u00f6dv\u00e4ndigt att be"}]}
|
dbmdz/bert-base-swedish-europeana-cased
| null |
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"bert",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"swedish"
] |
TAGS
#transformers #pytorch #jax #tensorboard #bert #fill-mask #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Historic Language Models (HLMs)
===============================
Languages
---------
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
Language: German, Training data: Europeana, Size: 13-28GB (filtered)
Language: French, Training data: Europeana, Size: 11-31GB (filtered)
Language: English, Training data: British Library, Size: 24GB (year filtered)
Language: Finnish, Training data: Europeana, Size: 1.2GB
Language: Swedish, Training data: Europeana, Size: 1.1GB
Models
------
At the moment, the following models are available on the model hub:
Corpora Stats
=============
German Europeana Corpus
-----------------------
We provide some statistics using different OCR confidence thresholds, in order to shrink down the corpus size
and use less noisy data:
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:
!German Europeana Corpus Stats
French Europeana Corpus
-----------------------
Like German, we use different OCR confidence thresholds:
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:
!French Europeana Corpus Stats
British Library Corpus
----------------------
Metadata is taken from here. Stats incl. year filtering:
We use the year filtered variant. The following plot shows a tokens per year distribution:
!British Library Corpus Stats
Finnish Europeana Corpus
------------------------
The following plot shows a tokens per year distribution:
!Finnish Europeana Corpus Stats
Swedish Europeana Corpus
------------------------
The following plot shows a tokens per year distribution:
!Swedish Europeana Corpus Stats
All Corpora
-----------
The following plot shows a tokens per year distribution of the complete training corpus:
!All Corpora Stats
Multilingual Vocab generation
=============================
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
We then calculate the subword fertility rate and portion of '[UNK]'s over the following NER corpora:
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
Language: German, Subword fertility: 1.43, Unknown portion: 0.0004
Language: French, Subword fertility: 1.25, Unknown portion: 0.0001
Language: English, Subword fertility: 1.25, Unknown portion: 0.0
Language: Finnish, Subword fertility: 1.69, Unknown portion: 0.0007
Language: Swedish, Subword fertility: 1.43, Unknown portion: 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
Language: German, Subword fertility: 1.31, Unknown portion: 0.0004
Language: French, Subword fertility: 1.16, Unknown portion: 0.0001
Language: English, Subword fertility: 1.17, Unknown portion: 0.0
Language: Finnish, Subword fertility: 1.54, Unknown portion: 0.0007
Language: Swedish, Subword fertility: 1.32, Unknown portion: 0.0
Final pretraining corpora
=========================
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
Total size is 130GB.
Pretraining
===========
Multilingual model
------------------
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
The following plot shows the pretraining loss curve:
!Training loss curve
English model
-------------
The English BERT model - with texts from the British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Finnish model
-------------
The BERT model - with texts from the Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Swedish model
-------------
The BERT model - with texts from the Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #jax #tensorboard #bert #fill-mask #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null |
transformers
|
# 🤗 + 📚 dbmdz Turkish BERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased model for Turkish 🎉
# 🇹🇷 BERTurk
BERTurk is a community-driven cased BERT model for Turkish.
Some datasets used for pretraining and evaluation were contributed by the
awesome Turkish NLP community, which also chose the model name: BERTurk.
## Stats
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/),
a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a
special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).
The final training corpus has a size of 35GB and 4,404,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-8 for 2M steps.
For this model we use a vocab size of 128k.
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ------------------------------------ | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-turkish-128k-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-cased/vocab.txt)
## Usage
With Transformers >= 2.3 our BERTurk cased model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-128k-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-128k-cased")
```
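Beyond loading the raw encoder, the same checkpoint can also be used for masked-token prediction. The following is only a minimal sketch with a recent Transformers version, assuming the checkpoint ships with its pretrained MLM head (as standard BERT checkpoints do); the example sentence is purely illustrative:

```python
from transformers import pipeline

# Minimal sketch: fill-mask with the cased 128k BERTurk model.
fill_mask = pipeline(
    "fill-mask",
    model="dbmdz/bert-base-turkish-128k-cased",
    tokenizer="dbmdz/bert-base-turkish-128k-cased",
)

# Illustrative Turkish sentence with one [MASK] token.
for prediction in fill_mask("Türkiye'nin başkenti [MASK] şehridir."):
    print(prediction["sequence"], prediction["score"])
```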
## Results
For results on PoS tagging or NER tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "tr", "license": "mit"}
|
dbmdz/bert-base-turkish-128k-cased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"tr",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tf #jax #bert #tr #license-mit #endpoints_compatible #has_space #region-us
|
+ dbmdz Turkish BERT model
==========================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased model for Turkish
🇹🇷 BERTurk
==========
BERTurk is a community-driven cased BERT model for Turkish.
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the model name: BERTurk.
Stats
-----
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish OSCAR corpus,
a recent Wikipedia dump, various OPUS corpora and a
special corpus provided by Kemal Oflazer.
The final training corpus has a size of 35GB and 4,404,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-8 for 2M steps.
For this model we use a vocab size of 128k.
Model weights
-------------
Currently only PyTorch-Transformers
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
Usage
-----
With Transformers >= 2.3 our BERTurk cased model can be loaded like:
Results
-------
For results on PoS tagging or NER tasks, please refer to
this repository.
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our BERT models just open an issue
here
Acknowledgments
===============
Thanks to Kemal Oflazer for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #tr #license-mit #endpoints_compatible #has_space #region-us \n"
] |
null |
transformers
|
# 🤗 + 📚 dbmdz Turkish BERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources an uncased model for Turkish 🎉
# 🇹🇷 BERTurk
BERTurk is a community-driven uncased BERT model for Turkish.
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the model name: BERTurk.
## Stats
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/),
a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a
special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).
The final training corpus has a size of 35GB and 4,404,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train an uncased model
on a TPU v3-8 for 2M steps.
For this model we use a vocab size of 128k.
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| -------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-turkish-128k-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-128k-uncased/vocab.txt)
## Usage
With Transformers >= 2.3 our BERTurk uncased model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-128k-uncased")
model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-128k-uncased")
```
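To check how the uncased vocabulary treats raw input, here is a short sketch with a recent Transformers version (the sentence is only an example; the uncased tokenizer is expected to handle lowercasing itself):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-128k-uncased")

# Illustrative input; the uncased tokenizer should lowercase it internally.
print(tokenizer.tokenize("Merhaba Dünya"))

# Encode as model-ready tensors.
encoded = tokenizer("Merhaba Dünya", return_tensors="pt")
print(encoded["input_ids"])
```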
## Results
For results on PoS tagging or NER tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "tr", "license": "mit"}
|
dbmdz/bert-base-turkish-128k-uncased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"tr",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tf #jax #bert #tr #license-mit #endpoints_compatible #has_space #region-us
|
+ dbmdz Turkish BERT model
==========================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources an uncased model for Turkish
🇹🇷 BERTurk
==========
BERTurk is a community-driven uncased BERT model for Turkish.
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the model name: BERTurk.
Stats
-----
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish OSCAR corpus,
a recent Wikipedia dump, various OPUS corpora and a
special corpus provided by Kemal Oflazer.
The final training corpus has a size of 35GB and 4,404,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train an uncased model
on a TPU v3-8 for 2M steps.
For this model we use a vocab size of 128k.
Model weights
-------------
Currently only PyTorch-Transformers
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
Usage
-----
With Transformers >= 2.3 our BERTurk uncased model can be loaded like:
Results
-------
For results on PoS tagging or NER tasks, please refer to
this repository.
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our BERT models just open an issue
here
Acknowledgments
===============
Thanks to Kemal Oflazer for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #tr #license-mit #endpoints_compatible #has_space #region-us \n"
] |
null |
transformers
|
# 🤗 + 📚 dbmdz Turkish BERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased model for Turkish 🎉
# 🇹🇷 BERTurk
BERTurk is a community-driven cased BERT model for Turkish.
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the model name: BERTurk.
## Stats
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/),
a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a
special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).
The final training corpus has a size of 35GB and 4,404,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-8 for 2M steps.
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| --------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-turkish-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-cased/vocab.txt)
## Usage
With Transformers >= 2.3 our BERTurk cased model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-cased")
```
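A short follow-up sketch for extracting contextual embeddings with a recent Transformers version (the sentence is only illustrative):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-cased")

# Illustrative Turkish sentence.
inputs = tokenizer("Bugün hava çok güzel.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per subword token: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```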
## Results
For results on PoS tagging or NER tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "tr", "license": "mit"}
|
dbmdz/bert-base-turkish-cased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"tr",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tf #jax #bert #tr #license-mit #endpoints_compatible #has_space #region-us
|
+ dbmdz Turkish BERT model
==========================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased model for Turkish
🇹🇷 BERTurk
==========
BERTurk is a community-driven cased BERT model for Turkish.
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the model name: BERTurk.
Stats
-----
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish OSCAR corpus,
a recent Wikipedia dump, various OPUS corpora and a
special corpus provided by Kemal Oflazer.
The final training corpus has a size of 35GB and 4,404,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-8 for 2M steps.
Model weights
-------------
Currently only PyTorch-Transformers
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
Usage
-----
With Transformers >= 2.3 our BERTurk cased model can be loaded like:
Results
-------
For results on PoS tagging or NER tasks, please refer to
this repository.
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our BERT models just open an issue
here
Acknowledgments
===============
Thanks to Kemal Oflazer for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #tr #license-mit #endpoints_compatible #has_space #region-us \n"
] |
null |
transformers
|
# 🤗 + 📚 dbmdz Turkish BERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources an uncased model for Turkish 🎉
# 🇹🇷 BERTurk
BERTurk is a community-driven uncased BERT model for Turkish.
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the model name: BERTurk.
## Stats
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/),
a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a
special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).
The final training corpus has a size of 35GB and 4,404,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train an uncased model
on a TPU v3-8 for 2M steps.
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| --------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-turkish-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-turkish-uncased/vocab.txt)
## Usage
With Transformers >= 2.3 our BERTurk uncased model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-uncased")
model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-uncased")
```
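For masked-token prediction with a recent Transformers version, a minimal sketch (assuming the pretrained MLM head is part of the checkpoint; the lowercased example sentence is purely illustrative):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-turkish-uncased")

# Illustrative, already lowercased input for the uncased model.
for prediction in fill_mask("istanbul türkiye'nin en büyük [MASK] olarak bilinir."):
    print(prediction["sequence"], prediction["score"])
```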
## Results
For results on PoS tagging or NER tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "tr", "license": "mit"}
|
dbmdz/bert-base-turkish-uncased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"tr",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tf #jax #bert #tr #license-mit #endpoints_compatible #has_space #region-us
|
+ dbmdz Turkish BERT model
==========================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources an uncased model for Turkish
🇹🇷 BERTurk
==========
BERTurk is a community-driven uncased BERT model for Turkish.
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the model name: BERTurk.
Stats
-----
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish OSCAR corpus,
a recent Wikipedia dump, various OPUS corpora and a
special corpus provided by Kemal Oflazer.
The final training corpus has a size of 35GB and 4,404,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train an uncased model
on a TPU v3-8 for 2M steps.
Model weights
-------------
Currently only PyTorch-Transformers
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
Usage
-----
With Transformers >= 2.3 our BERTurk uncased model can be loaded like:
Results
-------
For results on PoS tagging or NER tasks, please refer to
this repository.
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our BERT models just open an issue
here
Acknowledgments
===============
Thanks to Kemal Oflazer for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #tr #license-mit #endpoints_compatible #has_space #region-us \n"
] |
fill-mask
|
transformers
|
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
We also released smaller models for the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased)
**Notice**: We have released language models for Historic German and French trained on noisier data earlier - see
[this repo](https://github.com/stefan-it/europeana-bert) for more information:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-german-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-german-europeana-cased)
| `dbmdz/bert-base-french-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-french-europeana-cased)
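All checkpoints above follow the standard BERT interface, so they can be loaded directly with Transformers. A minimal sketch with a recent Transformers version, using the multilingual base model (the example sentence mirrors the historic spelling of the training data and is purely illustrative):

```python
from transformers import pipeline

# Any model identifier from the tables above can be substituted here.
fill_mask = pipeline("fill-mask", model="dbmdz/bert-base-historic-multilingual-cased")

# Illustrative sentence with historic spelling ("reafon" = "reason").
print(fill_mask("and I cannot conceive the reafon why [MASK] hath")[0])
```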
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
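As a rough illustration of how these two metrics can be computed (one plausible definition: subwords per whitespace-separated word, and the share of `[UNK]` subwords), here is a small sketch; the tokenizer identifier and the input sentence are placeholders, while the reported numbers were computed over the NER corpora listed above:

```python
from transformers import AutoTokenizer

def fertility_and_unk(tokenizer, sentences):
    """Return (subwords per whitespace word, fraction of [UNK] subwords)."""
    n_words = n_subwords = n_unk = 0
    for sentence in sentences:
        for word in sentence.split():
            subwords = tokenizer.tokenize(word)
            n_words += 1
            n_subwords += len(subwords)
            n_unk += sum(1 for s in subwords if s == tokenizer.unk_token)
    return n_subwords / n_words, n_unk / n_subwords

# Placeholder tokenizer and data.
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-historic-multilingual-cased")
print(fertility_and_unk(tokenizer, ["Täkäläinen sanomalehdistö kirjoitti asiasta eri tavoin."]))
```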
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Smaller multilingual models
Inspired by the ["Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"](https://arxiv.org/abs/1908.08962)
paper, we train smaller models (with different numbers of layers and hidden sizes), and report the number of parameters and pre-training costs:
| Model (Layer / Hidden size) | Parameters | Pre-Training time
| --------------------------- | ----------: | ----------------------:
| hmBERT Tiny ( 2/128) | 4.58M | 4.3 sec / 1,000 steps
| hmBERT Mini ( 4/256) | 11.55M | 10.5 sec / 1,000 steps
| hmBERT Small ( 4/512) | 29.52M | 20.7 sec / 1,000 steps
| hmBERT Medium ( 8/512) | 42.13M | 35.0 sec / 1,000 steps
| hmBERT Base (12/768) | 110.62M | 80.0 sec / 1,000 steps
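The parameter counts above can be reproduced fairly closely from the architecture sizes alone. A back-of-the-envelope sketch, assuming the 32k vocab, 512 positions, two token types and the usual BERT encoder plus pooler (MLM head excluded):

```python
def bert_params(layers, hidden, vocab=32_000, max_pos=512, types=2):
    """Rough BERT parameter count: embeddings + encoder layers + pooler."""
    embeddings = (vocab + max_pos + types) * hidden + 2 * hidden  # incl. embedding LayerNorm
    attention = 4 * (hidden * hidden + hidden)                    # Q, K, V and output projection
    ffn = 2 * (hidden * 4 * hidden) + 4 * hidden + hidden         # intermediate + output
    layer = attention + ffn + 2 * (2 * hidden)                    # plus two LayerNorms
    pooler = hidden * hidden + hidden
    return embeddings + layers * layer + pooler

for name, n_layers, hidden in [("Tiny", 2, 128), ("Mini", 4, 256), ("Small", 4, 512),
                               ("Medium", 8, 512), ("Base", 12, 768)]:
    print(f"hmBERT {name}: ~{bert_params(n_layers, hidden) / 1e6:.2f}M parameters")
```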
We then perform downstream evaluations on the multilingual [NewsEye](https://zenodo.org/record/4573313#.Ya3oVr-ZNzU) dataset:

# Pretraining
## Multilingual model - hmBERT Base
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

## Smaller multilingual models
We use the same parameters as used for training the base model.
### hmBERT Tiny
The following plot shows the pretraining loss curve for the tiny model:

### hmBERT Mini
The following plot shows the pretraining loss curve for the mini model:

### hmBERT Small
The following plot shows the pretraining loss curve for the small model:

### hmBERT Medium
The following plot shows the pretraining loss curve for the medium model:

## English model
The English BERT model - with texts from British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Finnish model
The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Swedish model
The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "multilingual", "license": "mit", "widget": [{"text": "and I cannot conceive the reafon why [MASK] hath"}, {"text": "T\u00e4k\u00e4l\u00e4inen sanomalehdist\u00f6 [MASK] erit - t\u00e4in"}, {"text": "Det vore [MASK] h\u00e4ller n\u00f6dv\u00e4ndigt att be"}, {"text": "Comme, \u00e0 cette \u00e9poque [MASK] \u00e9tait celle de la"}, {"text": "In [MASK] an atmosph\u00e4rischen Nahrungsmitteln"}]}
|
dbmdz/bert-medium-historic-multilingual-cased
| null |
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"arxiv:1908.08962",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1908.08962"
] |
[
"multilingual"
] |
TAGS
#transformers #pytorch #tf #tensorboard #safetensors #bert #fill-mask #multilingual #arxiv-1908.08962 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Historic Language Models (HLMs)
===============================
Languages
---------
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
Language: German, Training data: Europeana, Size: 13-28GB (filtered)
Language: French, Training data: Europeana, Size: 11-31GB (filtered)
Language: English, Training data: British Library, Size: 24GB (year filtered)
Language: Finnish, Training data: Europeana, Size: 1.2GB
Language: Swedish, Training data: Europeana, Size: 1.1GB
Models
------
At the moment, the following models are available on the model hub:
We also released smaller models for the multilingual model:
Notice: We have released language models for Historic German and French trained on noisier data earlier - see
this repo for more information:
Corpora Stats
=============
German Europeana Corpus
-----------------------
We provide some statistics using different OCR confidence thresholds, in order to shrink the corpus size
and use less noisy data:
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:
!German Europeana Corpus Stats
French Europeana Corpus
-----------------------
Like German, we use different OCR confidence thresholds:
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:
!French Europeana Corpus Stats
British Library Corpus
----------------------
Metadata is taken from here. Stats incl. year filtering:
We use the year filtered variant. The following plot shows a tokens per year distribution:
!British Library Corpus Stats
Finnish Europeana Corpus
------------------------
The following plot shows a tokens per year distribution:
!Finnish Europeana Corpus Stats
Swedish Europeana Corpus
------------------------
The following plot shows a tokens per year distribution:
!Swedish Europeana Corpus Stats
All Corpora
-----------
The following plot shows a tokens per year distribution of the complete training corpus:
!All Corpora Stats
Multilingual Vocab generation
=============================
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
We then calculate the subword fertility rate and portion of '[UNK]'s over the following NER corpora:
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
Language: German, Subword fertility: 1.43, Unknown portion: 0.0004
Language: French, Subword fertility: 1.25, Unknown portion: 0.0001
Language: English, Subword fertility: 1.25, Unknown portion: 0.0
Language: Finnish, Subword fertility: 1.69, Unknown portion: 0.0007
Language: Swedish, Subword fertility: 1.43, Unknown portion: 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
Language: German, Subword fertility: 1.31, Unknown portion: 0.0004
Language: French, Subword fertility: 1.16, Unknown portion: 0.0001
Language: English, Subword fertility: 1.17, Unknown portion: 0.0
Language: Finnish, Subword fertility: 1.54, Unknown portion: 0.0007
Language: Swedish, Subword fertility: 1.32, Unknown portion: 0.0
Final pretraining corpora
=========================
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
Total size is 130GB.
Smaller multilingual models
===========================
Inspired by the "Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"
paper, we train smaller models (with different numbers of layers and hidden sizes), and report the number of parameters and pre-training costs:
We then perform downstream evaluations on the multilingual NewsEye dataset:
!NewsEye hmBERT Evaluation
Pretraining
===========
Multilingual model - hmBERT Base
--------------------------------
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
The following plot shows the pretraining loss curve:
!Training loss curve
Smaller multilingual models
---------------------------
We use the same parameters as used for training the base model.
### hmBERT Tiny
The following plot shows the pretraining loss curve for the tiny model:
!Training loss curve
### hmBERT Mini
The following plot shows the pretraining loss curve for the mini model:
!Training loss curve
### hmBERT Small
The following plot shows the pretraining loss curve for the small model:
!Training loss curve
### hmBERT Medium
The following plot shows the pretraining loss curve for the medium model:
!Training loss curve
English model
-------------
The English BERT model - with texts from British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Finnish model
-------------
The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Swedish model
-------------
The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[
"### hmBERT Tiny\n\n\nThe following plot shows the pretraining loss curve for the tiny model:\n\n\n!Training loss curve",
"### hmBERT Mini\n\n\nThe following plot shows the pretraining loss curve for the mini model:\n\n\n!Training loss curve",
"### hmBERT Small\n\n\nThe following plot shows the pretraining loss curve for the small model:\n\n\n!Training loss curve",
"### hmBERT Medium\n\n\nThe following plot shows the pretraining loss curve for the medium model:\n\n\n!Training loss curve\n\n\nEnglish model\n-------------\n\n\nThe English BERT model - with texts from British Library corpus - was trained with the Hugging Face\nJAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nFinnish model\n-------------\n\n\nThe BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nSwedish model\n-------------\n\n\nThe BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nAcknowledgments\n===============\n\n\nResearch supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as\nTensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ️\n\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
[
"TAGS\n#transformers #pytorch #tf #tensorboard #safetensors #bert #fill-mask #multilingual #arxiv-1908.08962 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### hmBERT Tiny\n\n\nThe following plot shows the pretraining loss curve for the tiny model:\n\n\n!Training loss curve",
"### hmBERT Mini\n\n\nThe following plot shows the pretraining loss curve for the mini model:\n\n\n!Training loss curve",
"### hmBERT Small\n\n\nThe following plot shows the pretraining loss curve for the small model:\n\n\n!Training loss curve",
"### hmBERT Medium\n\n\nThe following plot shows the pretraining loss curve for the medium model:\n\n\n!Training loss curve\n\n\nEnglish model\n-------------\n\n\nThe English BERT model - with texts from British Library corpus - was trained with the Hugging Face\nJAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nFinnish model\n-------------\n\n\nThe BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nSwedish model\n-------------\n\n\nThe BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nAcknowledgments\n===============\n\n\nResearch supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as\nTensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ️\n\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
fill-mask
|
transformers
|
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
We also released smaller models for the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased)
**Notice**: We have released language models for Historic German and French trained on noisier data earlier - see
[this repo](https://github.com/stefan-it/europeana-bert) for more information:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-german-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-german-europeana-cased)
| `dbmdz/bert-base-french-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-french-europeana-cased)
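A minimal loading sketch with a recent Transformers version; any identifier from the tables above can be substituted, and the example sentence is taken from the widget examples of this card:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "dbmdz/bert-mini-historic-multilingual-cased"  # or any identifier from the tables above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Illustrative historic French sentence with one [MASK] token.
inputs = tokenizer("Comme, à cette époque [MASK] était celle de la", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size)
```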
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
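For a quick sanity check of these numbers, fertility and the unknown portion can be estimated on a single sentence; a small sketch (the checkpoint and the historic Swedish example, borrowed from the widget examples, are placeholders for the full NER corpora above):

```python
from transformers import AutoTokenizer

# Placeholder checkpoint; the reported numbers were measured over the NER corpora above.
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-mini-historic-multilingual-cased")

sentence = "Det vore häller nödvändigt att be"  # illustrative historic Swedish
words = sentence.split()
subwords = [s for w in words for s in tokenizer.tokenize(w)]

print(len(subwords) / len(words))                                       # subword fertility
print(sum(s == tokenizer.unk_token for s in subwords) / len(subwords))  # unknown portion
```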
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Smaller multilingual models
Inspired by the ["Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"](https://arxiv.org/abs/1908.08962)
paper, we train smaller models (with different numbers of layers and hidden sizes), and report the number of parameters and pre-training costs:
| Model (Layer / Hidden size) | Parameters | Pre-Training time
| --------------------------- | ----------: | ----------------------:
| hmBERT Tiny ( 2/128) | 4.58M | 4.3 sec / 1,000 steps
| hmBERT Mini ( 4/256) | 11.55M | 10.5 sec / 1,000 steps
| hmBERT Small ( 4/512) | 29.52M | 20.7 sec / 1,000 steps
| hmBERT Medium ( 8/512) | 42.13M | 35.0 sec / 1,000 steps
| hmBERT Base (12/768) | 110.62M | 80.0 sec / 1,000 steps
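The pre-training time column translates into wall-clock cost once a step budget is fixed; a small sketch, assuming the same 3M-step budget that was used for the base model:

```python
# Seconds per 1,000 steps, taken from the table above.
cost_per_1k_steps = {"Tiny": 4.3, "Mini": 10.5, "Small": 20.7, "Medium": 35.0, "Base": 80.0}
train_steps = 3_000_000  # assumption: same step budget as the base model

for name, seconds in cost_per_1k_steps.items():
    hours = seconds * (train_steps / 1_000) / 3_600
    print(f"hmBERT {name}: ~{hours:.1f} hours (~{hours / 24:.1f} days)")
```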
We then perform downstream evaluations on the multilingual [NewsEye](https://zenodo.org/record/4573313#.Ya3oVr-ZNzU) dataset:

# Pretraining
## Multilingual model - hmBERT Base
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

## Smaller multilingual models
We use the same parameters as used for training the base model.
### hmBERT Tiny
The following plot shows the pretraining loss curve for the tiny model:

### hmBERT Mini
The following plot shows the pretraining loss curve for the mini model:

### hmBERT Small
The following plot shows the pretraining loss curve for the small model:

### hmBERT Medium
The following plot shows the pretraining loss curve for the medium model:

## English model
The English BERT model - with texts from British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Finnish model
The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Swedish model
The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "multilingual", "license": "mit", "widget": [{"text": "and I cannot conceive the reafon why [MASK] hath"}, {"text": "T\u00e4k\u00e4l\u00e4inen sanomalehdist\u00f6 [MASK] erit - t\u00e4in"}, {"text": "Det vore [MASK] h\u00e4ller n\u00f6dv\u00e4ndigt att be"}, {"text": "Comme, \u00e0 cette \u00e9poque [MASK] \u00e9tait celle de la"}, {"text": "In [MASK] an atmosph\u00e4rischen Nahrungsmitteln"}]}
|
dbmdz/bert-mini-historic-multilingual-cased
| null |
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"arxiv:1908.08962",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1908.08962"
] |
[
"multilingual"
] |
TAGS
#transformers #pytorch #tf #tensorboard #safetensors #bert #fill-mask #multilingual #arxiv-1908.08962 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Historic Language Models (HLMs)
===============================
Languages
---------
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
Language: German, Training data: Europeana, Size: 13-28GB (filtered)
Language: French, Training data: Europeana, Size: 11-31GB (filtered)
Language: English, Training data: British Library, Size: 24GB (year filtered)
Language: Finnish, Training data: Europeana, Size: 1.2GB
Language: Swedish, Training data: Europeana, Size: 1.1GB
Models
------
At the moment, the following models are available on the model hub:
We also released smaller models for the multilingual model:
Notice: We have released language models for Historic German and French trained on noisier data earlier - see
this repo for more information:
Corpora Stats
=============
German Europeana Corpus
-----------------------
We provide some statistics using different OCR confidence thresholds, in order to shrink the corpus size
and use less noisy data:
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:
!German Europeana Corpus Stats
French Europeana Corpus
-----------------------
Like German, we use different OCR confidence thresholds:
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:
!French Europeana Corpus Stats
British Library Corpus
----------------------
Metadata is taken from here. Stats incl. year filtering:
We use the year filtered variant. The following plot shows a tokens per year distribution:
!British Library Corpus Stats
Finnish Europeana Corpus
------------------------
The following plot shows a tokens per year distribution:
!Finnish Europeana Corpus Stats
Swedish Europeana Corpus
------------------------
The following plot shows a tokens per year distribution:
!Swedish Europeana Corpus Stats
All Corpora
-----------
The following plot shows a tokens per year distribution of the complete training corpus:
!All Corpora Stats
Multilingual Vocab generation
=============================
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
We then calculate the subword fertility rate and portion of '[UNK]'s over the following NER corpora:
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
Language: German, Subword fertility: 1.43, Unknown portion: 0.0004
Language: French, Subword fertility: 1.25, Unknown portion: 0.0001
Language: English, Subword fertility: 1.25, Unknown portion: 0.0
Language: Finnish, Subword fertility: 1.69, Unknown portion: 0.0007
Language: Swedish, Subword fertility: 1.43, Unknown portion: 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
Language: German, Subword fertility: 1.31, Unknown portion: 0.0004
Language: French, Subword fertility: 1.16, Unknown portion: 0.0001
Language: English, Subword fertility: 1.17, Unknown portion: 0.0
Language: Finnish, Subword fertility: 1.54, Unknown portion: 0.0007
Language: Swedish, Subword fertility: 1.32, Unknown portion: 0.0
Final pretraining corpora
=========================
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
Total size is 130GB.
Smaller multilingual models
===========================
Inspired by the "Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"
paper, we train smaller models (with different numbers of layers and hidden sizes), and report the number of parameters and pre-training costs:
We then perform downstream evaluations on the multilingual NewsEye dataset:
!NewsEye hmBERT Evaluation
Pretraining
===========
Multilingual model - hmBERT Base
--------------------------------
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
The following plot shows the pretraining loss curve:
!Training loss curve
Smaller multilingual models
---------------------------
We use the same parameters as used for training the base model.
### hmBERT Tiny
The following plot shows the pretraining loss curve for the tiny model:
!Training loss curve
### hmBERT Mini
The following plot shows the pretraining loss curve for the mini model:
!Training loss curve
### hmBERT Small
The following plot shows the pretraining loss curve for the small model:
!Training loss curve
### hmBERT Medium
The following plot shows the pretraining loss curve for the medium model:
!Training loss curve
English model
-------------
The English BERT model - with texts from British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Finnish model
-------------
The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Swedish model
-------------
The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[
"### hmBERT Tiny\n\n\nThe following plot shows the pretraining loss curve for the tiny model:\n\n\n!Training loss curve",
"### hmBERT Mini\n\n\nThe following plot shows the pretraining loss curve for the mini model:\n\n\n!Training loss curve",
"### hmBERT Small\n\n\nThe following plot shows the pretraining loss curve for the small model:\n\n\n!Training loss curve",
"### hmBERT Medium\n\n\nThe following plot shows the pretraining loss curve for the medium model:\n\n\n!Training loss curve\n\n\nEnglish model\n-------------\n\n\nThe English BERT model - with texts from British Library corpus - was trained with the Hugging Face\nJAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nFinnish model\n-------------\n\n\nThe BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nSwedish model\n-------------\n\n\nThe BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nAcknowledgments\n===============\n\n\nResearch supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as\nTensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ️\n\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
[
"TAGS\n#transformers #pytorch #tf #tensorboard #safetensors #bert #fill-mask #multilingual #arxiv-1908.08962 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### hmBERT Tiny\n\n\nThe following plot shows the pretraining loss curve for the tiny model:\n\n\n!Training loss curve",
"### hmBERT Mini\n\n\nThe following plot shows the pretraining loss curve for the mini model:\n\n\n!Training loss curve",
"### hmBERT Small\n\n\nThe following plot shows the pretraining loss curve for the small model:\n\n\n!Training loss curve",
"### hmBERT Medium\n\n\nThe following plot shows the pretraining loss curve for the medium model:\n\n\n!Training loss curve\n\n\nEnglish model\n-------------\n\n\nThe English BERT model - with texts from British Library corpus - was trained with the Hugging Face\nJAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nFinnish model\n-------------\n\n\nThe BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nSwedish model\n-------------\n\n\nThe BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nAcknowledgments\n===============\n\n\nResearch supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as\nTensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ️\n\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
fill-mask
|
transformers
|
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
We also released smaller models for the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased)
**Notice**: We have released language models for Historic German and French trained on noisier data earlier - see
[this repo](https://github.com/stefan-it/europeana-bert) for more information:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-german-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-german-europeana-cased)
| `dbmdz/bert-base-french-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-french-europeana-cased)
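All checkpoints listed above can be loaded with the standard `transformers` Auto classes or pipelines. As a minimal sketch, a fill-mask query against the base multilingual model (the example sentence is taken from this card's widget examples) could look like:

```python
from transformers import pipeline

# Any of the model identifiers from the tables above can be used here.
fill_mask = pipeline(
    "fill-mask",
    model="dbmdz/bert-base-historic-multilingual-cased",
)

# Historic English example with the original long-s spelling ("reafon").
for prediction in fill_mask("and I cannot conceive the reafon why [MASK] hath"):
    print(f"{prediction['token_str']:15s} {prediction['score']:.4f}")
```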
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink down the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
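The fertility and `[UNK]` numbers above can be recomputed with the released tokenizer; a small sketch (the word list is only a placeholder for the whitespace-tokenized NER corpora, and the unknown portion here is measured per subword) looks like:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-historic-multilingual-cased")

# Placeholder tokens; in practice iterate over the CLEF-HIPE / NewsEye corpora.
words = ["Täkäläinen", "sanomalehdistö", "kirjoittaa", "erittäin", "paljon"]

n_subwords = 0
n_unknown = 0
for word in words:
    pieces = tokenizer.tokenize(word)
    n_subwords += len(pieces)
    n_unknown += pieces.count(tokenizer.unk_token)

print("Subword fertility:", n_subwords / len(words))
print("Unknown portion:  ", n_unknown / n_subwords)
```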
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Smaller multilingual models
Inspired by the ["Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"](https://arxiv.org/abs/1908.08962)
paper, we train smaller models (different layers and hidden sizes), and report number of parameters and pre-training costs:
| Model (Layer / Hidden size) | Parameters | Pre-Training time
| --------------------------- | ----------: | ----------------------:
| hmBERT Tiny ( 2/128) | 4.58M | 4.3 sec / 1,000 steps
| hmBERT Mini ( 4/256) | 11.55M | 10.5 sec / 1,000 steps
| hmBERT Small ( 4/512) | 29.52M | 20.7 sec / 1,000 steps
| hmBERT Medium ( 8/512) | 42.13M | 35.0 sec / 1,000 steps
| hmBERT Base (12/768) | 110.62M | 80.0 sec / 1,000 steps
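The parameter counts in the table can be reproduced from the published checkpoints; note that the numbers may differ slightly depending on whether the MLM head is included:

```python
from transformers import AutoModel

checkpoints = [
    "dbmdz/bert-tiny-historic-multilingual-cased",
    "dbmdz/bert-mini-historic-multilingual-cased",
    "dbmdz/bert-small-historic-multilingual-cased",
    "dbmdz/bert-medium-historic-multilingual-cased",
    "dbmdz/bert-base-historic-multilingual-cased",
]

for checkpoint in checkpoints:
    model = AutoModel.from_pretrained(checkpoint)  # encoder only, without the MLM head
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{checkpoint}: {n_params / 1e6:.2f}M parameters")
```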
We then perform downstream evaluations on the multilingual [NewsEye](https://zenodo.org/record/4573313#.Ya3oVr-ZNzU) dataset:

# Pretraining
## Multilingual model - hmBERT Base
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

## Smaller multilingual models
We use the same parameters as used for training the base model.
### hmBERT Tiny
The following plot shows the pretraining loss curve for the tiny model:

### hmBERT Mini
The following plot shows the pretraining loss curve for the mini model:

### hmBERT Small
The following plot shows the pretraining loss curve for the small model:

### hmBERT Medium
The following plot shows the pretraining loss curve for the medium model:

## English model
The English BERT model - with texts from British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Finnish model
The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Swedish model
The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "multilingual", "license": "mit", "widget": [{"text": "and I cannot conceive the reafon why [MASK] hath"}, {"text": "T\u00e4k\u00e4l\u00e4inen sanomalehdist\u00f6 [MASK] erit - t\u00e4in"}, {"text": "Det vore [MASK] h\u00e4ller n\u00f6dv\u00e4ndigt att be"}, {"text": "Comme, \u00e0 cette \u00e9poque [MASK] \u00e9tait celle de la"}, {"text": "In [MASK] an atmosph\u00e4rischen Nahrungsmitteln"}]}
|
dbmdz/bert-small-historic-multilingual-cased
| null |
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"arxiv:1908.08962",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1908.08962"
] |
[
"multilingual"
] |
TAGS
#transformers #pytorch #tf #tensorboard #safetensors #bert #fill-mask #multilingual #arxiv-1908.08962 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Historic Language Models (HLMs)
===============================
Languages
---------
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
Language: German, Training data: Europeana, Size: 13-28GB (filtered)
Language: French, Training data: Europeana, Size: 11-31GB (filtered)
Language: English, Training data: British Library, Size: 24GB (year filtered)
Language: Finnish, Training data: Europeana, Size: 1.2GB
Language: Swedish, Training data: Europeana, Size: 1.1GB
Models
------
At the moment, the following models are available on the model hub:
We also released smaller models for the multilingual model:
Notice: We have released language models for Historic German and French trained on noisier data earlier - see
this repo for more information:
Corpora Stats
=============
German Europeana Corpus
-----------------------
We provide some statistics using different OCR confidence thresholds, in order to shrink down the corpus size
and use less noisy data:
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:
!German Europeana Corpus Stats
French Europeana Corpus
-----------------------
Like German, we use different OCR confidence thresholds:
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:
!French Europeana Corpus Stats
British Library Corpus
----------------------
Metadata is taken from here. Stats incl. year filtering:
We use the year filtered variant. The following plot shows a tokens per year distribution:
!British Library Corpus Stats
Finnish Europeana Corpus
------------------------
The following plot shows a tokens per year distribution:
!Finnish Europeana Corpus Stats
Swedish Europeana Corpus
------------------------
The following plot shows a tokens per year distribution:
!Swedish Europeana Corpus Stats
All Corpora
-----------
The following plot shows a tokens per year distribution of the complete training corpus:
!All Corpora Stats
Multilingual Vocab generation
=============================
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
We then calculate the subword fertility rate and portion of '[UNK]'s over the following NER corpora:
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
Language: German, Subword fertility: 1.43, Unknown portion: 0.0004
Language: French, Subword fertility: 1.25, Unknown portion: 0.0001
Language: English, Subword fertility: 1.25, Unknown portion: 0.0
Language: Finnish, Subword fertility: 1.69, Unknown portion: 0.0007
Language: Swedish, Subword fertility: 1.43, Unknown portion: 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
Language: German, Subword fertility: 1.31, Unknown portion: 0.0004
Language: French, Subword fertility: 1.16, Unknown portion: 0.0001
Language: English, Subword fertility: 1.17, Unknown portion: 0.0
Language: Finnish, Subword fertility: 1.54, Unknown portion: 0.0007
Language: Swedish, Subword fertility: 1.32, Unknown portion: 0.0
Final pretraining corpora
=========================
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
Total size is 130GB.
Smaller multilingual models
===========================
Inspired by the "Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"
paper, we train smaller models (different layers and hidden sizes), and report number of parameters and pre-training costs:
We then perform downstream evaluations on the multilingual NewsEye dataset:
!NewsEye hmBERT Evaluation
Pretraining
===========
Multilingual model - hmBERT Base
--------------------------------
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
The following plot shows the pretraining loss curve:
!Training loss curve
Smaller multilingual models
---------------------------
We use the same parameters as used for training the base model.
### hmBERT Tiny
The following plot shows the pretraining loss curve for the tiny model:
!Training loss curve
### hmBERT Mini
The following plot shows the pretraining loss curve for the mini model:
!Training loss curve
### hmBERT Small
The following plot shows the pretraining loss curve for the small model:
!Training loss curve
### hmBERT Medium
The following plot shows the pretraining loss curve for the medium model:
!Training loss curve
English model
-------------
The English BERT model - with texts from British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Finnish model
-------------
The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Swedish model
-------------
The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[
"### hmBERT Tiny\n\n\nThe following plot shows the pretraining loss curve for the tiny model:\n\n\n!Training loss curve",
"### hmBERT Mini\n\n\nThe following plot shows the pretraining loss curve for the mini model:\n\n\n!Training loss curve",
"### hmBERT Small\n\n\nThe following plot shows the pretraining loss curve for the small model:\n\n\n!Training loss curve",
"### hmBERT Medium\n\n\nThe following plot shows the pretraining loss curve for the medium model:\n\n\n!Training loss curve\n\n\nEnglish model\n-------------\n\n\nThe English BERT model - with texts from British Library corpus - was trained with the Hugging Face\nJAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nFinnish model\n-------------\n\n\nThe BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nSwedish model\n-------------\n\n\nThe BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nAcknowledgments\n===============\n\n\nResearch supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as\nTensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ️\n\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
[
"TAGS\n#transformers #pytorch #tf #tensorboard #safetensors #bert #fill-mask #multilingual #arxiv-1908.08962 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### hmBERT Tiny\n\n\nThe following plot shows the pretraining loss curve for the tiny model:\n\n\n!Training loss curve",
"### hmBERT Mini\n\n\nThe following plot shows the pretraining loss curve for the mini model:\n\n\n!Training loss curve",
"### hmBERT Small\n\n\nThe following plot shows the pretraining loss curve for the small model:\n\n\n!Training loss curve",
"### hmBERT Medium\n\n\nThe following plot shows the pretraining loss curve for the medium model:\n\n\n!Training loss curve\n\n\nEnglish model\n-------------\n\n\nThe English BERT model - with texts from British Library corpus - was trained with the Hugging Face\nJAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nFinnish model\n-------------\n\n\nThe BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nSwedish model\n-------------\n\n\nThe BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nAcknowledgments\n===============\n\n\nResearch supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as\nTensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ️\n\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
fill-mask
|
transformers
|
# Historic Language Models (HLMs)
## Languages
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
| Language | Training data | Size
| -------- | ------------- | ----
| German | [Europeana](http://www.europeana-newspapers.eu/) | 13-28GB (filtered)
| French | [Europeana](http://www.europeana-newspapers.eu/) | 11-31GB (filtered)
| English | [British Library](https://data.bl.uk/digbks/db14.html) | 24GB (year filtered)
| Finnish | [Europeana](http://www.europeana-newspapers.eu/) | 1.2GB
| Swedish | [Europeana](http://www.europeana-newspapers.eu/) | 1.1GB
## Models
At the moment, the following models are available on the model hub:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased)
| `dbmdz/bert-base-historic-english-cased` | [here](https://huggingface.co/dbmdz/bert-base-historic-english-cased)
| `dbmdz/bert-base-finnish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-finnish-europeana-cased)
| `dbmdz/bert-base-swedish-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-swedish-europeana-cased)
We also released smaller models for the multilingual model:
| Model identifier | Model Hub link
| ----------------------------------------------- | ---------------------------------------------------------------------------
| `dbmdz/bert-tiny-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-tiny-historic-multilingual-cased)
| `dbmdz/bert-mini-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-mini-historic-multilingual-cased)
| `dbmdz/bert-small-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-small-historic-multilingual-cased)
| `dbmdz/bert-medium-historic-multilingual-cased` | [here](https://huggingface.co/dbmdz/bert-medium-historic-multilingual-cased)
**Notice**: We have released language models for Historic German and French trained on noisier data earlier - see
[this repo](https://github.com/stefan-it/europeana-bert) for more information:
| Model identifier | Model Hub link
| --------------------------------------------- | --------------------------------------------------------------------------
| `dbmdz/bert-base-german-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-german-europeana-cased)
| `dbmdz/bert-base-french-europeana-cased` | [here](https://huggingface.co/dbmdz/bert-base-french-europeana-cased)
# Corpora Stats
## German Europeana Corpus
We provide some statistics using different OCR confidence thresholds, in order to shrink down the corpus size
and use less noisy data:
| OCR confidence | Size
| -------------- | ----
| **0.60** | 28GB
| 0.65 | 18GB
| 0.70 | 13GB
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:

## French Europeana Corpus
Like German, we use different OCR confidence thresholds:
| OCR confidence | Size
| -------------- | ----
| 0.60 | 31GB
| 0.65 | 27GB
| **0.70** | 27GB
| 0.75 | 23GB
| 0.80 | 11GB
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:

## British Library Corpus
Metadata is taken from [here](https://data.bl.uk/digbks/DB21.html). Stats incl. year filtering:
| Years | Size
| ----------------- | ----
| ALL | 24GB
| >= 1800 && < 1900 | 24GB
We use the year filtered variant. The following plot shows a tokens per year distribution:

## Finnish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.2GB
The following plot shows a tokens per year distribution:

## Swedish Europeana Corpus
| OCR confidence | Size
| -------------- | ----
| 0.60 | 1.1GB
The following plot shows a tokens per year distribution:

## All Corpora
The following plot shows a tokens per year distribution of the complete training corpus:

# Multilingual Vocab generation
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
| Language | Size
| -------- | ----
| German | 10GB
| French | 10GB
| English | 10GB
| Finnish | 9.5GB
| Swedish | 9.7GB
We then calculate the subword fertility rate and portion of `[UNK]`s over the following NER corpora:
| Language | NER corpora
| -------- | ------------------
| German | CLEF-HIPE, NewsEye
| French | CLEF-HIPE, NewsEye
| English | CLEF-HIPE
| Finnish | NewsEye
| Swedish | NewsEye
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.43 | 0.0004
| French | 1.25 | 0.0001
| English | 1.25 | 0.0
| Finnish | 1.69 | 0.0007
| Swedish | 1.43 | 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
| Language | Subword fertility | Unknown portion
| -------- | ------------------ | ---------------
| German | 1.31 | 0.0004
| French | 1.16 | 0.0001
| English | 1.17 | 0.0
| Finnish | 1.54 | 0.0007
| Swedish | 1.32 | 0.0
# Final pretraining corpora
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
| Language | Size
| -------- | ----
| German | 28GB
| French | 27GB
| English | 24GB
| Finnish | 27GB
| Swedish | 27GB
Total size is 130GB.
# Smaller multilingual models
Inspired by the ["Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"](https://arxiv.org/abs/1908.08962)
paper, we train smaller models (different layers and hidden sizes), and report number of parameters and pre-training costs:
| Model (Layer / Hidden size) | Parameters | Pre-Training time
| --------------------------- | ----------: | ----------------------:
| hmBERT Tiny ( 2/128) | 4.58M | 4.3 sec / 1,000 steps
| hmBERT Mini ( 4/256) | 11.55M | 10.5 sec / 1,000 steps
| hmBERT Small ( 4/512) | 29.52M | 20.7 sec / 1,000 steps
| hmBERT Medium ( 8/512) | 42.13M | 35.0 sec / 1,000 steps
| hmBERT Base (12/768) | 110.62M | 80.0 sec / 1,000 steps
We then perform downstream evaluations on the multilingual [NewsEye](https://zenodo.org/record/4573313#.Ya3oVr-ZNzU) dataset:

# Pretraining
## Multilingual model - hmBERT Base
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
```bash
python3 run_pretraining.py --input_file gs://histolectra/historic-multilingual-tfrecords/*.tfrecord \
--output_dir gs://histolectra/bert-base-historic-multilingual-cased \
--bert_config_file ./config.json \
--max_seq_length=512 \
--max_predictions_per_seq=75 \
--do_train=True \
--train_batch_size=128 \
--num_train_steps=3000000 \
--learning_rate=1e-4 \
--save_checkpoints_steps=100000 \
--keep_checkpoint_max=20 \
--use_tpu=True \
--tpu_name=electra-2 \
--num_tpu_cores=32
```
The following plot shows the pretraining loss curve:

## Smaller multilingual models
We use the same parameters as used for training the base model.
### hmBERT Tiny
The following plot shows the pretraining loss curve for the tiny model:

### hmBERT Mini
The following plot shows the pretraining loss curve for the mini model:

### hmBERT Small
The following plot shows the pretraining loss curve for the small model:

### hmBERT Medium
The following plot shows the pretraining loss curve for the medium model:

## English model
The English BERT model - with texts from British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-historic-english-cased/ \
--tokenizer_name /mnt/datasets/bert-base-historic-english-cased/ \
--train_file /mnt/datasets/bl-corpus/bl_1800-1900_extracted.txt \
--validation_file /mnt/datasets/bl-corpus/english_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 10 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-historic-english-cased-512-noadafactor-10e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Finnish model
The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-finnish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Finnish_0.6.txt \
--validation_file /mnt/datasets/hlms/finnish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-finnish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

## Swedish model
The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
```bash
python3 run_mlm_flax.py --model_type bert \
--config_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--tokenizer_name /mnt/datasets/bert-base-swedish-europeana-cased/ \
--train_file /mnt/datasets/hlms/extracted_content_Swedish_0.6.txt \
--validation_file /mnt/datasets/hlms/swedish_validation.txt \
--max_seq_length 512 \
--per_device_train_batch_size 16 \
--learning_rate 1e-4 \
--num_train_epochs 40 \
--preprocessing_num_workers 96 \
--output_dir /mnt/datasets/bert-base-swedish-europeana-cased-512-dupe1-noadafactor-40e \
--save_steps 2500 \
--eval_steps 2500 \
--warmup_steps 10000 \
--line_by_line \
--pad_to_max_length
```
The following plot shows the pretraining loss curve:

# Acknowledgments
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "multilingual", "license": "mit", "widget": [{"text": "and I cannot conceive the reafon why [MASK] hath"}, {"text": "T\u00e4k\u00e4l\u00e4inen sanomalehdist\u00f6 [MASK] erit - t\u00e4in"}, {"text": "Det vore [MASK] h\u00e4ller n\u00f6dv\u00e4ndigt att be"}, {"text": "Comme, \u00e0 cette \u00e9poque [MASK] \u00e9tait celle de la"}, {"text": "In [MASK] an atmosph\u00e4rischen Nahrungsmitteln"}]}
|
dbmdz/bert-tiny-historic-multilingual-cased
| null |
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"arxiv:1908.08962",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1908.08962"
] |
[
"multilingual"
] |
TAGS
#transformers #pytorch #tf #tensorboard #safetensors #bert #fill-mask #multilingual #arxiv-1908.08962 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
Historic Language Models (HLMs)
===============================
Languages
---------
Our Historic Language Models Zoo contains support for the following languages - incl. their training data source:
Language: German, Training data: Europeana, Size: 13-28GB (filtered)
Language: French, Training data: Europeana, Size: 11-31GB (filtered)
Language: English, Training data: British Library, Size: 24GB (year filtered)
Language: Finnish, Training data: Europeana, Size: 1.2GB
Language: Swedish, Training data: Europeana, Size: 1.1GB
Models
------
At the moment, the following models are available on the model hub:
We also released smaller models for the multilingual model:
Notice: We have released language models for Historic German and French trained on noisier data earlier - see
this repo for more information:
Corpora Stats
=============
German Europeana Corpus
-----------------------
We provide some statistics using different OCR confidence thresholds, in order to shrink down the corpus size
and use less noisy data:
For the final corpus we use an OCR confidence of 0.6 (28GB). The following plot shows a tokens per year distribution:
!German Europeana Corpus Stats
French Europeana Corpus
-----------------------
Like German, we use different OCR confidence thresholds:
For the final corpus we use an OCR confidence of 0.7 (27GB). The following plot shows a tokens per year distribution:
!French Europeana Corpus Stats
British Library Corpus
----------------------
Metadata is taken from here. Stats incl. year filtering:
We use the year filtered variant. The following plot shows a tokens per year distribution:
!British Library Corpus Stats
Finnish Europeana Corpus
------------------------
The following plot shows a tokens per year distribution:
!Finnish Europeana Corpus Stats
Swedish Europeana Corpus
------------------------
The following plot shows a tokens per year distribution:
!Swedish Europeana Corpus Stats
All Corpora
-----------
The following plot shows a tokens per year distribution of the complete training corpus:
!All Corpora Stats
Multilingual Vocab generation
=============================
For the first attempt, we use the first 10GB of each pretraining corpus. We upsample both Finnish and Swedish to ~10GB.
The following table shows the exact sizes used for generating the 32k and 64k subword vocabs:
We then calculate the subword fertility rate and portion of '[UNK]'s over the following NER corpora:
Breakdown of subword fertility rate and unknown portion per language for the 32k vocab:
Language: German, Subword fertility: 1.43, Unknown portion: 0.0004
Language: French, Subword fertility: 1.25, Unknown portion: 0.0001
Language: English, Subword fertility: 1.25, Unknown portion: 0.0
Language: Finnish, Subword fertility: 1.69, Unknown portion: 0.0007
Language: Swedish, Subword fertility: 1.43, Unknown portion: 0.0
Breakdown of subword fertility rate and unknown portion per language for the 64k vocab:
Language: German, Subword fertility: 1.31, Unknown portion: 0.0004
Language: French, Subword fertility: 1.16, Unknown portion: 0.0001
Language: English, Subword fertility: 1.17, Unknown portion: 0.0
Language: Finnish, Subword fertility: 1.54, Unknown portion: 0.0007
Language: Swedish, Subword fertility: 1.32, Unknown portion: 0.0
Final pretraining corpora
=========================
We upsample Swedish and Finnish to ~27GB. The final stats for all pretraining corpora can be seen here:
Total size is 130GB.
Smaller multilingual models
===========================
Inspired by the "Well-Read Students Learn Better: On the Importance of Pre-training Compact Models"
paper, we train smaller models (different layers and hidden sizes), and report number of parameters and pre-training costs:
We then perform downstream evaluations on the multilingual NewsEye dataset:
!NewsEye hmBERT Evaluation
Pretraining
===========
Multilingual model - hmBERT Base
--------------------------------
We train a multilingual BERT model using the 32k vocab with the official BERT implementation
on a v3-32 TPU using the following parameters:
The following plot shows the pretraining loss curve:
!Training loss curve
Smaller multilingual models
---------------------------
We use the same parameters as used for training the base model.
### hmBERT Tiny
The following plot shows the pretraining loss curve for the tiny model:
!Training loss curve
### hmBERT Mini
The following plot shows the pretraining loss curve for the mini model:
!Training loss curve
### hmBERT Small
The following plot shows the pretraining loss curve for the small model:
!Training loss curve
### hmBERT Medium
The following plot shows the pretraining loss curve for the medium model:
!Training loss curve
English model
-------------
The English BERT model - with texts from British Library corpus - was trained with the Hugging Face
JAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Finnish model
-------------
The BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Swedish model
-------------
The BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face
JAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:
The following plot shows the pretraining loss curve:
!Training loss curve
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as
TensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[
"### hmBERT Tiny\n\n\nThe following plot shows the pretraining loss curve for the tiny model:\n\n\n!Training loss curve",
"### hmBERT Mini\n\n\nThe following plot shows the pretraining loss curve for the mini model:\n\n\n!Training loss curve",
"### hmBERT Small\n\n\nThe following plot shows the pretraining loss curve for the small model:\n\n\n!Training loss curve",
"### hmBERT Medium\n\n\nThe following plot shows the pretraining loss curve for the medium model:\n\n\n!Training loss curve\n\n\nEnglish model\n-------------\n\n\nThe English BERT model - with texts from British Library corpus - was trained with the Hugging Face\nJAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nFinnish model\n-------------\n\n\nThe BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nSwedish model\n-------------\n\n\nThe BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nAcknowledgments\n===============\n\n\nResearch supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as\nTensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ️\n\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
[
"TAGS\n#transformers #pytorch #tf #tensorboard #safetensors #bert #fill-mask #multilingual #arxiv-1908.08962 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### hmBERT Tiny\n\n\nThe following plot shows the pretraining loss curve for the tiny model:\n\n\n!Training loss curve",
"### hmBERT Mini\n\n\nThe following plot shows the pretraining loss curve for the mini model:\n\n\n!Training loss curve",
"### hmBERT Small\n\n\nThe following plot shows the pretraining loss curve for the small model:\n\n\n!Training loss curve",
"### hmBERT Medium\n\n\nThe following plot shows the pretraining loss curve for the medium model:\n\n\n!Training loss curve\n\n\nEnglish model\n-------------\n\n\nThe English BERT model - with texts from British Library corpus - was trained with the Hugging Face\nJAX/FLAX implementation for 10 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nFinnish model\n-------------\n\n\nThe BERT model - with texts from Finnish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 1M steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nSwedish model\n-------------\n\n\nThe BERT model - with texts from Swedish part of Europeana - was trained with the Hugging Face\nJAX/FLAX implementation for 40 epochs (approx. 660K steps) on a v3-8 TPU, using the following command:\n\n\nThe following plot shows the pretraining loss curve:\n\n\n!Training loss curve\n\n\nAcknowledgments\n===============\n\n\nResearch supported with Cloud TPUs from Google's TPU Research Cloud (TRC) program, previously known as\nTensorFlow Research Cloud (TFRC). Many thanks for providing access to the TRC ️\n\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
feature-extraction
|
transformers
|
# 🤗 + 📚 dbmdz ConvBERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a German Europeana ConvBERT model 🎉
# German Europeana ConvBERT
We use the open source [Europeana newspapers](http://www.europeana-newspapers.eu/)
that were provided by *The European Library*. The final
training corpus has a size of 51GB and consists of 8,035,986,369 tokens.
Detailed information about the data and pretraining steps can be found in
[this repository](https://github.com/stefan-it/europeana-bert).
## Results
For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert).
## Usage
With Transformers >= 4.3 our German Europeana ConvBERT model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/convbert-base-german-europeana-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
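Building on the loading snippet above, a typical next step for this feature-extraction model is to encode a sentence and pool the last hidden states. A minimal sketch (the example sentence is only illustrative):

```python
import torch

sentence = "Die Zeitung berichtete gestern ausführlich darüber."  # illustrative German text
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

token_embeddings = outputs.last_hidden_state        # (1, seq_len, hidden_size)
sentence_embedding = token_embeddings.mean(dim=1)   # simple mean pooling
print(sentence_embedding.shape)
```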
# Huggingface model hub
All other German Europeana models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our Europeana BERT, ELECTRA and ConvBERT models just open a new discussion
[here](https://github.com/stefan-it/europeana-bert/discussions) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "de", "license": "mit", "tags": ["historic german"]}
|
dbmdz/convbert-base-german-europeana-cased
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"convbert",
"feature-extraction",
"historic german",
"de",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #safetensors #convbert #feature-extraction #historic german #de #license-mit #endpoints_compatible #region-us
|
# + dbmdz ConvBERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a German Europeana ConvBERT model
# German Europeana ConvBERT
We use the open source Europeana newspapers
that were provided by *The European Library*. The final
training corpus has a size of 51GB and consists of 8,035,986,369 tokens.
Detailed information about the data and pretraining steps can be found in
this repository.
## Results
For results on Historic NER, please refer to this repository.
## Usage
With Transformers >= 4.3 our German Europeana ConvBERT model can be loaded like:
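```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/convbert-base-german-europeana-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```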
# Huggingface model hub
All other German Europeana models are available on the Huggingface model hub.
# Contact (Bugs, Feedback, Contribution and more)
For questions about our Europeana BERT, ELECTRA and ConvBERT models just open a new discussion
here
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[
"# + dbmdz ConvBERT model\n\nIn this repository the MDZ Digital Library team (dbmdz) at the Bavarian State\nLibrary open sources a German Europeana ConvBERT model",
"# German Europeana ConvBERT\n\nWe use the open source Europeana newspapers\nthat were provided by *The European Library*. The final\ntraining corpus has a size of 51GB and consists of 8,035,986,369 tokens.\n\nDetailed information about the data and pretraining steps can be found in\nthis repository.",
"## Results\n\nFor results on Historic NER, please refer to this repository.",
"## Usage\n\nWith Transformers >= 4.3 our German Europeana ConvBERT model can be loaded like:",
"# Huggingface model hub\n\nAll other German Europeana models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our Europeana BERT, ELECTRA and ConvBERT models just open a new discussion\nhere",
"# Acknowledgments\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #convbert #feature-extraction #historic german #de #license-mit #endpoints_compatible #region-us \n",
"# + dbmdz ConvBERT model\n\nIn this repository the MDZ Digital Library team (dbmdz) at the Bavarian State\nLibrary open sources a German Europeana ConvBERT model",
"# German Europeana ConvBERT\n\nWe use the open source Europeana newspapers\nthat were provided by *The European Library*. The final\ntraining corpus has a size of 51GB and consists of 8,035,986,369 tokens.\n\nDetailed information about the data and pretraining steps can be found in\nthis repository.",
"## Results\n\nFor results on Historic NER, please refer to this repository.",
"## Usage\n\nWith Transformers >= 4.3 our German Europeana ConvBERT model can be loaded like:",
"# Huggingface model hub\n\nAll other German Europeana models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our Europeana BERT, ELECTRA and ConvBERT models just open a new discussion\nhere",
"# Acknowledgments\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
feature-extraction
|
transformers
|
# 🤗 + 📚 dbmdz Turkish ConvBERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased ConvBERT model for Turkish 🎉
# 🇹🇷 ConvBERTurk
ConvBERTurk is a community-driven cased ConvBERT model for Turkish.
In addition to the BERT and ELECTRA based models, we also trained a ConvBERT model. The ConvBERT architecture is presented
in the ["ConvBERT: Improving BERT with Span-based Dynamic Convolution"](https://arxiv.org/abs/2008.02496) paper.
We follow a different training procedure: instead of using a two-phase approach that pre-trains the model for 90% of the steps
with a sequence length of 128 and for the remaining 10% with a sequence length of 512, we pre-train the model with a sequence length of 512 for 1M steps on a v3-32 TPU.
## Stats
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/),
a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a
special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).
The final training corpus has a size of 35GB and 44,04,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-32!
## Usage
With Transformers >= 4.3 our cased ConvBERT model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/convbert-base-turkish-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
## Results
For results on PoS tagging, NER and Question Answering downstream tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our DBMDZ BERT models in general, just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "tr", "license": "mit"}
|
dbmdz/convbert-base-turkish-cased
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"convbert",
"feature-extraction",
"tr",
"arxiv:2008.02496",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2008.02496"
] |
[
"tr"
] |
TAGS
#transformers #pytorch #tf #safetensors #convbert #feature-extraction #tr #arxiv-2008.02496 #license-mit #endpoints_compatible #region-us
|
# + dbmdz Turkish ConvBERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased ConvBERT model for Turkish
# 🇹🇷 ConvBERTurk
ConvBERTurk is a community-driven cased ConvBERT model for Turkish.
In addition to the BERT and ELECTRA based models, we also trained a ConvBERT model. The ConvBERT architecture is presented
in the "ConvBERT: Improving BERT with Span-based Dynamic Convolution" paper.
We follow a different training procedure: instead of using a two-phase approach that pre-trains the model for 90% of the steps with a sequence length of 128
and for the remaining 10% with a sequence length of 512, we pre-train the model with a sequence length of 512 for 1M steps on a v3-32 TPU.
## Stats
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish OSCAR corpus,
a recent Wikipedia dump, various OPUS corpora and a
special corpus provided by Kemal Oflazer.
The final training corpus has a size of 35GB and 4,404,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-32!
## Usage
With Transformers >= 4.3 our cased ConvBERT model can be loaded like:
## Results
For results on PoS tagging, NER and Question Answering downstream tasks, please refer to
this repository.
# Huggingface model hub
All models are available on the Huggingface model hub.
# Contact (Bugs, Feedback, Contribution and more)
For questions about our DBMDZ BERT models in general, just open an issue
here
# Acknowledgments
Thanks to Kemal Oflazer for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[
"# + dbmdz Turkish ConvBERT model\n\nIn this repository the MDZ Digital Library team (dbmdz) at the Bavarian State\nLibrary open sources a cased ConvBERT model for Turkish",
"# 🇹🇷 ConvBERTurk\n\nConvBERTurk is a community-driven cased ConvBERT model for Turkish.\n\nIn addition to the BERT and ELECTRA based models, we also trained a ConvBERT model. The ConvBERT architecture is presented\nin the \"ConvBERT: Improving BERT with Span-based Dynamic Convolution\" paper.\n\nWe follow a different training procedure: instead of using a two-phase approach, that pre-trains the model for 90% with 128\nsequence length and 10% with 512 sequence length, we pre-train the model with 512 sequence length for 1M steps on a v3-32 TPU.",
"## Stats\n\nThe current version of the model is trained on a filtered and sentence\nsegmented version of the Turkish OSCAR corpus,\na recent Wikipedia dump, various OPUS corpora and a\nspecial corpus provided by Kemal Oflazer.\n\nThe final training corpus has a size of 35GB and 44,04,976,662 tokens.\n\nThanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model\non a TPU v3-32!",
"## Usage\n\nWith Transformers >= 4.3 our cased ConvBERT model can be loaded like:",
"## Results\n\nFor results on PoS tagging, NER and Question Answering downstream tasks, please refer to\nthis repository.",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our DBMDZ BERT models in general, just open an issue\nhere",
"# Acknowledgments\n\nThanks to Kemal Oflazer for providing us\nadditional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing\nus the Turkish NER dataset for evaluation.\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #convbert #feature-extraction #tr #arxiv-2008.02496 #license-mit #endpoints_compatible #region-us \n",
"# + dbmdz Turkish ConvBERT model\n\nIn this repository the MDZ Digital Library team (dbmdz) at the Bavarian State\nLibrary open sources a cased ConvBERT model for Turkish",
"# 🇹🇷 ConvBERTurk\n\nConvBERTurk is a community-driven cased ConvBERT model for Turkish.\n\nIn addition to the BERT and ELECTRA based models, we also trained a ConvBERT model. The ConvBERT architecture is presented\nin the \"ConvBERT: Improving BERT with Span-based Dynamic Convolution\" paper.\n\nWe follow a different training procedure: instead of using a two-phase approach, that pre-trains the model for 90% with 128\nsequence length and 10% with 512 sequence length, we pre-train the model with 512 sequence length for 1M steps on a v3-32 TPU.",
"## Stats\n\nThe current version of the model is trained on a filtered and sentence\nsegmented version of the Turkish OSCAR corpus,\na recent Wikipedia dump, various OPUS corpora and a\nspecial corpus provided by Kemal Oflazer.\n\nThe final training corpus has a size of 35GB and 44,04,976,662 tokens.\n\nThanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model\non a TPU v3-32!",
"## Usage\n\nWith Transformers >= 4.3 our cased ConvBERT model can be loaded like:",
"## Results\n\nFor results on PoS tagging, NER and Question Answering downstream tasks, please refer to\nthis repository.",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our DBMDZ BERT models in general, just open an issue\nhere",
"# Acknowledgments\n\nThanks to Kemal Oflazer for providing us\nadditional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing\nus the Turkish NER dataset for evaluation.\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
fill-mask
|
transformers
|
# 🇹🇷 Turkish ConvBERT model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png">
</p>
[](https://zenodo.org/badge/latestdoi/237817454)
We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk.
Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann).
# Stats
We've trained a (cased) ConvBERT model on the recently released Turkish part of the
[multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ConvBERT
In addition to the ELEC**TR**A base model, we also trained a ConvBERT model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz)
using their model name.
Example usage with 🤗/Transformers:
```python
tokenizer = AutoTokenizer.from_pretrained("dbmdz/convbert-base-turkish-mc4-cased")
model = AutoModel.from_pretrained("dbmdz/convbert-base-turkish-mc4-cased")
```
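As a hedged sketch that is not part of the original card, the checkpoint can also be used for masked-token prediction via the `fill-mask` pipeline; the example sentence and the use of the standard BERT-style `[MASK]` token are illustrative assumptions.

```python
from transformers import pipeline

# Masked-token prediction with the cased mC4 ConvBERT checkpoint.
fill_mask = pipeline(
    "fill-mask",
    model="dbmdz/convbert-base-turkish-mc4-cased",
    tokenizer="dbmdz/convbert-base-turkish-mc4-cased",
)

# Prints the top candidate tokens for the masked position.
print(fill_mask("Türkiye'nin başkenti [MASK] şehridir."))
```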
# Citation
You can use the following BibTeX entry for citation:
```bibtex
@software{stefan_schweter_2020_3770924,
author = {Stefan Schweter},
title = {BERTurk - BERT models for Turkish},
month = apr,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.3770924},
url = {https://doi.org/10.5281/zenodo.3770924}
}
```
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
|
{"language": "tr", "license": "mit", "datasets": ["allenai/c4"]}
|
dbmdz/convbert-base-turkish-mc4-cased
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"convbert",
"fill-mask",
"tr",
"dataset:allenai/c4",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tf #safetensors #convbert #fill-mask #tr #dataset-allenai/c4 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# 🇹🇷 Turkish ConvBERT model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="URL
</p>
 ConvBERT model on the recently released Turkish part of the
multilingual C4 (mC4) corpus from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ConvBERT
In addition to the ELECTRA base model, we also trained a ConvBERT model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the DBMDZ Hugging Face model hub page
using their model name.
Example usage with /Transformers:
You can use the following BibTeX entry for citation:
# Acknowledgments
Thanks to Kemal Oflazer for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank Merve Noyan for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
|
[
"# 🇹🇷 Turkish ConvBERT model\n\n<p align=\"center\">\n <img alt=\"Logo provided by Merve Noyan\" title=\"Awesome logo from Merve Noyan\" src=\"URL\n</p>\n\n ConvBERT model on the recently released Turkish part of the\nmultiligual C4 (mC4) corpus from the AI2 team.\n\nAfter filtering documents with a broken encoding, the training corpus has a size of 242GB resulting\nin 31,240,963,926 tokens.\n\nWe used the original 32k vocab (instead of creating a new one).",
"# mC4 ConvBERT\n\nIn addition to the ELECTRA base model, we also trained an ConvBERT model on the Turkish part of the mC4 corpus. We use a\nsequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.",
"# Model usage\n\nAll trained models can be used from the DBMDZ Hugging Face model hub page\nusing their model name.\n\nExample usage with /Transformers:\n\n\n\nYou can use the following BibTeX entry for citation:",
"# Acknowledgments\n\nThanks to Kemal Oflazer for providing us\nadditional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing\nus the Turkish NER dataset for evaluation.\n\nWe would like to thank Merve Noyan for the\nawesome logo!\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #convbert #fill-mask #tr #dataset-allenai/c4 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# 🇹🇷 Turkish ConvBERT model\n\n<p align=\"center\">\n <img alt=\"Logo provided by Merve Noyan\" title=\"Awesome logo from Merve Noyan\" src=\"URL\n</p>\n\n ConvBERT model on the recently released Turkish part of the\nmultiligual C4 (mC4) corpus from the AI2 team.\n\nAfter filtering documents with a broken encoding, the training corpus has a size of 242GB resulting\nin 31,240,963,926 tokens.\n\nWe used the original 32k vocab (instead of creating a new one).",
"# mC4 ConvBERT\n\nIn addition to the ELECTRA base model, we also trained an ConvBERT model on the Turkish part of the mC4 corpus. We use a\nsequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.",
"# Model usage\n\nAll trained models can be used from the DBMDZ Hugging Face model hub page\nusing their model name.\n\nExample usage with /Transformers:\n\n\n\nYou can use the following BibTeX entry for citation:",
"# Acknowledgments\n\nThanks to Kemal Oflazer for providing us\nadditional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing\nus the Turkish NER dataset for evaluation.\n\nWe would like to thank Merve Noyan for the\nawesome logo!\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️"
] |
fill-mask
|
transformers
|
# 🇹🇷 Turkish ConvBERT model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png">
</p>
[](https://zenodo.org/badge/latestdoi/237817454)
We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk.
Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann).
# Stats
We've trained an (uncased) ConvBERT model on the recently released Turkish part of the
[multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ConvBERT
In addition to the ELEC**TR**A base model, we also trained a ConvBERT model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz)
using their model name.
Example usage with 🤗/Transformers:
```python
tokenizer = AutoTokenizer.from_pretrained("dbmdz/convbert-base-turkish-mc4-uncased")
model = AutoModel.from_pretrained("dbmdz/convbert-base-turkish-mc4-uncased")
```
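A minimal sketch (an assumption, not part of the original card) of scoring masked-token candidates directly with the masked-LM head; note that this checkpoint is uncased, so the tokenizer lower-cases its input.

```python
import torch
from transformers import AutoModelForMaskedLM

# Reuse the tokenizer loaded above and load the checkpoint with its
# masked-LM head to score candidates for a masked position.
mlm_model = AutoModelForMaskedLM.from_pretrained("dbmdz/convbert-base-turkish-mc4-uncased")

# Arbitrary example sentence with a single [MASK] token.
inputs = tokenizer("istanbul türkiye'nin en kalabalık [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = mlm_model(**inputs).logits

# Rank the vocabulary at the masked position and show the top 5 candidates.
mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```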
# Citation
You can use the following BibTeX entry for citation:
```bibtex
@software{stefan_schweter_2020_3770924,
author = {Stefan Schweter},
title = {BERTurk - BERT models for Turkish},
month = apr,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.3770924},
url = {https://doi.org/10.5281/zenodo.3770924}
}
```
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
|
{"language": "tr", "license": "mit", "datasets": ["allenai/c4"]}
|
dbmdz/convbert-base-turkish-mc4-uncased
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"convbert",
"fill-mask",
"tr",
"dataset:allenai/c4",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tf #safetensors #convbert #fill-mask #tr #dataset-allenai/c4 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# 🇹🇷 Turkish ConvBERT model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="URL
</p>
 ConvBERT model on the recently released Turkish part of the
multilingual C4 (mC4) corpus from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ConvBERT
In addition to the ELECTRA base model, we also trained a ConvBERT model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the DBMDZ Hugging Face model hub page
using their model name.
Example usage with /Transformers:
You can use the following BibTeX entry for citation:
# Acknowledgments
Thanks to Kemal Oflazer for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank Merve Noyan for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
|
[
"# 🇹🇷 Turkish ConvBERT model\n\n<p align=\"center\">\n <img alt=\"Logo provided by Merve Noyan\" title=\"Awesome logo from Merve Noyan\" src=\"URL\n</p>\n\n ConvBERT model on the recently released Turkish part of the\nmultiligual C4 (mC4) corpus from the AI2 team.\n\nAfter filtering documents with a broken encoding, the training corpus has a size of 242GB resulting\nin 31,240,963,926 tokens.\n\nWe used the original 32k vocab (instead of creating a new one).",
"# mC4 ConvBERT\n\nIn addition to the ELECTRA base model, we also trained an ConvBERT model on the Turkish part of the mC4 corpus. We use a\nsequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.",
"# Model usage\n\nAll trained models can be used from the DBMDZ Hugging Face model hub page\nusing their model name.\n\nExample usage with /Transformers:\n\n\n\nYou can use the following BibTeX entry for citation:",
"# Acknowledgments\n\nThanks to Kemal Oflazer for providing us\nadditional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing\nus the Turkish NER dataset for evaluation.\n\nWe would like to thank Merve Noyan for the\nawesome logo!\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #convbert #fill-mask #tr #dataset-allenai/c4 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# 🇹🇷 Turkish ConvBERT model\n\n<p align=\"center\">\n <img alt=\"Logo provided by Merve Noyan\" title=\"Awesome logo from Merve Noyan\" src=\"URL\n</p>\n\n ConvBERT model on the recently released Turkish part of the\nmultiligual C4 (mC4) corpus from the AI2 team.\n\nAfter filtering documents with a broken encoding, the training corpus has a size of 242GB resulting\nin 31,240,963,926 tokens.\n\nWe used the original 32k vocab (instead of creating a new one).",
"# mC4 ConvBERT\n\nIn addition to the ELECTRA base model, we also trained an ConvBERT model on the Turkish part of the mC4 corpus. We use a\nsequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.",
"# Model usage\n\nAll trained models can be used from the DBMDZ Hugging Face model hub page\nusing their model name.\n\nExample usage with /Transformers:\n\n\n\nYou can use the following BibTeX entry for citation:",
"# Acknowledgments\n\nThanks to Kemal Oflazer for providing us\nadditional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing\nus the Turkish NER dataset for evaluation.\n\nWe would like to thank Merve Noyan for the\nawesome logo!\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️"
] |
null |
transformers
|
# 🤗 + 📚 dbmdz DistilBERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a German Europeana DistilBERT model 🎉
# German Europeana DistilBERT
We use the open source [Europeana newspapers](http://www.europeana-newspapers.eu/)
that were provided by *The European Library*. The final
training corpus has a size of 51GB and consists of 8,035,986,369 tokens.
Detailed information about the data and pretraining steps can be found in
[this repository](https://github.com/stefan-it/europeana-bert).
## Results
For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert).
## Usage
With Transformers >= 4.3 our German Europeana DistilBERT model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/distilbert-base-german-europeana-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
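As an illustrative sketch that is not part of the original card, the loaded model can produce a single sentence embedding by mean-pooling its token vectors; the historic German sentence is an arbitrary assumption.

```python
import torch

# Mean-pool the token embeddings of an arbitrary (historic) German sentence
# into one fixed-size vector, reusing tokenizer/model from above.
sentence = "Die Königliche Hof- und Staatsbibliothek zu München."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state

mask = inputs.attention_mask.unsqueeze(-1).float()
sentence_embedding = (hidden_states * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # (1, hidden_size)
```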
# Huggingface model hub
All other German Europeana models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our Europeana BERT, ELECTRA and ConvBERT models just open a new discussion
[here](https://github.com/stefan-it/europeana-bert/discussions) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "de", "license": "mit", "tags": ["historic german"]}
|
dbmdz/distilbert-base-german-europeana-cased
| null |
[
"transformers",
"pytorch",
"tf",
"distilbert",
"historic german",
"de",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #distilbert #historic german #de #license-mit #endpoints_compatible #region-us
|
# + dbmdz DistilBERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a German Europeana DistilBERT model
# German Europeana DistilBERT
We use the open source Europeana newspapers
that were provided by *The European Library*. The final
training corpus has a size of 51GB and consists of 8,035,986,369 tokens.
Detailed information about the data and pretraining steps can be found in
this repository.
## Results
For results on Historic NER, please refer to this repository.
## Usage
With Transformers >= 4.3 our German Europeana DistilBERT model can be loaded like:
# Huggingface model hub
All other German Europeana models are available on the Huggingface model hub.
# Contact (Bugs, Feedback, Contribution and more)
For questions about our Europeana BERT, ELECTRA and ConvBERT models just open a new discussion
here
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[
"# + dbmdz DistilBERT model\n\nIn this repository the MDZ Digital Library team (dbmdz) at the Bavarian State\nLibrary open sources a German Europeana DistilBERT model",
"# German Europeana DistilBERT\n\nWe use the open source Europeana newspapers\nthat were provided by *The European Library*. The final\ntraining corpus has a size of 51GB and consists of 8,035,986,369 tokens.\n\nDetailed information about the data and pretraining steps can be found in\nthis repository.",
"## Results\n\nFor results on Historic NER, please refer to this repository.",
"## Usage\n\nWith Transformers >= 4.3 our German Europeana DistilBERT model can be loaded like:",
"# Huggingface model hub\n\nAll other German Europeana models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our Europeana BERT, ELECTRA and ConvBERT models just open a new discussion\nhere",
"# Acknowledgments\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
[
"TAGS\n#transformers #pytorch #tf #distilbert #historic german #de #license-mit #endpoints_compatible #region-us \n",
"# + dbmdz DistilBERT model\n\nIn this repository the MDZ Digital Library team (dbmdz) at the Bavarian State\nLibrary open sources a German Europeana DistilBERT model",
"# German Europeana DistilBERT\n\nWe use the open source Europeana newspapers\nthat were provided by *The European Library*. The final\ntraining corpus has a size of 51GB and consists of 8,035,986,369 tokens.\n\nDetailed information about the data and pretraining steps can be found in\nthis repository.",
"## Results\n\nFor results on Historic NER, please refer to this repository.",
"## Usage\n\nWith Transformers >= 4.3 our German Europeana DistilBERT model can be loaded like:",
"# Huggingface model hub\n\nAll other German Europeana models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our Europeana BERT, ELECTRA and ConvBERT models just open a new discussion\nhere",
"# Acknowledgments\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download both cased and uncased models from their S3 storage"
] |
null |
transformers
|
# 🤗 + 📚 dbmdz Distilled Turkish BERT model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a (cased) distilled model for Turkish 🎉
# 🇹🇷 DistilBERTurk
DistilBERTurk is a community-driven cased distilled BERT model for Turkish.
DistilBERTurk was trained on 7GB of the original training data that was used
for training [BERTurk](https://github.com/stefan-it/turkish-bert/tree/master#stats),
using the cased version of BERTurk as teacher model.
*DistilBERTurk* was trained with the official Hugging Face implementation from
[here](https://github.com/huggingface/transformers/tree/master/examples/distillation)
for 5 days on 4 RTX 2080 TI.
More details about distillation can be found in the
["DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"](https://arxiv.org/abs/1910.01108)
paper by Sanh et al. (2019).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue in the [BERTurk](https://github.com/stefan-it/turkish-bert) repository!
| Model | Downloads
| --------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/distilbert-base-turkish-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/distilbert-base-turkish-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/distilbert-base-turkish-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/distilbert-base-turkish-cased/vocab.txt)
## Usage
With Transformers >= 2.3 our DistilBERTurk model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/distilbert-base-turkish-cased")
model = AutoModel.from_pretrained("dbmdz/distilbert-base-turkish-cased")
```
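A minimal sketch (not from the original card) of attaching a token-classification head as a starting point for the PoS/NER fine-tuning discussed in the Results section below; the label count is a hypothetical placeholder.

```python
from transformers import AutoModelForTokenClassification

# Attach a randomly initialised token-classification head on top of
# DistilBERTurk; num_labels is a hypothetical placeholder and should match
# your own PoS/NER tag set before fine-tuning.
token_model = AutoModelForTokenClassification.from_pretrained(
    "dbmdz/distilbert-base-turkish-cased",
    num_labels=9,
)
print(token_model.config.num_labels)
```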
## Results
For results on PoS tagging or NER tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert).
For PoS tagging, DistilBERTurk outperforms the 24-layer XLM-RoBERTa model.
The overall performance difference between DistilBERTurk and the original
(teacher) BERTurk model is ~1.18%.
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "tr", "license": "mit"}
|
dbmdz/distilbert-base-turkish-cased
| null |
[
"transformers",
"pytorch",
"tf",
"distilbert",
"tr",
"arxiv:1910.01108",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1910.01108"
] |
[
"tr"
] |
TAGS
#transformers #pytorch #tf #distilbert #tr #arxiv-1910.01108 #license-mit #endpoints_compatible #has_space #region-us
|
+ dbmdz Distilled Turkish BERT model
====================================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a (cased) distilled model for Turkish
🇹🇷 DistilBERTurk
================
DistilBERTurk is a community-driven cased distilled BERT model for Turkish.
DistilBERTurk was trained on 7GB of the original training data that was used
for training BERTurk,
using the cased version of BERTurk as teacher model.
*DistilBERTurk* was trained with the official Hugging Face implementation from
here
for 5 days on 4 RTX 2080 TI.
More details about distillation can be found in the
"DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter"
paper by Sanh et al. (2019).
Model weights
-------------
Currently only PyTorch-Transformers
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue in the BERTurk repository!
Usage
-----
With Transformers >= 2.3 our DistilBERTurk model can be loaded like:
Results
-------
For results on PoS tagging or NER tasks, please refer to
this repository.
For PoS tagging, DistilBERTurk outperforms the 24-layer XLM-RoBERTa model.
The overall performance difference between DistilBERTurk and the original
(teacher) BERTurk model is ~1.18%.
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our BERT models just open an issue
here
Acknowledgments
===============
Thanks to Kemal Oflazer for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #distilbert #tr #arxiv-1910.01108 #license-mit #endpoints_compatible #has_space #region-us \n"
] |
null |
transformers
|
# 🤗 + 📚 dbmdz ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources French Europeana ELECTRA models 🎉
# French Europeana ELECTRA
We extracted all French texts using the `language` metadata attribute from the Europeana corpus.
The resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens.
Based on the metadata information, texts from the 18th - 20th century are mainly included in the
training corpus.
Detailed information about the data and pretraining steps can be found in
[this repository](https://github.com/stefan-it/europeana-bert).
## Model weights
ELECTRA model weights for PyTorch and TensorFlow are available.
* French Europeana ELECTRA (discriminator): `dbmdz/electra-base-french-europeana-cased-discriminator` - [model hub page](https://huggingface.co/dbmdz/electra-base-french-europeana-cased-discriminator/tree/main)
* French Europeana ELECTRA (generator): `dbmdz/electra-base-french-europeana-cased-generator` - [model hub page](https://huggingface.co/dbmdz/electra-base-french-europeana-cased-generator/tree/main)
## Results
For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert).
## Usage
With Transformers >= 2.3 our French Europeana ELECTRA model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-french-europeana-cased-discriminator")
model = AutoModel.from_pretrained("dbmdz/electra-base-french-europeana-cased-discriminator")
```
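As an additional, hedged sketch that goes beyond the original card, the discriminator checkpoint can also be loaded with its ELECTRA pre-training head to score tokens as original vs. replaced; the French sentence is arbitrary.

```python
import torch
from transformers import ElectraForPreTraining

# Load the same checkpoint with the replaced-token-detection head used during
# ELECTRA pre-training (the tokenizer comes from the snippet above).
discriminator = ElectraForPreTraining.from_pretrained(
    "dbmdz/electra-base-french-europeana-cased-discriminator"
)

inputs = tokenizer("La bibliothèque est ouverte au public.", return_tensors="pt")
with torch.no_grad():
    logits = discriminator(**inputs).logits  # one score per token

# Probabilities above 0.5 mark tokens the discriminator judges as "replaced".
print(torch.sigmoid(logits))
```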
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download our models from their S3 storage 🤗
|
{"language": "fr", "license": "mit", "tags": ["historic french"]}
|
dbmdz/electra-base-french-europeana-cased-discriminator
| null |
[
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"historic french",
"fr",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #pytorch #tf #electra #pretraining #historic french #fr #license-mit #endpoints_compatible #region-us
|
# + dbmdz ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources French Europeana ELECTRA models
# French Europeana ELECTRA
We extracted all French texts using the 'language' metadata attribute from the Europeana corpus.
The resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens.
Based on the metadata information, texts from the 18th - 20th century are mainly included in the
training corpus.
Detailed information about the data and pretraining steps can be found in
this repository.
## Model weights
ELECTRA model weights for PyTorch and TensorFlow are available.
* French Europeana ELECTRA (discriminator): 'dbmdz/electra-base-french-europeana-cased-discriminator' - model hub page
* French Europeana ELECTRA (generator): 'dbmdz/electra-base-french-europeana-cased-generator' - model hub page
## Results
For results on Historic NER, please refer to this repository.
## Usage
With Transformers >= 2.3 our French Europeana ELECTRA model can be loaded like:
# Huggingface model hub
All models are available on the Huggingface model hub.
# Contact (Bugs, Feedback, Contribution and more)
For questions about our ELECTRA models just open an issue
here
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download our models from their S3 storage
|
[
"# + dbmdz ELECTRA models\n\nIn this repository the MDZ Digital Library team (dbmdz) at the Bavarian State\nLibrary open sources French Europeana ELECTRA models",
"# French Europeana ELECTRA\n\nWe extracted all French texts using the 'language' metadata attribute from the Europeana corpus.\n\nThe resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens.\n\nBased on the metadata information, texts from the 18th - 20th century are mainly included in the\ntraining corpus.\n\nDetailed information about the data and pretraining steps can be found in\nthis repository.",
"## Model weights\n\nELECTRA model weights for PyTorch and TensorFlow are available.\n\n* French Europeana ELECTRA (discriminator): 'dbmdz/electra-base-french-europeana-cased-discriminator' - model hub page\n* French Europeana ELECTRA (generator): 'dbmdz/electra-base-french-europeana-cased-generator' - model hub page",
"## Results\n\nFor results on Historic NER, please refer to this repository.",
"## Usage\n\nWith Transformers >= 2.3 our French Europeana ELECTRA model can be loaded like:",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our ELECTRA models just open an issue\nhere",
"# Acknowledgments\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download our models from their S3 storage"
] |
[
"TAGS\n#transformers #pytorch #tf #electra #pretraining #historic french #fr #license-mit #endpoints_compatible #region-us \n",
"# + dbmdz ELECTRA models\n\nIn this repository the MDZ Digital Library team (dbmdz) at the Bavarian State\nLibrary open sources French Europeana ELECTRA models",
"# French Europeana ELECTRA\n\nWe extracted all French texts using the 'language' metadata attribute from the Europeana corpus.\n\nThe resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens.\n\nBased on the metadata information, texts from the 18th - 20th century are mainly included in the\ntraining corpus.\n\nDetailed information about the data and pretraining steps can be found in\nthis repository.",
"## Model weights\n\nELECTRA model weights for PyTorch and TensorFlow are available.\n\n* French Europeana ELECTRA (discriminator): 'dbmdz/electra-base-french-europeana-cased-discriminator' - model hub page\n* French Europeana ELECTRA (generator): 'dbmdz/electra-base-french-europeana-cased-generator' - model hub page",
"## Results\n\nFor results on Historic NER, please refer to this repository.",
"## Usage\n\nWith Transformers >= 2.3 our French Europeana ELECTRA model can be loaded like:",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our ELECTRA models just open an issue\nhere",
"# Acknowledgments\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download our models from their S3 storage"
] |
fill-mask
|
transformers
|
# 🤗 + 📚 dbmdz ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources French Europeana ELECTRA models 🎉
# French Europeana ELECTRA
We extracted all French texts using the `language` metadata attribute from the Europeana corpus.
The resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens.
Based on the metadata information, texts from the 18th - 20th century are mainly included in the
training corpus.
Detailed information about the data and pretraining steps can be found in
[this repository](https://github.com/stefan-it/europeana-bert).
## Model weights
ELECTRA model weights for PyTorch and TensorFlow are available.
* French Europeana ELECTRA (discriminator): `dbmdz/electra-base-french-europeana-cased-discriminator` - [model hub page](https://huggingface.co/dbmdz/electra-base-french-europeana-cased-discriminator/tree/main)
* French Europeana ELECTRA (generator): `dbmdz/electra-base-french-europeana-cased-generator` - [model hub page](https://huggingface.co/dbmdz/electra-base-french-europeana-cased-generator/tree/main)
## Results
For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert).
## Usage
With Transformers >= 2.3 our French Europeana ELECTRA model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-french-europeana-cased-discriminator")
model = AutoModel.from_pretrained("dbmdz/electra-base-french-europeana-cased-discriminator")
```
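Since this repository hosts the generator checkpoint, a hedged sketch (not from the original card) of masked-token prediction with it may be useful; the example sentence and the `[MASK]` convention are assumptions.

```python
from transformers import pipeline

# Masked-token prediction with the generator checkpoint hosted here.
fill_mask = pipeline(
    "fill-mask",
    model="dbmdz/electra-base-french-europeana-cased-generator",
    tokenizer="dbmdz/electra-base-french-europeana-cased-generator",
)
print(fill_mask("Paris est la [MASK] de la France."))
```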
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download our models from their S3 storage 🤗
|
{"language": "fr", "license": "mit", "tags": ["historic french"]}
|
dbmdz/electra-base-french-europeana-cased-generator
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"electra",
"fill-mask",
"historic french",
"fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #pytorch #tf #safetensors #electra #fill-mask #historic french #fr #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# + dbmdz ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources French Europeana ELECTRA models
# French Europeana ELECTRA
We extracted all French texts using the 'language' metadata attribute from the Europeana corpus.
The resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens.
Based on the metadata information, texts from the 18th - 20th century are mainly included in the
training corpus.
Detailed information about the data and pretraining steps can be found in
this repository.
## Model weights
ELECTRA model weights for PyTorch and TensorFlow are available.
* French Europeana ELECTRA (discriminator): 'dbmdz/electra-base-french-europeana-cased-discriminator' - model hub page
* French Europeana ELECTRA (generator): 'dbmdz/electra-base-french-europeana-cased-generator' - model hub page
## Results
For results on Historic NER, please refer to this repository.
## Usage
With Transformers >= 2.3 our French Europeana ELECTRA model can be loaded like:
# Huggingface model hub
All models are available on the Huggingface model hub.
# Contact (Bugs, Feedback, Contribution and more)
For questions about our ELECTRA models just open an issue
here
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download our models from their S3 storage
|
[
"# + dbmdz ELECTRA models\n\nIn this repository the MDZ Digital Library team (dbmdz) at the Bavarian State\nLibrary open sources French Europeana ELECTRA models",
"# French Europeana ELECTRA\n\nWe extracted all French texts using the 'language' metadata attribute from the Europeana corpus.\n\nThe resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens.\n\nBased on the metadata information, texts from the 18th - 20th century are mainly included in the\ntraining corpus.\n\nDetailed information about the data and pretraining steps can be found in\nthis repository.",
"## Model weights\n\nELECTRA model weights for PyTorch and TensorFlow are available.\n\n* French Europeana ELECTRA (discriminator): 'dbmdz/electra-base-french-europeana-cased-discriminator' - model hub page\n* French Europeana ELECTRA (generator): 'dbmdz/electra-base-french-europeana-cased-generator' - model hub page",
"## Results\n\nFor results on Historic NER, please refer to this repository.",
"## Usage\n\nWith Transformers >= 2.3 our French Europeana ELECTRA model can be loaded like:",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our ELECTRA models just open an issue\nhere",
"# Acknowledgments\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download our models from their S3 storage"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #electra #fill-mask #historic french #fr #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# + dbmdz ELECTRA models\n\nIn this repository the MDZ Digital Library team (dbmdz) at the Bavarian State\nLibrary open sources French Europeana ELECTRA models",
"# French Europeana ELECTRA\n\nWe extracted all French texts using the 'language' metadata attribute from the Europeana corpus.\n\nThe resulting corpus has a size of 63GB and consists of 11,052,528,456 tokens.\n\nBased on the metadata information, texts from the 18th - 20th century are mainly included in the\ntraining corpus.\n\nDetailed information about the data and pretraining steps can be found in\nthis repository.",
"## Model weights\n\nELECTRA model weights for PyTorch and TensorFlow are available.\n\n* French Europeana ELECTRA (discriminator): 'dbmdz/electra-base-french-europeana-cased-discriminator' - model hub page\n* French Europeana ELECTRA (generator): 'dbmdz/electra-base-french-europeana-cased-generator' - model hub page",
"## Results\n\nFor results on Historic NER, please refer to this repository.",
"## Usage\n\nWith Transformers >= 2.3 our French Europeana ELECTRA model can be loaded like:",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub.",
"# Contact (Bugs, Feedback, Contribution and more)\n\nFor questions about our ELECTRA models just open an issue\nhere",
"# Acknowledgments\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️\n\nThanks to the generous support from the Hugging Face team,\nit is possible to download our models from their S3 storage"
] |
null |
transformers
|
# 🤗 + 📚 dbmdz BERT and ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models 🎉
# Italian BERT
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in `config.json`. However, the model is working and all
evaluations were done under those circumstances.
See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We largely follow the ELECTRA training procedure used for
[BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/italian-bertelectra).
## Usage
With Transformers >= 2.3 our Italian BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the (recommended) Italian XXL BERT models, just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the Italian XXL ELECTRA model (discriminator), just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
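As a hedged sketch that goes beyond the original card, the discriminator checkpoint can also be queried with its ELECTRA pre-training head to flag tokens it considers replaced; the Italian sentence is arbitrary.

```python
import torch
from transformers import ElectraForPreTraining

# Reuse model_name and tokenizer from the snippet above and load the
# replaced-token-detection head from the same checkpoint.
discriminator = ElectraForPreTraining.from_pretrained(model_name)

inputs = tokenizer("La biblioteca apre domani mattina.", return_tensors="pt")
with torch.no_grad():
    scores = discriminator(**inputs).logits[0]

tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0].tolist())
flagged = [tok for tok, score in zip(tokens, scores) if score > 0]
print(flagged)  # tokens the discriminator flags as replaced (ideally none)
```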
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT/ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "it", "license": "mit", "datasets": ["wikipedia"]}
|
dbmdz/electra-base-italian-xxl-cased-discriminator
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"it",
"dataset:wikipedia",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"it"
] |
TAGS
#transformers #pytorch #electra #pretraining #it #dataset-wikipedia #license-mit #endpoints_compatible #has_space #region-us
|
+ dbmdz BERT and ELECTRA models
===============================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models
Italian BERT
============
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the OPUS corpora collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the OSCAR corpus.
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in 'URL'. However, the model is working and all
evaluations were done under those circumstances.
See this issue for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We largely follow the ELECTRA training procedure used for
BERTurk.
Model weights
-------------
Currently only PyTorch-Transformers
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
Results
-------
For results on downstream tasks like NER or PoS tagging, please refer to
this repository.
Usage
-----
With Transformers >= 2.3 our Italian BERT models can be loaded like:
To load the (recommended) Italian XXL BERT models, just use:
To load the Italian XXL ELECTRA model (discriminator), just use:
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our BERT/ELECTRA models just open an issue
here
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #electra #pretraining #it #dataset-wikipedia #license-mit #endpoints_compatible #has_space #region-us \n"
] |
fill-mask
|
transformers
|
# 🤗 + 📚 dbmdz BERT and ELECTRA models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models 🎉
# Italian BERT
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in `config.json`. However, the model is working and all
evaluations were done under those circumstances.
See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We largely follow the ELECTRA training procedure used for
[BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).
## Model weights
Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
| Model | Downloads
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)
## Results
For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/italian-bertelectra).
## Usage
With Transformers >= 2.3 our Italian BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the (recommended) Italian XXL BERT models, just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
To load the Italian XXL ELECTRA model (discriminator), just use:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
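Since this repository hosts the generator checkpoint, a masked-language-modelling sketch may also be helpful. This is only an illustration: it assumes the `fill-mask` pipeline accepts ELECTRA generator checkpoints, and the example sentence is made up.

```python
from transformers import pipeline

model_name = "dbmdz/electra-base-italian-xxl-cased-generator"
fill_mask = pipeline("fill-mask", model=model_name, tokenizer=model_name)

# [MASK] is the mask token of the underlying Italian wordpiece vocabulary
print(fill_mask("Roma è la [MASK] d'Italia."))
```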
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our BERT/ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "it", "license": "mit", "datasets": ["wikipedia"]}
|
dbmdz/electra-base-italian-xxl-cased-generator
| null |
[
"transformers",
"pytorch",
"safetensors",
"electra",
"fill-mask",
"it",
"dataset:wikipedia",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"it"
] |
TAGS
#transformers #pytorch #safetensors #electra #fill-mask #it #dataset-wikipedia #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
+ dbmdz BERT and ELECTRA models
===============================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources Italian BERT and ELECTRA models
Italian BERT
============
The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the OPUS corpora collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.
For sentence splitting, we use NLTK (faster compared to spacy).
Our cased and uncased models are trained with an initial sequence length of 512
subwords for ~2-3M steps.
For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the OSCAR corpus.
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch of the "real" vocab size of 31102, compared to the
vocab size specified in 'URL'. However, the model is working and all
evaluations were done under those circumstances.
See this issue for more information.
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch
size of 128. We pretty much follow the ELECTRA training procedure as used for
BERTurk.
Model weights
-------------
Currently only PyTorch-Transformers
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!
Results
-------
For results on downstream tasks like NER or PoS tagging, please refer to
this repository.
Usage
-----
With Transformers >= 2.3 our Italian BERT models can be loaded like:
To load the (recommended) Italian XXL BERT models, just use:
To load the Italian XXL ELECTRA model (discriminator), just use:
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our BERT/ELECTRA models just open an issue
here
Acknowledgments
===============
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #electra #fill-mask #it #dataset-wikipedia #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null |
transformers
|
# 🤗 + 📚 dbmdz Turkish ELECTRA model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased ELECTRA base model for Turkish 🎉
# Turkish ELECTRA model
We release a base ELEC**TR**A model for Turkish, that was trained on the same data as *BERTurk*.
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/),
a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a
special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).
The final training corpus has a size of 35GB and 44,04,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-8 for 1M steps.
## Model weights
[Transformers](https://github.com/huggingface/transformers)
compatible weights for both PyTorch and TensorFlow are available.
| Model | Downloads
| ------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/electra-base-turkish-cased-discriminator` | [`config.json`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-turkish-cased-discriminator/vocab.txt)
## Usage
With Transformers >= 2.8 our ELECTRA base cased model can be loaded like:
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator")
model = AutoModelWithLMHead.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator")
```
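Since the discriminator is trained to spot replaced tokens, it can also be queried directly for per-token scores. The snippet below is only a sketch: it assumes a recent Transformers version in which `ElectraForPreTraining` returns an output object with a `logits` attribute, and the example sentence is made up.

```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

model_name = "dbmdz/electra-base-turkish-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = ElectraForPreTraining.from_pretrained(model_name)

inputs = tokenizer("Ankara Türkiye'nin başkentidir.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one score per subtoken

# Higher probabilities mean the discriminator considers a token more likely "fake"
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, torch.sigmoid(logits[0])):
    print(f"{token}\t{score.item():.3f}")
```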
## Results
For results on PoS tagging or NER tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert/electra).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "tr", "license": "mit"}
|
dbmdz/electra-base-turkish-cased-discriminator
| null |
[
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"tr",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tf #electra #pretraining #tr #license-mit #endpoints_compatible #region-us
|
+ dbmdz Turkish ELECTRA model
=============================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased ELECTRA base model for Turkish
Turkish ELECTRA model
=====================
We release a base ELECTRA model for Turkish, that was trained on the same data as *BERTurk*.
>
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
>
>
>
More details about ELECTRA can be found in the ICLR paper
or in the official ELECTRA repository on GitHub.
Stats
-----
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish OSCAR corpus,
a recent Wikipedia dump, various OPUS corpora and a
special corpus provided by Kemal Oflazer.
The final training corpus has a size of 35GB and 44,04,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-8 for 1M steps.
Model weights
-------------
Transformers
compatible weights for both PyTorch and TensorFlow are available.
Usage
-----
With Transformers >= 2.8 our ELECTRA base cased model can be loaded like:
Results
-------
For results on PoS tagging or NER tasks, please refer to
this repository.
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our ELECTRA models just open an issue
here
Acknowledgments
===============
Thanks to Kemal Oflazer for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #electra #pretraining #tr #license-mit #endpoints_compatible #region-us \n"
] |
null |
transformers
|
# 🇹🇷 Turkish ELECTRA model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png">
</p>
[](https://zenodo.org/badge/latestdoi/237817454)
We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk.
Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann).
# Stats
We've also trained an ELECTRA (cased) model on the recently released Turkish part of the
[multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ELECTRA
In addition to the ELEC**TR**A base model, we also trained an ELECTRA model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz)
using their model name.
Example usage with 🤗/Transformers:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-discriminator")
model = AutoModel.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-discriminator")
```
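As a further usage sketch, the discriminator can serve as a plain encoder for feature extraction (the example sentence is made up):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/electra-base-turkish-mc4-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("Bu bir deneme cümlesidir.", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state

# Shape: (batch size, number of subtokens, hidden size)
print(hidden_states.shape)
```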
# Citation
You can use the following BibTeX entry for citation:
```bibtex
@software{stefan_schweter_2020_3770924,
author = {Stefan Schweter},
title = {BERTurk - BERT models for Turkish},
month = apr,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.3770924},
url = {https://doi.org/10.5281/zenodo.3770924}
}
```
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
|
{"language": "tr", "license": "mit", "datasets": ["allenai/c4"]}
|
dbmdz/electra-base-turkish-mc4-cased-discriminator
| null |
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"electra",
"pretraining",
"tr",
"dataset:allenai/c4",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tf #tensorboard #electra #pretraining #tr #dataset-allenai/c4 #license-mit #endpoints_compatible #region-us
|
# 🇹🇷 Turkish ELECTRA model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="URL
</p>
 model on the recently released Turkish part of the
multilingual C4 (mC4) corpus from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ELECTRA
In addition to the ELECTRA base model, we also trained an ELECTRA model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the DBMDZ Hugging Face model hub page
using their model name.
Example usage with Transformers:
You can use the following BibTeX entry for citation:
# Acknowledgments
Thanks to Kemal Oflazer for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank Merve Noyan for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
|
[
"# 🇹🇷 Turkish ELECTRA model\n\n<p align=\"center\">\n <img alt=\"Logo provided by Merve Noyan\" title=\"Awesome logo from Merve Noyan\" src=\"URL\n</p>\n\n model on the recently released Turkish part of the\nmultiligual C4 (mC4) corpus from the AI2 team.\n\nAfter filtering documents with a broken encoding, the training corpus has a size of 242GB resulting\nin 31,240,963,926 tokens.\n\nWe used the original 32k vocab (instead of creating a new one).",
"# mC4 ELECTRA\n\nIn addition to the ELECTRA base model, we also trained an ELECTRA model on the Turkish part of the mC4 corpus. We use a\nsequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.",
"# Model usage\n\nAll trained models can be used from the DBMDZ Hugging Face model hub page\nusing their model name.\n\nExample usage with /Transformers:\n\n\n\nYou can use the following BibTeX entry for citation:",
"# Acknowledgments\n\nThanks to Kemal Oflazer for providing us\nadditional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing\nus the Turkish NER dataset for evaluation.\n\nWe would like to thank Merve Noyan for the\nawesome logo!\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️"
] |
[
"TAGS\n#transformers #pytorch #tf #tensorboard #electra #pretraining #tr #dataset-allenai/c4 #license-mit #endpoints_compatible #region-us \n",
"# 🇹🇷 Turkish ELECTRA model\n\n<p align=\"center\">\n <img alt=\"Logo provided by Merve Noyan\" title=\"Awesome logo from Merve Noyan\" src=\"URL\n</p>\n\n model on the recently released Turkish part of the\nmultiligual C4 (mC4) corpus from the AI2 team.\n\nAfter filtering documents with a broken encoding, the training corpus has a size of 242GB resulting\nin 31,240,963,926 tokens.\n\nWe used the original 32k vocab (instead of creating a new one).",
"# mC4 ELECTRA\n\nIn addition to the ELECTRA base model, we also trained an ELECTRA model on the Turkish part of the mC4 corpus. We use a\nsequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.",
"# Model usage\n\nAll trained models can be used from the DBMDZ Hugging Face model hub page\nusing their model name.\n\nExample usage with /Transformers:\n\n\n\nYou can use the following BibTeX entry for citation:",
"# Acknowledgments\n\nThanks to Kemal Oflazer for providing us\nadditional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing\nus the Turkish NER dataset for evaluation.\n\nWe would like to thank Merve Noyan for the\nawesome logo!\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️"
] |
fill-mask
|
transformers
|
# 🇹🇷 Turkish ELECTRA model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png">
</p>
[](https://zenodo.org/badge/latestdoi/237817454)
We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk.
Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann).
# Stats
We've also trained an ELECTRA (cased) model on the recently released Turkish part of the
[multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ELECTRA
In addition to the ELEC**TR**A base model, we also trained an ELECTRA model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz)
using their model name.
Example usage with 🤗/Transformers:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-generator")
model = AutoModel.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-generator")
```
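Because this checkpoint is the masked-language-modelling generator, it can also be used with the `fill-mask` pipeline. A small sketch, assuming the pipeline accepts ELECTRA generator checkpoints (the example sentence is the one used in the inference widget of this repository):

```python
from transformers import pipeline

model_name = "dbmdz/electra-base-turkish-mc4-cased-generator"
fill_mask = pipeline("fill-mask", model=model_name, tokenizer=model_name)

# [MASK] is the mask token of the original 32k BERTurk vocabulary
print(fill_mask("[MASK] sözcüğü Türkçe kökenlidir"))
```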
# Citation
You can use the following BibTeX entry for citation:
```bibtex
@software{stefan_schweter_2020_3770924,
author = {Stefan Schweter},
title = {BERTurk - BERT models for Turkish},
month = apr,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.3770924},
url = {https://doi.org/10.5281/zenodo.3770924}
}
```
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
|
{"language": "tr", "license": "mit", "datasets": ["allenai/c4"], "widget": [{"text": "[MASK] s\u00f6zc\u00fc\u011f\u00fc T\u00fcrk\u00e7e k\u00f6kenlidir"}]}
|
dbmdz/electra-base-turkish-mc4-cased-generator
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"electra",
"fill-mask",
"tr",
"dataset:allenai/c4",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tf #safetensors #electra #fill-mask #tr #dataset-allenai/c4 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# 🇹🇷 Turkish ELECTRA model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="URL
</p>
 model on the recently released Turkish part of the
multilingual C4 (mC4) corpus from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ELECTRA
In addition to the ELECTRA base model, we also trained an ELECTRA model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the DBMDZ Hugging Face model hub page
using their model name.
Example usage with Transformers:
You can use the following BibTeX entry for citation:
# Acknowledgments
Thanks to Kemal Oflazer for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank Merve Noyan for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
|
[
"# 🇹🇷 Turkish ELECTRA model\n\n<p align=\"center\">\n <img alt=\"Logo provided by Merve Noyan\" title=\"Awesome logo from Merve Noyan\" src=\"URL\n</p>\n\n model on the recently released Turkish part of the\nmultiligual C4 (mC4) corpus from the AI2 team.\n\nAfter filtering documents with a broken encoding, the training corpus has a size of 242GB resulting\nin 31,240,963,926 tokens.\n\nWe used the original 32k vocab (instead of creating a new one).",
"# mC4 ELECTRA\n\nIn addition to the ELECTRA base model, we also trained an ELECTRA model on the Turkish part of the mC4 corpus. We use a\nsequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.",
"# Model usage\n\nAll trained models can be used from the DBMDZ Hugging Face model hub page\nusing their model name.\n\nExample usage with /Transformers:\n\n\n\nYou can use the following BibTeX entry for citation:",
"# Acknowledgments\n\nThanks to Kemal Oflazer for providing us\nadditional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing\nus the Turkish NER dataset for evaluation.\n\nWe would like to thank Merve Noyan for the\nawesome logo!\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #electra #fill-mask #tr #dataset-allenai/c4 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# 🇹🇷 Turkish ELECTRA model\n\n<p align=\"center\">\n <img alt=\"Logo provided by Merve Noyan\" title=\"Awesome logo from Merve Noyan\" src=\"URL\n</p>\n\n model on the recently released Turkish part of the\nmultiligual C4 (mC4) corpus from the AI2 team.\n\nAfter filtering documents with a broken encoding, the training corpus has a size of 242GB resulting\nin 31,240,963,926 tokens.\n\nWe used the original 32k vocab (instead of creating a new one).",
"# mC4 ELECTRA\n\nIn addition to the ELECTRA base model, we also trained an ELECTRA model on the Turkish part of the mC4 corpus. We use a\nsequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.",
"# Model usage\n\nAll trained models can be used from the DBMDZ Hugging Face model hub page\nusing their model name.\n\nExample usage with /Transformers:\n\n\n\nYou can use the following BibTeX entry for citation:",
"# Acknowledgments\n\nThanks to Kemal Oflazer for providing us\nadditional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing\nus the Turkish NER dataset for evaluation.\n\nWe would like to thank Merve Noyan for the\nawesome logo!\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️"
] |
null |
transformers
|
# 🇹🇷 Turkish ELECTRA model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png">
</p>
[](https://zenodo.org/badge/latestdoi/237817454)
We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk.
Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann).
# Stats
We've also trained an ELECTRA (uncased) model on the recently released Turkish part of the
[multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ELECTRA
In addition to the ELEC**TR**A base cased model, we also trained an ELECTRA uncased model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz)
using their model name.
Example usage with 🤗/Transformers:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-mc4-uncased-discriminator")
model = AutoModel.from_pretrained("dbmdz/electra-base-turkish-mc4-uncased-discriminator")
```
# Citation
You can use the following BibTeX entry for citation:
```bibtex
@software{stefan_schweter_2020_3770924,
author = {Stefan Schweter},
title = {BERTurk - BERT models for Turkish},
month = apr,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.3770924},
url = {https://doi.org/10.5281/zenodo.3770924}
}
```
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
|
{"language": "tr", "license": "mit", "datasets": ["allenai/c4"]}
|
dbmdz/electra-base-turkish-mc4-uncased-discriminator
| null |
[
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"tr",
"dataset:allenai/c4",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tf #electra #pretraining #tr #dataset-allenai/c4 #license-mit #endpoints_compatible #region-us
|
# 🇹🇷 Turkish ELECTRA model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="URL
</p>
 model on the recently released Turkish part of the
multilingual C4 (mC4) corpus from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ELECTRA
In addition to the ELECTRA base cased model, we also trained an ELECTRA uncased model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the DBMDZ Hugging Face model hub page
using their model name.
Example usage with Transformers:
You can use the following BibTeX entry for citation:
# Acknowledgments
Thanks to Kemal Oflazer for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank Merve Noyan for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
|
[
"# 🇹🇷 Turkish ELECTRA model\n\n<p align=\"center\">\n <img alt=\"Logo provided by Merve Noyan\" title=\"Awesome logo from Merve Noyan\" src=\"URL\n</p>\n\n model on the recently released Turkish part of the\nmultiligual C4 (mC4) corpus from the AI2 team.\n\nAfter filtering documents with a broken encoding, the training corpus has a size of 242GB resulting\nin 31,240,963,926 tokens.\n\nWe used the original 32k vocab (instead of creating a new one).",
"# mC4 ELECTRA\n\nIn addition to the ELECTRA base cased model, we also trained an ELECTRA uncased model on the Turkish part of the mC4 corpus. We use a\nsequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.",
"# Model usage\n\nAll trained models can be used from the DBMDZ Hugging Face model hub page\nusing their model name.\n\nExample usage with /Transformers:\n\n\n\nYou can use the following BibTeX entry for citation:",
"# Acknowledgments\n\nThanks to Kemal Oflazer for providing us\nadditional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing\nus the Turkish NER dataset for evaluation.\n\nWe would like to thank Merve Noyan for the\nawesome logo!\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️"
] |
[
"TAGS\n#transformers #pytorch #tf #electra #pretraining #tr #dataset-allenai/c4 #license-mit #endpoints_compatible #region-us \n",
"# 🇹🇷 Turkish ELECTRA model\n\n<p align=\"center\">\n <img alt=\"Logo provided by Merve Noyan\" title=\"Awesome logo from Merve Noyan\" src=\"URL\n</p>\n\n model on the recently released Turkish part of the\nmultiligual C4 (mC4) corpus from the AI2 team.\n\nAfter filtering documents with a broken encoding, the training corpus has a size of 242GB resulting\nin 31,240,963,926 tokens.\n\nWe used the original 32k vocab (instead of creating a new one).",
"# mC4 ELECTRA\n\nIn addition to the ELECTRA base cased model, we also trained an ELECTRA uncased model on the Turkish part of the mC4 corpus. We use a\nsequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.",
"# Model usage\n\nAll trained models can be used from the DBMDZ Hugging Face model hub page\nusing their model name.\n\nExample usage with /Transformers:\n\n\n\nYou can use the following BibTeX entry for citation:",
"# Acknowledgments\n\nThanks to Kemal Oflazer for providing us\nadditional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing\nus the Turkish NER dataset for evaluation.\n\nWe would like to thank Merve Noyan for the\nawesome logo!\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️"
] |
fill-mask
|
transformers
|
# 🇹🇷 Turkish ELECTRA model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="https://raw.githubusercontent.com/stefan-it/turkish-bert/master/merve_logo.png">
</p>
[](https://zenodo.org/badge/latestdoi/237817454)
We present community-driven BERT, DistilBERT, ELECTRA and ConvBERT models for Turkish 🎉
Some datasets used for pretraining and evaluation are contributed from the
awesome Turkish NLP community, as well as the decision for the BERT model name: BERTurk.
Logo is provided by [Merve Noyan](https://twitter.com/mervenoyann).
# Stats
We've also trained an ELECTRA (uncased) model on the recently released Turkish part of the
[multilingual C4 (mC4) corpus](https://github.com/allenai/allennlp/discussions/5265) from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ELECTRA
In addition to the ELEC**TR**A base cased model, we also trained an ELECTRA uncased model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the [DBMDZ](https://github.com/dbmdz) Hugging Face [model hub page](https://huggingface.co/dbmdz)
using their model name.
Example usage with 🤗/Transformers:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-mc4-uncased-generator")
model = AutoModel.from_pretrained("dbmdz/electra-base-turkish-mc4-uncased-generator")
```
# Citation
You can use the following BibTeX entry for citation:
```bibtex
@software{stefan_schweter_2020_3770924,
author = {Stefan Schweter},
title = {BERTurk - BERT models for Turkish},
month = apr,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.3770924},
url = {https://doi.org/10.5281/zenodo.3770924}
}
```
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank [Merve Noyan](https://twitter.com/mervenoyann) for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
|
{"language": "tr", "license": "mit", "datasets": ["allenai/c4"]}
|
dbmdz/electra-base-turkish-mc4-uncased-generator
| null |
[
"transformers",
"pytorch",
"tf",
"electra",
"fill-mask",
"tr",
"dataset:allenai/c4",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tf #electra #fill-mask #tr #dataset-allenai/c4 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# 🇹🇷 Turkish ELECTRA model
<p align="center">
<img alt="Logo provided by Merve Noyan" title="Awesome logo from Merve Noyan" src="URL
</p>
 model on the recently released Turkish part of the
multilingual C4 (mC4) corpus from the AI2 team.
After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting
in 31,240,963,926 tokens.
We used the original 32k vocab (instead of creating a new one).
# mC4 ELECTRA
In addition to the ELECTRA base cased model, we also trained an ELECTRA uncased model on the Turkish part of the mC4 corpus. We use a
sequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.
# Model usage
All trained models can be used from the DBMDZ Hugging Face model hub page
using their model name.
Example usage with Transformers:
You can use the following BibTeX entry for citation:
# Acknowledgments
Thanks to Kemal Oflazer for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
We would like to thank Merve Noyan for the
awesome logo!
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
|
[
"# 🇹🇷 Turkish ELECTRA model\n\n<p align=\"center\">\n <img alt=\"Logo provided by Merve Noyan\" title=\"Awesome logo from Merve Noyan\" src=\"URL\n</p>\n\n model on the recently released Turkish part of the\nmultiligual C4 (mC4) corpus from the AI2 team.\n\nAfter filtering documents with a broken encoding, the training corpus has a size of 242GB resulting\nin 31,240,963,926 tokens.\n\nWe used the original 32k vocab (instead of creating a new one).",
"# mC4 ELECTRA\n\nIn addition to the ELECTRA base cased model, we also trained an ELECTRA uncased model on the Turkish part of the mC4 corpus. We use a\nsequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.",
"# Model usage\n\nAll trained models can be used from the DBMDZ Hugging Face model hub page\nusing their model name.\n\nExample usage with /Transformers:\n\n\n\nYou can use the following BibTeX entry for citation:",
"# Acknowledgments\n\nThanks to Kemal Oflazer for providing us\nadditional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing\nus the Turkish NER dataset for evaluation.\n\nWe would like to thank Merve Noyan for the\nawesome logo!\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️"
] |
[
"TAGS\n#transformers #pytorch #tf #electra #fill-mask #tr #dataset-allenai/c4 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# 🇹🇷 Turkish ELECTRA model\n\n<p align=\"center\">\n <img alt=\"Logo provided by Merve Noyan\" title=\"Awesome logo from Merve Noyan\" src=\"URL\n</p>\n\n model on the recently released Turkish part of the\nmultiligual C4 (mC4) corpus from the AI2 team.\n\nAfter filtering documents with a broken encoding, the training corpus has a size of 242GB resulting\nin 31,240,963,926 tokens.\n\nWe used the original 32k vocab (instead of creating a new one).",
"# mC4 ELECTRA\n\nIn addition to the ELECTRA base cased model, we also trained an ELECTRA uncased model on the Turkish part of the mC4 corpus. We use a\nsequence length of 512 over the full training time and train the model for 1M steps on a v3-32 TPU.",
"# Model usage\n\nAll trained models can be used from the DBMDZ Hugging Face model hub page\nusing their model name.\n\nExample usage with /Transformers:\n\n\n\nYou can use the following BibTeX entry for citation:",
"# Acknowledgments\n\nThanks to Kemal Oflazer for providing us\nadditional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing\nus the Turkish NER dataset for evaluation.\n\nWe would like to thank Merve Noyan for the\nawesome logo!\n\nResearch supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).\nThanks for providing access to the TFRC ️"
] |
null |
transformers
|
# 🤗 + 📚 dbmdz Turkish ELECTRA model
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased ELECTRA small model for Turkish 🎉
# Turkish ELECTRA model
We release a small ELEC**TR**A model for Turkish, that was trained on the same data as *BERTurk*.
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish [OSCAR corpus](https://traces1.inria.fr/oscar/),
a recent Wikipedia dump, various [OPUS corpora](http://opus.nlpl.eu/) and a
special corpus provided by [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/).
The final training corpus has a size of 35GB and 44,04,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-8 for 1M steps.
## Model weights
[Transformers](https://github.com/huggingface/transformers)
compatible weights for both PyTorch and TensorFlow are available.
| Model | Downloads
| ------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/electra-small-turkish-cased-discriminator` | [`config.json`](https://cdn.huggingface.co/dbmdz/electra-small-turkish-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-small-turkish-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-small-turkish-cased-discriminator/vocab.txt)
## Usage
With Transformers >= 2.8 our ELECTRA small cased model can be loaded like:
```python
from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-small-turkish-cased-discriminator")
model = AutoModelWithLMHead.from_pretrained("dbmdz/electra-small-turkish-cased-discriminator")
```
## Results
For results on PoS tagging or NER tasks, please refer to
[this repository](https://github.com/stefan-it/turkish-bert/electra).
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).
# Contact (Bugs, Feedback, Contribution and more)
For questions about our ELECTRA models just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗
# Acknowledgments
Thanks to [Kemal Oflazer](http://www.andrew.cmu.edu/user/ko/) for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️
Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗
|
{"language": "tr", "license": "mit"}
|
dbmdz/electra-small-turkish-cased-discriminator
| null |
[
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"tr",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tf #electra #pretraining #tr #license-mit #endpoints_compatible #region-us
|
+ dbmdz Turkish ELECTRA model
=============================
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open sources a cased ELECTRA small model for Turkish
Turkish ELECTRA model
=====================
We release a small ELECTRA model for Turkish, that was trained on the same data as *BERTurk*.
>
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
>
>
>
More details about ELECTRA can be found in the ICLR paper
or in the official ELECTRA repository on GitHub.
Stats
-----
The current version of the model is trained on a filtered and sentence
segmented version of the Turkish OSCAR corpus,
a recent Wikipedia dump, various OPUS corpora and a
special corpus provided by Kemal Oflazer.
The final training corpus has a size of 35GB and 44,04,976,662 tokens.
Thanks to Google's TensorFlow Research Cloud (TFRC) we could train a cased model
on a TPU v3-8 for 1M steps.
Model weights
-------------
Transformers
compatible weights for both PyTorch and TensorFlow are available.
Usage
-----
With Transformers >= 2.8 our ELECTRA small cased model can be loaded like:
Results
-------
For results on PoS tagging or NER tasks, please refer to
this repository.
Huggingface model hub
=====================
All models are available on the Huggingface model hub.
Contact (Bugs, Feedback, Contribution and more)
===============================================
For questions about our ELECTRA models just open an issue
here
Acknowledgments
===============
Thanks to Kemal Oflazer for providing us
additional large corpora for Turkish. Many thanks to Reyyan Yeniterzi for providing
us the Turkish NER dataset for evaluation.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ️
Thanks to the generous support from the Hugging Face team,
it is possible to download both cased and uncased models from their S3 storage
|
[] |
[
"TAGS\n#transformers #pytorch #tf #electra #pretraining #tr #license-mit #endpoints_compatible #region-us \n"
] |
token-classification
|
flair
|
# Triple E - Effective Ensembling of Embeddings and Language Models for NER of Historical German
Based on [our paper](http://ceur-ws.org/Vol-2696/paper_173.pdf) we release a new baseline model for the German
[CLEF-HIPE shared task](https://impresso.github.io/CLEF-HIPE-2020/).
In contrast to the models used in the paper, we manually sentence-segmented the data and normalized hyphenations, and
trained a NER model using the German Europeana BERT model.
Additionally, we perform experiments with different context sizes. This approach is described in
more detail in [this paper](https://arxiv.org/abs/2011.06993).
# Results
The results with different context sizes can be seen in the following table:
| Model | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg.
| -------------------------- | --------------- | --------------- | --------------- | ------------------- | --------------- | ---------------
| German Europeana BERT | (81.45) / 76.92 | (**81.53**) / 77.03 | (80.49) / 77.83 | (80.88) / 77.19 | (81.39) / 77.00 | (81.15 ± 0.45) / 77.19 ± 0.34
| German Europeana BERT (16) | (**82.56**) / 77.38 | (81.19) / 77.76 | (80.99) / 76.34 | (81.27) / 77.70 | (81.28) / 77.22 | (81.46 ± 0.63) / 77.28 ± 0.57
| German Europeana BERT (32) | (**82.04**) / 78.50 | (81.14) / 76.56 | (81.81) / 78.28 | (81.50) / 76.90 | (81.64) / 77.94 | (81.63 ± 0.34) / 77.64 ± 0.86
| German Europeana BERT (64) | (81.21) / 78.39 | (81.27) / 75.98 | (**81.88**) / 78.40 | (81.66) / 77.35 | (81.29) / 76.70 | (81.46 ± 0.29) / 77.36 ± 1.06
| German Europeana BERT (80) | (82.13) / 77.77 | (81.31) / 76.81 | (82.09) / 78.69 | (**82.30**) / 76.79 | (80.65) / 77.10 | (81.70 ± 0.70) / 77.43 ± 0.81
For the model upload, we chose the model with the best development score: 82.56 with a context length of 16.
## Comparisons
The following figure shows the results with different context sizes (on the development dataset):

We perform "Almost Stochastic Order" tests as proposed in the
["Deep Dominance - How to Properly Compare Deep Neural Models"](https://www.aclweb.org/anthology/P19-1266/) paper.
The heatmap figure is heavily inspired by the ["CharacterBERT"](https://arxiv.org/abs/2010.10392) paper.

|
{"language": "de", "license": "mit", "tags": ["flair", "token-classification", "sequence-tagger-model"], "widget": [{"text": "Herr Oberst Brunner ist n\u00e4mlich Hauptagent f\u00fcr den Kanton Z\u00fcrich."}]}
|
dbmdz/flair-clef-hipe-german-base
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"arxiv:2011.06993",
"arxiv:2010.10392",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2011.06993",
"2010.10392"
] |
[
"de"
] |
TAGS
#flair #pytorch #token-classification #sequence-tagger-model #de #arxiv-2011.06993 #arxiv-2010.10392 #license-mit #region-us
|
Triple E - Effective Ensembling of Embeddings and Language Models for NER of Historical German
==============================================================================================
Based on our paper we release a new baseline model for the German
CLEF-HIPE shared task.
In contrast to the models used in the paper, we manually sentence-segmented the data and normalized hyphenations, and
trained a NER model using the German Europeana BERT model.
Additionally, we perform experiments with different context sizes. This approach is described in
more detail in this paper.
Results
=======
The results with different context sizes can be seen in the following table:
For the model upload, we chose the model with the best development score: 82.56 with a context length of 16.
Comparisons
-----------
The following figure shows the results with different context sizes (on the development dataset):
!German CLEF-HIPE Development Results
We perform "Almost Stochastic Order" tests as proposed in the
"Deep Dominance - How to Properly Compare Deep Neural Models" paper.
The heatmap figure is heavily inspired by the "CharacterBERT" paper.
!Almost Stochastic Order Tests on Development set
|
[] |
[
"TAGS\n#flair #pytorch #token-classification #sequence-tagger-model #de #arxiv-2011.06993 #arxiv-2010.10392 #license-mit #region-us \n"
] |
token-classification
|
flair
|
# Flair NER model trained on GermEval14 dataset
This model was trained on the official [GermEval14](https://sites.google.com/site/germeval2014ner/data)
dataset using the [Flair](https://github.com/flairNLP/flair) framework.
It uses a fine-tuned German DistilBERT model from [here](https://huggingface.co/distilbert-base-german-cased).
# Results
| Dataset \ Run | Run 1 | Run 2 | Run 3† | Run 4 | Run 5 | Avg.
| ------------- | ----- | ----- | --------- | ----- | ----- | ----
| Development | 87.05 | 86.52 | **87.34** | 86.85 | 86.46 | 86.84
| Test | 85.43 | 85.88 | 85.72 | 85.47 | 85.62 | 85.62
† denotes that this model is selected for upload.
# Flair Fine-Tuning
We used the following script to fine-tune the model on the GermEval14 dataset:
```python
from argparse import ArgumentParser
import torch, flair
# dataset, model and embedding imports
from flair.datasets import GERMEVAL_14
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer
if __name__ == "__main__":
# All arguments that can be passed
parser = ArgumentParser()
parser.add_argument("-s", "--seeds", nargs='+', type=int, default='42') # pass list of seeds for experiments
parser.add_argument("-c", "--cuda", type=int, default=0, help="CUDA device") # which cuda device to use
parser.add_argument("-m", "--model", type=str, help="Model name (such as Hugging Face model hub name")
# Parse experimental arguments
args = parser.parse_args()
# use cuda device as passed
flair.device = f'cuda:{str(args.cuda)}'
# for each passed seed, do one experimental run
for seed in args.seeds:
flair.set_seed(seed)
# model
hf_model = args.model
# initialize embeddings
embeddings = TransformerWordEmbeddings(
model=hf_model,
layers="-1",
subtoken_pooling="first",
fine_tune=True,
use_context=False,
respect_document_boundaries=False,
)
# select dataset depending on which language variable is passed
corpus = GERMEVAL_14()
# make the dictionary of tags to predict
tag_dictionary = corpus.make_tag_dictionary('ner')
# init bare-bones sequence tagger (no reprojection, LSTM or CRF)
tagger: SequenceTagger = SequenceTagger(
hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type='ner',
use_crf=False,
use_rnn=False,
reproject_embeddings=False,
)
# init the model trainer
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)
# make string for output folder
output_folder = f"flert-ner-{hf_model}-{seed}"
# train with XLM parameters (AdamW, 20 epochs, small LR)
from torch.optim.lr_scheduler import OneCycleLR
trainer.train(
output_folder,
learning_rate=5.0e-5,
mini_batch_size=16,
mini_batch_chunk_size=1,
max_epochs=10,
scheduler=OneCycleLR,
embeddings_storage_mode='none',
weight_decay=0.,
train_with_dev=False,
)
```
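Once fine-tuned (or when using the uploaded checkpoint), the tagger can be applied as follows. This is a sketch that assumes the model can be loaded directly via its model hub name; the example sentence is the one from the inference widget:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("stefan-it/flair-distilbert-ner-germeval14")

sentence = Sentence("Hugging Face ist eine französische Firma mit Sitz in New York.")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```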
|
{"language": "de", "license": "mit", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["germeval_14"], "widget": [{"text": "Hugging Face ist eine franz\u00f6sische Firma mit Sitz in New York."}]}
|
stefan-it/flair-distilbert-ner-germeval14
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"dataset:germeval_14",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#flair #pytorch #token-classification #sequence-tagger-model #de #dataset-germeval_14 #license-mit #region-us
|
Flair NER model trained on GermEval14 dataset
=============================================
This model was trained on the official GermEval14
dataset using the Flair framework.
It uses a fine-tuned German DistilBERT model from here.
Results
=======
† denotes that this model is selected for upload.
Flair Fine-Tuning
=================
We used the following script to fine-tune the model on the GermEval14 dataset:
|
[] |
[
"TAGS\n#flair #pytorch #token-classification #sequence-tagger-model #de #dataset-germeval_14 #license-mit #region-us \n"
] |
token-classification
|
flair
|
# Towards Robust Named Entity Recognition for Historic German
Based on [our paper](https://www.aclweb.org/anthology/W19-4312/)
we release a new model trained on the LFT dataset.
**Note:** We use BPEmbeddings instead of the combination of
Wikipedia, Common Crawl and character embeddings (as used in the paper),
to save space and training/inference time.
# Results
| Dataset \ Run | Run 1 | Run 2 | Run 3† | Avg.
| ------------- | ----- | ----- | --------- | ------------
| Development | 76.32 | 76.13 | **76.36** | 76.27
| Test | 77.07 | 77.35 | 77.20 | 77.21
The paper reported an average F1-score of 77.51.
† denotes that this model is selected for upload.
|
{"language": "de", "license": "mit", "tags": ["flair", "token-classification", "sequence-tagger-model"], "inference": false}
|
dbmdz/flair-historic-ner-lft
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#flair #pytorch #token-classification #sequence-tagger-model #de #license-mit #region-us
|
Towards Robust Named Entity Recognition for Historic German
===========================================================
Based on our paper
we release a new model trained on the LFT dataset.
Note: We use BPEmbeddings instead of the combination of
Wikipedia, Common Crawl and character embeddings (as used in the paper),
to save space and training/inference time.
Results
=======
The paper reported an average F1-score of 77.51.
† denotes that this model is selected for upload.
|
[] |
[
"TAGS\n#flair #pytorch #token-classification #sequence-tagger-model #de #license-mit #region-us \n"
] |
token-classification
|
flair
|
# Towards Robust Named Entity Recognition for Historic German
Based on [our paper](https://www.aclweb.org/anthology/W19-4312/)
we release a new model trained on the ONB dataset.
**Note:** We use BPEmbeddings instead of the combination of
Wikipedia, Common Crawl and character embeddings (as used in the paper),
to save space and training/inference time.
# Results
| Dataset \ Run | Run 1 | Run 2 | Run 3 | Avg.
| ------------- | ----- | ----- | --------- | ------------
| Development | 86.69 | 86.13 | **87.18** | 86.67
| Test | 85.27 | 86.05 | 85.75† | 85.69
The paper reported an average F1-score of 85.31.
† denotes that this model is selected for upload.
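As with the LFT model, the tagger can be used through the Flair API — a sketch that assumes loading by model hub name works; the example sentence comes from the inference widget:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("dbmdz/flair-historic-ner-onb")

sentence = Sentence("April Martin Ansclm, K. Gefangen-Auffehers Georg Sausgruber.")
tagger.predict(sentence)

for entity in sentence.get_spans("ner"):
    print(entity)
```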
|
{"language": "de", "license": "mit", "tags": ["flair", "token-classification", "sequence-tagger-model"], "widget": [{"text": "April Martin Ansclm, K. Gefangen-Auffehers Georg Sausgruber."}]}
|
dbmdz/flair-historic-ner-onb
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#flair #pytorch #token-classification #sequence-tagger-model #de #license-mit #region-us
|
Towards Robust Named Entity Recognition for Historic German
===========================================================
Based on our paper
we release a new model trained on the ONB dataset.
Note: We use BPEmbeddings instead of the combination of
Wikipedia, Common Crawl and character embeddings (as used in the paper),
to save space and training/inference time.
Results
=======
The paper reported an average F1-score of 85.31.
† denotes that this model is selected for upload.
|
[] |
[
"TAGS\n#flair #pytorch #token-classification #sequence-tagger-model #de #license-mit #region-us \n"
] |