Columns:

| Column | Type | Range / values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-28 06:27:35 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 523 distinct values |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-28 06:27:22 |
| card | string | lengths 11 to 1.01M |

Records (one metadata row per model, followed by the full text of its `card` column):

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt |
|---|---|---|---|---|---|---|---|---|
| stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T23:04:48Z | 2 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fi", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-11T03:34:54Z |
---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset, using hmByT5 as the backbone language model.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
The inference widget therefore does not currently work with hmByT5 models on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
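For illustration only (this sketch is not part of the original card), loading and running one of these taggers in Flair would look roughly as follows. It assumes the `ByT5Embedding` class from the `byt5_embeddings.py` linked above has already been made importable in the running environment, as required; the model id and the `ner` label type are taken from this card.

```python
# Rough usage sketch. Assumption: the custom ByT5Embedding class from the
# hmBench repository (byt5_embeddings.py, linked above) is importable in this
# environment; without it, loading the model will fail, as described above.
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger directly from the Hugging Face Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5"
)

# Tag a pre-tokenized historical Finnish sentence (the widget example from above).
sentence = Sentence(
    "Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi "
    "kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon "
    "näyttelyn puolesta ."
)
tagger.predict(sentence)

# Print the recognized entities (PER, LOC, ORG, HumanProd), assuming the
# tagger's label type is "ner".
for entity in sentence.get_spans("ner"):
    print(entity)
```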
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8017][1] | [0.7406][2] | [0.7706][3] | [0.7759][4] | [0.7983][5] | 77.74 ± 2.2 |
| bs4-e10-lr0.00015 | [0.7729][6] | [0.7553][7] | [0.7526][8] | [0.7547][9] | [0.7913][10] | 76.54 ± 1.49 |
| bs8-e10-lr0.00016 | [0.6638][11] | [0.5875][12] | [0.78][13] | [0.7804][14] | [0.7176][15] | 70.59 ± 7.34 |
| bs8-e10-lr0.00015 | [0.6783][16] | [0.5867][17] | [0.7229][18] | [0.7761][19] | [0.697][20] | 69.22 ± 6.22 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the Model Hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
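As a quick sanity check (not part of the original card), the Avg. column can be reproduced from the linked per-run scores: it appears to be the mean of the five development F1-scores expressed in percent, with the spread given by the population standard deviation. A minimal sketch under that assumption:

```python
import numpy as np

# Per-run development micro F1-scores for the bs4-e10-lr0.00016 configuration,
# copied from the table above.
runs = np.array([0.8017, 0.7406, 0.7706, 0.7759, 0.7983])

mean = 100 * runs.mean()
std = 100 * runs.std()  # np.std() uses the population standard deviation (ddof=0)

print(f"{mean:.2f} ± {std:.2f}")  # prints 77.74 ± 2.20, i.e. the table's 77.74 ± 2.2
```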
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:04:47Z | 2 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fi", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-11T01:08:39Z |
---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset, using hmByT5 as the backbone language model.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
The inference widget therefore does not currently work with hmByT5 models on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8017][1] | [0.7406][2] | [0.7706][3] | [0.7759][4] | [0.7983][5] | 77.74 ± 2.2 |
| bs4-e10-lr0.00015 | [0.7729][6] | [0.7553][7] | [0.7526][8] | [0.7547][9] | [0.7913][10] | 76.54 ± 1.49 |
| bs8-e10-lr0.00016 | [0.6638][11] | [0.5875][12] | [0.78][13] | [0.7804][14] | [0.7176][15] | 70.59 ± 7.34 |
| bs8-e10-lr0.00015 | [0.6783][16] | [0.5867][17] | [0.7229][18] | [0.7761][19] | [0.697][20] | 69.22 ± 6.22 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the Model Hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T23:04:46Z | 4 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fi", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-10T23:53:25Z |
---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset, using hmByT5 as the backbone language model.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
The inference widget therefore does not currently work with hmByT5 models on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8017][1] | [0.7406][2] | [0.7706][3] | [0.7759][4] | [0.7983][5] | 77.74 ± 2.2 |
| bs4-e10-lr0.00015 | [0.7729][6] | [0.7553][7] | [0.7526][8] | [0.7547][9] | [0.7913][10] | 76.54 ± 1.49 |
| bs8-e10-lr0.00016 | [0.6638][11] | [0.5875][12] | [0.78][13] | [0.7804][14] | [0.7176][15] | 70.59 ± 7.34 |
| bs8-e10-lr0.00015 | [0.6783][16] | [0.5867][17] | [0.7229][18] | [0.7761][19] | [0.697][20] | 69.22 ± 6.22 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the Model Hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:04:44Z | 2 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fi", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-11T00:49:42Z |
---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset, using hmByT5 as the backbone language model.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
The inference widget therefore does not currently work with hmByT5 models on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8017][1] | [0.7406][2] | [0.7706][3] | [0.7759][4] | [0.7983][5] | 77.74 ± 2.2 |
| bs4-e10-lr0.00015 | [0.7729][6] | [0.7553][7] | [0.7526][8] | [0.7547][9] | [0.7913][10] | 76.54 ± 1.49 |
| bs8-e10-lr0.00016 | [0.6638][11] | [0.5875][12] | [0.78][13] | [0.7804][14] | [0.7176][15] | 70.59 ± 7.34 |
| bs8-e10-lr0.00015 | [0.6783][16] | [0.5867][17] | [0.7229][18] | [0.7761][19] | [0.697][20] | 69.22 ± 6.22 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the Model Hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T23:04:44Z | 1 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fi", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-11T02:02:52Z |
---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset, using hmByT5 as the backbone language model.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
The inference widget therefore does not currently work with hmByT5 models on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8017][1] | [0.7406][2] | [0.7706][3] | [0.7759][4] | [0.7983][5] | 77.74 ± 2.2 |
| bs4-e10-lr0.00015 | [0.7729][6] | [0.7553][7] | [0.7526][8] | [0.7547][9] | [0.7913][10] | 76.54 ± 1.49 |
| bs8-e10-lr0.00016 | [0.6638][11] | [0.5875][12] | [0.78][13] | [0.7804][14] | [0.7176][15] | 70.59 ± 7.34 |
| bs8-e10-lr0.00015 | [0.6783][16] | [0.5867][17] | [0.7229][18] | [0.7761][19] | [0.697][20] | 69.22 ± 6.22 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the Model Hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T23:04:43Z | 2 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fi", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-10T23:34:58Z |
---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset, using hmByT5 as the backbone language model.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
The inference widget therefore does not currently work with hmByT5 models on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8017][1] | [0.7406][2] | [0.7706][3] | [0.7759][4] | [0.7983][5] | 77.74 ± 2.2 |
| bs4-e10-lr0.00015 | [0.7729][6] | [0.7553][7] | [0.7526][8] | [0.7547][9] | [0.7913][10] | 76.54 ± 1.49 |
| bs8-e10-lr0.00016 | [0.6638][11] | [0.5875][12] | [0.78][13] | [0.7804][14] | [0.7176][15] | 70.59 ± 7.34 |
| bs8-e10-lr0.00015 | [0.6783][16] | [0.5867][17] | [0.7229][18] | [0.7761][19] | [0.697][20] | 69.22 ± 6.22 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the Model Hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T23:04:41Z | 3 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fi", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-11T01:44:44Z |
---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset, using hmByT5 as the backbone language model.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
The inference widget therefore does not currently work with hmByT5 models on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8017][1] | [0.7406][2] | [0.7706][3] | [0.7759][4] | [0.7983][5] | 77.74 ± 2.2 |
| bs4-e10-lr0.00015 | [0.7729][6] | [0.7553][7] | [0.7526][8] | [0.7547][9] | [0.7913][10] | 76.54 ± 1.49 |
| bs8-e10-lr0.00016 | [0.6638][11] | [0.5875][12] | [0.78][13] | [0.7804][14] | [0.7176][15] | 70.59 ± 7.34 |
| bs8-e10-lr0.00015 | [0.6783][16] | [0.5867][17] | [0.7229][18] | [0.7761][19] | [0.697][20] | 69.22 ± 6.22 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the Model Hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
| hmbyt5-preliminary/flair-hipe-2022-newseye-fr | hmbyt5-preliminary | 2023-10-17T23:03:54Z | 8 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-12T03:02:01Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset, using hmByT5 as the backbone language model.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
The inference widget therefore does not currently work with hmByT5 models on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.793][1] | [0.803][2] | [0.8054][3] | [0.8069][4] | [0.8133][5] | 80.43 ± 0.66 |
| bs8-e10-lr0.00016 | [0.7888][6] | [0.8094][7] | [0.8043][8] | [0.8011][9] | [0.8117][10] | 80.31 ± 0.8 |
| bs4-e10-lr0.00015 | [0.7884][11] | [0.8109][12] | [0.8005][13] | [0.8083][14] | [0.8022][15] | 80.21 ± 0.78 |
| bs8-e10-lr0.00015 | [0.788][16] | [0.8003][17] | [0.8067][18] | [0.8035][19] | [0.8064][20] | 80.1 ± 0.69 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the Model Hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T23:03:53Z | 5 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-11T14:37:17Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset, using hmByT5 as the backbone language model.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
The inference widget therefore does not currently work with hmByT5 models on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.793][1] | [0.803][2] | [0.8054][3] | [0.8069][4] | [0.8133][5] | 80.43 ± 0.66 |
| bs8-e10-lr0.00016 | [0.7888][6] | [0.8094][7] | [0.8043][8] | [0.8011][9] | [0.8117][10] | 80.31 ± 0.8 |
| bs4-e10-lr0.00015 | [0.7884][11] | [0.8109][12] | [0.8005][13] | [0.8083][14] | [0.8022][15] | 80.21 ± 0.78 |
| bs8-e10-lr0.00015 | [0.788][16] | [0.8003][17] | [0.8067][18] | [0.8035][19] | [0.8064][20] | 80.1 ± 0.69 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the Model Hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T23:03:52Z | 4 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-11T01:59:39Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset, using hmByT5 as the backbone language model.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
The inference widget therefore does not currently work with hmByT5 models on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.793][1] | [0.803][2] | [0.8054][3] | [0.8069][4] | [0.8133][5] | 80.43 ± 0.66 |
| bs8-e10-lr0.00016 | [0.7888][6] | [0.8094][7] | [0.8043][8] | [0.8011][9] | [0.8117][10] | 80.31 ± 0.8 |
| bs4-e10-lr0.00015 | [0.7884][11] | [0.8109][12] | [0.8005][13] | [0.8083][14] | [0.8022][15] | 80.21 ± 0.78 |
| bs8-e10-lr0.00015 | [0.788][16] | [0.8003][17] | [0.8067][18] | [0.8035][19] | [0.8064][20] | 80.1 ± 0.69 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the Model Hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research was supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T23:03:52Z | 6 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-11T08:17:06Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done with a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not work with hmByT5 and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
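Until then, the model can be used locally. The snippet below is a minimal, untested sketch: it assumes that `byt5_embeddings.py` from the [hmBench repository](https://github.com/stefan-it/hmBench) is on the Python path (so that `ByT5Embedding` can be resolved when the tagger is deserialized) and that the tagger was trained with the `ner` label type.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# The custom embedding class must be importable *before* loading the tagger,
# otherwise Flair cannot deserialize the stored embeddings.
from byt5_embeddings import ByT5Embedding  # noqa: F401

# Load this repository's fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2"
)

# Tag a shortened version of the widget example above.
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne ."
)
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)
```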
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.793][1] | [0.803][2] | [0.8054][3] | [0.8069][4] | [0.8133][5] | 80.43 ± 0.66 |
| bs8-e10-lr0.00016 | [0.7888][6] | [0.8094][7] | [0.8043][8] | [0.8011][9] | [0.8117][10] | 80.31 ± 0.8 |
| bs4-e10-lr0.00015 | [0.7884][11] | [0.8109][12] | [0.8005][13] | [0.8083][14] | [0.8022][15] | 80.21 ± 0.78 |
| bs8-e10-lr0.00015 | [0.788][16] | [0.8003][17] | [0.8067][18] | [0.8035][19] | [0.8064][20] | 80.1 ± 0.69 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
|
stefan-it
| 2023-10-17T23:03:50Z | 9 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-11T19:12:53Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done with a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not work with hmByT5 and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
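Until then, the model can be used locally. The snippet below is a minimal, untested sketch: it assumes that `byt5_embeddings.py` from the [hmBench repository](https://github.com/stefan-it/hmBench) is on the Python path (so that `ByT5Embedding` can be resolved when the tagger is deserialized) and that the tagger was trained with the `ner` label type.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# The custom embedding class must be importable *before* loading the tagger,
# otherwise Flair cannot deserialize the stored embeddings.
from byt5_embeddings import ByT5Embedding  # noqa: F401

# Load this repository's fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4"
)

# Tag a shortened version of the widget example above.
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne ."
)
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)
```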
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.793][1] | [0.803][2] | [0.8054][3] | [0.8069][4] | [0.8133][5] | 80.43 ± 0.66 |
| bs8-e10-lr0.00016 | [0.7888][6] | [0.8094][7] | [0.8043][8] | [0.8011][9] | [0.8117][10] | 80.31 ± 0.8 |
| bs4-e10-lr0.00015 | [0.7884][11] | [0.8109][12] | [0.8005][13] | [0.8083][14] | [0.8022][15] | 80.21 ± 0.78 |
| bs8-e10-lr0.00015 | [0.788][16] | [0.8003][17] | [0.8067][18] | [0.8035][19] | [0.8064][20] | 80.1 ± 0.69 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
|
stefan-it
| 2023-10-17T23:03:49Z | 8 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-11T06:38:40Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done with a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not work with hmByT5 and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
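Until then, the model can be used locally. The snippet below is a minimal, untested sketch: it assumes that `byt5_embeddings.py` from the [hmBench repository](https://github.com/stefan-it/hmBench) is on the Python path (so that `ByT5Embedding` can be resolved when the tagger is deserialized) and that the tagger was trained with the `ner` label type.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# The custom embedding class must be importable *before* loading the tagger,
# otherwise Flair cannot deserialize the stored embeddings.
from byt5_embeddings import ByT5Embedding  # noqa: F401

# Load this repository's fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2"
)

# Tag a shortened version of the widget example above.
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne ."
)
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)
```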
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.793][1] | [0.803][2] | [0.8054][3] | [0.8069][4] | [0.8133][5] | 80.43 ± 0.66 |
| bs8-e10-lr0.00016 | [0.7888][6] | [0.8094][7] | [0.8043][8] | [0.8011][9] | [0.8117][10] | 80.31 ± 0.8 |
| bs4-e10-lr0.00015 | [0.7884][11] | [0.8109][12] | [0.8005][13] | [0.8083][14] | [0.8022][15] | 80.21 ± 0.78 |
| bs8-e10-lr0.00015 | [0.788][16] | [0.8003][17] | [0.8067][18] | [0.8035][19] | [0.8064][20] | 80.1 ± 0.69 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
|
stefan-it
| 2023-10-17T23:03:49Z | 4 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-11T00:22:14Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done with a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not work with hmByT5 and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
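Until then, the model can be used locally. The snippet below is a minimal, untested sketch: it assumes that `byt5_embeddings.py` from the [hmBench repository](https://github.com/stefan-it/hmBench) is on the Python path (so that `ByT5Embedding` can be resolved when the tagger is deserialized) and that the tagger was trained with the `ner` label type.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# The custom embedding class must be importable *before* loading the tagger,
# otherwise Flair cannot deserialize the stored embeddings.
from byt5_embeddings import ByT5Embedding  # noqa: F401

# Load this repository's fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1"
)

# Tag a shortened version of the widget example above.
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne ."
)
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)
```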
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.793][1] | [0.803][2] | [0.8054][3] | [0.8069][4] | [0.8133][5] | 80.43 ± 0.66 |
| bs8-e10-lr0.00016 | [0.7888][6] | [0.8094][7] | [0.8043][8] | [0.8011][9] | [0.8117][10] | 80.31 ± 0.8 |
| bs4-e10-lr0.00015 | [0.7884][11] | [0.8109][12] | [0.8005][13] | [0.8083][14] | [0.8022][15] | 80.21 ± 0.78 |
| bs8-e10-lr0.00015 | [0.788][16] | [0.8003][17] | [0.8067][18] | [0.8035][19] | [0.8064][20] | 80.1 ± 0.69 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
|
stefan-it
| 2023-10-17T23:03:44Z | 7 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-11T16:06:02Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done with a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not work with hmByT5 and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
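Until then, the model can be used locally. The snippet below is a minimal, untested sketch: it assumes that `byt5_embeddings.py` from the [hmBench repository](https://github.com/stefan-it/hmBench) is on the Python path (so that `ByT5Embedding` can be resolved when the tagger is deserialized) and that the tagger was trained with the `ner` label type.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# The custom embedding class must be importable *before* loading the tagger,
# otherwise Flair cannot deserialize the stored embeddings.
from byt5_embeddings import ByT5Embedding  # noqa: F401

# Load this repository's fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4"
)

# Tag a shortened version of the widget example above.
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne ."
)
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)
```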
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.793][1] | [0.803][2] | [0.8054][3] | [0.8069][4] | [0.8133][5] | 80.43 ± 0.66 |
| bs8-e10-lr0.00016 | [0.7888][6] | [0.8094][7] | [0.8043][8] | [0.8011][9] | [0.8117][10] | 80.31 ± 0.8 |
| bs4-e10-lr0.00015 | [0.7884][11] | [0.8109][12] | [0.8005][13] | [0.8083][14] | [0.8022][15] | 80.21 ± 0.78 |
| bs8-e10-lr0.00015 | [0.788][16] | [0.8003][17] | [0.8067][18] | [0.8035][19] | [0.8064][20] | 80.1 ± 0.69 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
|
stefan-it
| 2023-10-17T23:03:43Z | 7 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-11T09:51:09Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done with a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not work with hmByT5 and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
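Until then, the model can be used locally. The snippet below is a minimal, untested sketch: it assumes that `byt5_embeddings.py` from the [hmBench repository](https://github.com/stefan-it/hmBench) is on the Python path (so that `ByT5Embedding` can be resolved when the tagger is deserialized) and that the tagger was trained with the `ner` label type.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# The custom embedding class must be importable *before* loading the tagger,
# otherwise Flair cannot deserialize the stored embeddings.
from byt5_embeddings import ByT5Embedding  # noqa: F401

# Load this repository's fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3"
)

# Tag a shortened version of the widget example above.
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne ."
)
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)
```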
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.793][1] | [0.803][2] | [0.8054][3] | [0.8069][4] | [0.8133][5] | 80.43 ± 0.66 |
| bs8-e10-lr0.00016 | [0.7888][6] | [0.8094][7] | [0.8043][8] | [0.8011][9] | [0.8117][10] | 80.31 ± 0.8 |
| bs4-e10-lr0.00015 | [0.7884][11] | [0.8109][12] | [0.8005][13] | [0.8083][14] | [0.8022][15] | 80.21 ± 0.78 |
| bs8-e10-lr0.00015 | [0.788][16] | [0.8003][17] | [0.8067][18] | [0.8035][19] | [0.8064][20] | 80.1 ± 0.69 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
|
stefan-it
| 2023-10-17T23:03:42Z | 8 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-11T03:31:04Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done with a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not work with hmByT5 and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
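Until then, the model can be used locally. The snippet below is a minimal, untested sketch: it assumes that `byt5_embeddings.py` from the [hmBench repository](https://github.com/stefan-it/hmBench) is on the Python path (so that `ByT5Embedding` can be resolved when the tagger is deserialized) and that the tagger was trained with the `ner` label type.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# The custom embedding class must be importable *before* loading the tagger,
# otherwise Flair cannot deserialize the stored embeddings.
from byt5_embeddings import ByT5Embedding  # noqa: F401

# Load this repository's fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2"
)

# Tag a shortened version of the widget example above.
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne ."
)
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)
```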
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.793][1] | [0.803][2] | [0.8054][3] | [0.8069][4] | [0.8133][5] | 80.43 ± 0.66 |
| bs8-e10-lr0.00016 | [0.7888][6] | [0.8094][7] | [0.8043][8] | [0.8011][9] | [0.8117][10] | 80.31 ± 0.8 |
| bs4-e10-lr0.00015 | [0.7884][11] | [0.8109][12] | [0.8005][13] | [0.8083][14] | [0.8022][15] | 80.21 ± 0.78 |
| bs8-e10-lr0.00015 | [0.788][16] | [0.8003][17] | [0.8067][18] | [0.8035][19] | [0.8064][20] | 80.1 ± 0.69 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
|
stefan-it
| 2023-10-17T23:03:41Z | 6 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-10T21:11:09Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done with a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not work with hmByT5 and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
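Until then, the model can be used locally. The snippet below is a minimal, untested sketch: it assumes that `byt5_embeddings.py` from the [hmBench repository](https://github.com/stefan-it/hmBench) is on the Python path (so that `ByT5Embedding` can be resolved when the tagger is deserialized) and that the tagger was trained with the `ner` label type.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# The custom embedding class must be importable *before* loading the tagger,
# otherwise Flair cannot deserialize the stored embeddings.
from byt5_embeddings import ByT5Embedding  # noqa: F401

# Load this repository's fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1"
)

# Tag a shortened version of the widget example above.
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne ."
)
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)
```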
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.793][1] | [0.803][2] | [0.8054][3] | [0.8069][4] | [0.8133][5] | 80.43 ± 0.66 |
| bs8-e10-lr0.00016 | [0.7888][6] | [0.8094][7] | [0.8043][8] | [0.8011][9] | [0.8117][10] | 80.31 ± 0.8 |
| bs4-e10-lr0.00015 | [0.7884][11] | [0.8109][12] | [0.8005][13] | [0.8083][14] | [0.8022][15] | 80.21 ± 0.78 |
| bs8-e10-lr0.00015 | [0.788][16] | [0.8003][17] | [0.8067][18] | [0.8035][19] | [0.8064][20] | 80.1 ± 0.69 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
lljllll2219/uk-mt5-base-xlsum-v1
|
lljllll2219
| 2023-10-17T23:03:00Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:xlsum",
"base_model:kravchenko/uk-mt5-base",
"base_model:finetune:kravchenko/uk-mt5-base",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-10-17T22:18:17Z |
---
base_model: kravchenko/uk-mt5-base
tags:
- summarization
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: uk-mt5-base-xlsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
config: ukrainian
split: validation
args: ukrainian
metrics:
- name: Rouge1
type: rouge
value: 3.8556
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uk-mt5-base-xlsum
This model is a fine-tuned version of [kravchenko/uk-mt5-base](https://huggingface.co/kravchenko/uk-mt5-base) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3660
- Rouge1: 3.8556
- Rouge2: 1.5556
- Rougel: 3.7833
- Rougelsum: 3.6889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
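As a rough illustration only, the listed hyperparameters could be expressed with the 🤗 Transformers `Seq2SeqTrainingArguments` API as sketched below; the actual training script, data preprocessing, and generation settings used for this model are not documented here, so everything beyond the listed values is an assumption.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical mapping of the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="uk-mt5-base-xlsum",
    learning_rate=5.6e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=8,
    lr_scheduler_type="linear",  # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer
    predict_with_generate=True,  # needed for ROUGE evaluation (assumption)
)
```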
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 5.31 | 1.0 | 375 | 2.5055 | 2.3333 | 0.8 | 2.3143 | 2.3238 |
| 3.254 | 2.0 | 750 | 2.4034 | 3.5444 | 1.1111 | 3.5333 | 3.4833 |
| 2.9813 | 3.0 | 1125 | 2.3844 | 3.7278 | 1.4444 | 3.6889 | 3.6333 |
| 2.8117 | 4.0 | 1500 | 2.3785 | 3.3222 | 1.1111 | 3.2556 | 3.2167 |
| 2.681 | 5.0 | 1875 | 2.3671 | 4.1667 | 1.5556 | 4.0667 | 4.0444 |
| 2.5825 | 6.0 | 2250 | 2.3705 | 3.6889 | 1.5556 | 3.6 | 3.5333 |
| 2.5151 | 7.0 | 2625 | 2.3654 | 3.6889 | 1.5556 | 3.6 | 3.5333 |
| 2.4798 | 8.0 | 3000 | 2.3660 | 3.8556 | 1.5556 | 3.7833 | 3.6889 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
|
stefan-it
| 2023-10-17T23:02:32Z | 8 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-11T02:35:07Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done with a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not work with hmByT5 and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
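Until then, the model can be used locally. The snippet below is a minimal, untested sketch: it assumes that `byt5_embeddings.py` from the [hmBench repository](https://github.com/stefan-it/hmBench) is on the Python path (so that `ByT5Embedding` can be resolved when the tagger is deserialized) and that the tagger was trained with the `ner` label type.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# The custom embedding class must be importable *before* loading the tagger,
# otherwise Flair cannot deserialize the stored embeddings.
from byt5_embeddings import ByT5Embedding  # noqa: F401

# Load this repository's fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2"
)

# Tag the widget example above.
sentence = Sentence(
    "In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen ."
)
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)
```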
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.401][1] | [0.3992][2] | [0.4115][3] | [0.4007][4] | [0.4289][5] | 40.83 ± 1.12 |
| bs8-e10-lr0.00016 | [0.4105][6] | [0.3921][7] | [0.3855][8] | [0.4079][9] | [0.4054][10] | 40.03 ± 0.97 |
| bs4-e10-lr0.00015 | [0.3954][11] | [0.3828][12] | [0.413][13] | [0.404][14] | [0.4028][15] | 39.96 ± 1.01 |
| bs8-e10-lr0.00015 | [0.4053][16] | [0.3935][17] | [0.3927][18] | [0.3794][19] | [0.4146][20] | 39.71 ± 1.2 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
|
stefan-it
| 2023-10-17T23:02:32Z | 5 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-11T18:39:14Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done with a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not work with hmByT5 and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
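Until then, the model can be used locally. The snippet below is a minimal, untested sketch: it assumes that `byt5_embeddings.py` from the [hmBench repository](https://github.com/stefan-it/hmBench) is on the Python path (so that `ByT5Embedding` can be resolved when the tagger is deserialized) and that the tagger was trained with the `ner` label type.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# The custom embedding class must be importable *before* loading the tagger,
# otherwise Flair cannot deserialize the stored embeddings.
from byt5_embeddings import ByT5Embedding  # noqa: F401

# Load this repository's fine-tuned tagger from the Hugging Face Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3"
)

# Tag the widget example above.
sentence = Sentence(
    "In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka ungiltig erklärt , weil sie keinen Wohnort aufwiesen ."
)
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)
```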
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.401][1] | [0.3992][2] | [0.4115][3] | [0.4007][4] | [0.4289][5] | 40.83 ± 1.12 |
| bs8-e10-lr0.00016 | [0.4105][6] | [0.3921][7] | [0.3855][8] | [0.4079][9] | [0.4054][10] | 40.03 ± 0.97 |
| bs4-e10-lr0.00015 | [0.3954][11] | [0.3828][12] | [0.413][13] | [0.404][14] | [0.4028][15] | 39.96 ± 1.01 |
| bs8-e10-lr0.00015 | [0.4053][16] | [0.3935][17] | [0.3927][18] | [0.3794][19] | [0.4146][20] | 39.71 ± 1.2 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
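The Avg. column is the mean of the five runs with its population standard deviation, expressed in percent; for example, the bs4-e10-lr0.00016 row can be recomputed from the individual runs:
```python
import statistics

# Development F1-scores of the five bs4-e10-lr0.00016 runs (see table above).
scores = [0.401, 0.3992, 0.4115, 0.4007, 0.4289]

mean = statistics.mean(scores) * 100
std = statistics.pstdev(scores) * 100  # population standard deviation

print(f"{mean:.2f} ± {std:.2f}")  # -> 40.83 ± 1.12
```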
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
|
stefan-it
| 2023-10-17T23:02:30Z | 5 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-12T23:12:32Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not work with hmByT5 on the Model Hub and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.401][1] | [0.3992][2] | [0.4115][3] | [0.4007][4] | [0.4289][5] | 40.83 ± 1.12 |
| bs8-e10-lr0.00016 | [0.4105][6] | [0.3921][7] | [0.3855][8] | [0.4079][9] | [0.4054][10] | 40.03 ± 0.97 |
| bs4-e10-lr0.00015 | [0.3954][11] | [0.3828][12] | [0.413][13] | [0.404][14] | [0.4028][15] | 39.96 ± 1.01 |
| bs8-e10-lr0.00015 | [0.4053][16] | [0.3935][17] | [0.3927][18] | [0.3794][19] | [0.4146][20] | 39.71 ± 1.2 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
|
stefan-it
| 2023-10-17T23:02:29Z | 2 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-10T22:23:37Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not work with hmByT5 on the Model Hub and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.401][1] | [0.3992][2] | [0.4115][3] | [0.4007][4] | [0.4289][5] | 40.83 ± 1.12 |
| bs8-e10-lr0.00016 | [0.4105][6] | [0.3921][7] | [0.3855][8] | [0.4079][9] | [0.4054][10] | 40.03 ± 0.97 |
| bs4-e10-lr0.00015 | [0.3954][11] | [0.3828][12] | [0.413][13] | [0.404][14] | [0.4028][15] | 39.96 ± 1.01 |
| bs8-e10-lr0.00015 | [0.4053][16] | [0.3935][17] | [0.3927][18] | [0.3794][19] | [0.4146][20] | 39.71 ± 1.2 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
|
stefan-it
| 2023-10-17T23:02:27Z | 8 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-12T02:18:22Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not work with hmByT5 on the Model Hub and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.401][1] | [0.3992][2] | [0.4115][3] | [0.4007][4] | [0.4289][5] | 40.83 ± 1.12 |
| bs8-e10-lr0.00016 | [0.4105][6] | [0.3921][7] | [0.3855][8] | [0.4079][9] | [0.4054][10] | 40.03 ± 0.97 |
| bs4-e10-lr0.00015 | [0.3954][11] | [0.3828][12] | [0.413][13] | [0.404][14] | [0.4028][15] | 39.96 ± 1.01 |
| bs8-e10-lr0.00015 | [0.4053][16] | [0.3935][17] | [0.3927][18] | [0.3794][19] | [0.4146][20] | 39.71 ± 1.2 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
|
stefan-it
| 2023-10-17T23:02:25Z | 8 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-10T18:15:46Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not work with hmByT5 on the Model Hub and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.401][1] | [0.3992][2] | [0.4115][3] | [0.4007][4] | [0.4289][5] | 40.83 ± 1.12 |
| bs8-e10-lr0.00016 | [0.4105][6] | [0.3921][7] | [0.3855][8] | [0.4079][9] | [0.4054][10] | 40.03 ± 0.97 |
| bs4-e10-lr0.00015 | [0.3954][11] | [0.3828][12] | [0.413][13] | [0.404][14] | [0.4028][15] | 39.96 ± 1.01 |
| bs8-e10-lr0.00015 | [0.4053][16] | [0.3935][17] | [0.3927][18] | [0.3794][19] | [0.4146][20] | 39.71 ± 1.2 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
|
stefan-it
| 2023-10-17T23:02:23Z | 8 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-11T22:28:58Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not work with hmByT5 on the Model Hub and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.401][1] | [0.3992][2] | [0.4115][3] | [0.4007][4] | [0.4289][5] | 40.83 ± 1.12 |
| bs8-e10-lr0.00016 | [0.4105][6] | [0.3921][7] | [0.3855][8] | [0.4079][9] | [0.4054][10] | 40.03 ± 0.97 |
| bs4-e10-lr0.00015 | [0.3954][11] | [0.3828][12] | [0.413][13] | [0.404][14] | [0.4028][15] | 39.96 ± 1.01 |
| bs8-e10-lr0.00015 | [0.4053][16] | [0.3935][17] | [0.3927][18] | [0.3794][19] | [0.4146][20] | 39.71 ± 1.2 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
|
stefan-it
| 2023-10-17T23:02:22Z | 8 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-11T06:33:16Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not work with hmByT5 on the Model Hub and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.401][1] | [0.3992][2] | [0.4115][3] | [0.4007][4] | [0.4289][5] | 40.83 ± 1.12 |
| bs8-e10-lr0.00016 | [0.4105][6] | [0.3921][7] | [0.3855][8] | [0.4079][9] | [0.4054][10] | 40.03 ± 0.97 |
| bs4-e10-lr0.00015 | [0.3954][11] | [0.3828][12] | [0.413][13] | [0.404][14] | [0.4028][15] | 39.96 ± 1.01 |
| bs8-e10-lr0.00015 | [0.4053][16] | [0.3935][17] | [0.3927][18] | [0.3794][19] | [0.4146][20] | 39.71 ± 1.2 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
|
stefan-it
| 2023-10-17T23:02:21Z | 5 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-09T21:19:12Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmByT5 as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not work with hmByT5 on the Model Hub and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.401][1] | [0.3992][2] | [0.4115][3] | [0.4007][4] | [0.4289][5] | 40.83 ± 1.12 |
| bs8-e10-lr0.00016 | [0.4105][6] | [0.3921][7] | [0.3855][8] | [0.4079][9] | [0.4054][10] | 40.03 ± 0.97 |
| bs4-e10-lr0.00015 | [0.3954][11] | [0.3828][12] | [0.413][13] | [0.404][14] | [0.4028][15] | 39.96 ± 1.01 |
| bs8-e10-lr0.00015 | [0.4053][16] | [0.3935][17] | [0.3927][18] | [0.3794][19] | [0.4146][20] | 39.71 ± 1.2 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
|
stefan-it
| 2023-10-17T23:00:34Z | 3 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-08T23:16:23Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les
tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi ,
719 , 826 , 4496 .
---
# Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not work with hmByT5 on the Model Hub and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
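As with the other hmByT5 models, local inference works once the custom embedding class is available. A minimal sketch, assuming `byt5_embeddings.py` from the hmBench repository is importable and the label type is `ner`:
```python
# Minimal local-inference sketch (assumptions: byt5_embeddings.py from the
# hmBench repository is importable, and the label type is "ner").
from byt5_embeddings import ByT5Embedding  # noqa: F401 -- needed before loading
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5"
)

sentence = Sentence("Les tribraques formés par un seul mot sont rares chez les tragiques .")
tagger.predict(sentence)
print(sentence.get_spans("ner"))
```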
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8417][1] | [0.8404][2] | [0.8414][3] | [0.8344][4] | [0.8375][5] | 83.91 ± 0.28 |
| bs4-e10-lr0.00015 | [0.824][6] | [0.8352][7] | [0.8385][8] | [0.8204][9] | [0.8362][10] | 83.09 ± 0.72 |
| bs8-e10-lr0.00016 | [0.801][11] | [0.8155][12] | [0.8248][13] | [0.8292][14] | [0.8462][15] | 82.33 ± 1.5 |
| bs8-e10-lr0.00015 | [0.8098][16] | [0.8079][17] | [0.8248][18] | [0.8193][19] | [0.842][20] | 82.08 ± 1.23 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
|
stefan-it
| 2023-10-17T23:00:32Z | 1 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-08T20:55:33Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les
tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi ,
719 , 826 , 4496 .
---
# Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not work with hmByT5 on the Model Hub and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8417][1] | [0.8404][2] | [0.8414][3] | [0.8344][4] | [0.8375][5] | 83.91 ± 0.28 |
| bs4-e10-lr0.00015 | [0.824][6] | [0.8352][7] | [0.8385][8] | [0.8204][9] | [0.8362][10] | 83.09 ± 0.72 |
| bs8-e10-lr0.00016 | [0.801][11] | [0.8155][12] | [0.8248][13] | [0.8292][14] | [0.8462][15] | 82.33 ± 1.5 |
| bs8-e10-lr0.00015 | [0.8098][16] | [0.8079][17] | [0.8248][18] | [0.8193][19] | [0.842][20] | 82.08 ± 1.23 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
|
stefan-it
| 2023-10-17T23:00:29Z | 4 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-09T00:08:40Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les
tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi ,
719 , 826 , 4496 .
---
# Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not work with hmByT5 on the Model Hub and is currently disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8417][1] | [0.8404][2] | [0.8414][3] | [0.8344][4] | [0.8375][5] | 83.91 ± 0.28 |
| bs4-e10-lr0.00015 | [0.824][6] | [0.8352][7] | [0.8385][8] | [0.8204][9] | [0.8362][10] | 83.09 ± 0.72 |
| bs8-e10-lr0.00016 | [0.801][11] | [0.8155][12] | [0.8248][13] | [0.8292][14] | [0.8462][15] | 82.33 ± 1.5 |
| bs8-e10-lr0.00015 | [0.8098][16] | [0.8079][17] | [0.8248][18] | [0.8193][19] | [0.842][20] | 82.08 ± 1.23 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
|
stefan-it
| 2023-10-17T23:00:28Z | 1 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-08T21:47:58Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les
tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi ,
719 , 826 , 4496 .
---
# Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair currently requires a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not yet work with hmByT5 and is therefore disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set (the averaging used in the Avg. column is sketched after the result links):
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8417][1] | [0.8404][2] | [0.8414][3] | [0.8344][4] | [0.8375][5] | 83.91 ± 0.28 |
| bs4-e10-lr0.00015 | [0.824][6] | [0.8352][7] | [0.8385][8] | [0.8204][9] | [0.8362][10] | 83.09 ± 0.72 |
| bs8-e10-lr0.00016 | [0.801][11] | [0.8155][12] | [0.8248][13] | [0.8292][14] | [0.8462][15] | 82.33 ± 1.5 |
| bs8-e10-lr0.00015 | [0.8098][16] | [0.8079][17] | [0.8248][18] | [0.8193][19] | [0.842][20] | 82.08 ± 1.23 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
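For reference, the Avg. column above matches the mean and population standard deviation of the five runs, scaled to percent; the small sketch below reproduces the first row under that assumption.

```python
# Sketch: reproducing the "Avg." column (assumption: mean ± population standard
# deviation over the five seeds, reported in percent).
import statistics

runs = [0.8417, 0.8404, 0.8414, 0.8344, 0.8375]  # bs4-e10-lr0.00016, dev F1 per seed

mean = statistics.mean(runs) * 100
std = statistics.pstdev(runs) * 100

print(f"{mean:.2f} ± {std:.2f}")  # -> 83.91 ± 0.28
```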
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
|
stefan-it
| 2023-10-17T23:00:27Z | 5 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-08T19:26:25Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les
tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi ,
719 , 826 , 4496 .
---
# Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair currently requires a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not yet work with hmByT5 and is therefore disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set (a schematic fine-tuning sketch follows the result links):
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8417][1] | [0.8404][2] | [0.8414][3] | [0.8344][4] | [0.8375][5] | 83.91 ± 0.28 |
| bs4-e10-lr0.00015 | [0.824][6] | [0.8352][7] | [0.8385][8] | [0.8204][9] | [0.8362][10] | 83.09 ± 0.72 |
| bs8-e10-lr0.00016 | [0.801][11] | [0.8155][12] | [0.8248][13] | [0.8292][14] | [0.8462][15] | 82.33 ± 1.5 |
| bs8-e10-lr0.00015 | [0.8098][16] | [0.8079][17] | [0.8248][18] | [0.8193][19] | [0.842][20] | 82.08 ± 1.23 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
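As a rough illustration of the search space above, the following sketch enumerates the two batch sizes, two learning rates and five seeds and fine-tunes a tagger with Flair's `ModelTrainer`. It is a schematic reconstruction only: the corpus path, the `ByT5Embedding` constructor and the hidden size are assumptions, not the exact hmBench training script (see the repository linked above for that).

```python
# Schematic hyper-parameter search sketch (assumptions: a CoNLL-style copy of the
# AjMC French data lives in data/ajmc-fr, and ByT5Embedding is the custom class
# from the hmBench byt5_embeddings module; its constructor signature is assumed).
import itertools

import flair
from flair.datasets import ColumnCorpus
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

from byt5_embeddings import ByT5Embedding  # hypothetical import from hmBench

corpus = ColumnCorpus("data/ajmc-fr", column_format={0: "text", 1: "ner"})
label_dict = corpus.make_label_dictionary(label_type="ner")

for bs, lr, seed in itertools.product([8, 4], [0.00015, 0.00016], range(1, 6)):
    flair.set_seed(seed)

    embeddings = ByT5Embedding(
        "hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax"
    )
    tagger = SequenceTagger(
        hidden_size=256,          # assumed value
        embeddings=embeddings,
        tag_dictionary=label_dict,
        tag_type="ner",
        use_crf=False,            # matches crfFalse in the model names above
    )

    trainer = ModelTrainer(tagger, corpus)
    trainer.fine_tune(
        f"models/ajmc-fr-bs{bs}-e10-lr{lr}-seed{seed}",
        learning_rate=lr,
        mini_batch_size=bs,
        max_epochs=10,
    )
```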
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
|
stefan-it
| 2023-10-17T23:00:25Z | 4 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-08T22:41:43Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les
tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi ,
719 , 826 , 4496 .
---
# Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair currently requires a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not yet work with hmByT5 and is therefore disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8417][1] | [0.8404][2] | [0.8414][3] | [0.8344][4] | [0.8375][5] | 83.91 ± 0.28 |
| bs4-e10-lr0.00015 | [0.824][6] | [0.8352][7] | [0.8385][8] | [0.8204][9] | [0.8362][10] | 83.09 ± 0.72 |
| bs8-e10-lr0.00016 | [0.801][11] | [0.8155][12] | [0.8248][13] | [0.8292][14] | [0.8462][15] | 82.33 ± 1.5 |
| bs8-e10-lr0.00015 | [0.8098][16] | [0.8079][17] | [0.8248][18] | [0.8193][19] | [0.842][20] | 82.08 ± 1.23 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
|
stefan-it
| 2023-10-17T23:00:24Z | 2 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-08T20:20:22Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les
tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi ,
719 , 826 , 4496 .
---
# Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair currently requires a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not yet work with hmByT5 and is therefore disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8417][1] | [0.8404][2] | [0.8414][3] | [0.8344][4] | [0.8375][5] | 83.91 ± 0.28 |
| bs4-e10-lr0.00015 | [0.824][6] | [0.8352][7] | [0.8385][8] | [0.8204][9] | [0.8362][10] | 83.09 ± 0.72 |
| bs8-e10-lr0.00016 | [0.801][11] | [0.8155][12] | [0.8248][13] | [0.8292][14] | [0.8462][15] | 82.33 ± 1.5 |
| bs8-e10-lr0.00015 | [0.8098][16] | [0.8079][17] | [0.8248][18] | [0.8193][19] | [0.842][20] | 82.08 ± 1.23 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
|
stefan-it
| 2023-10-17T23:00:23Z | 2 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-08T19:09:19Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les
tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi ,
719 , 826 , 4496 .
---
# Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair currently requires a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not yet work with hmByT5 and is therefore disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8417][1] | [0.8404][2] | [0.8414][3] | [0.8344][4] | [0.8375][5] | 83.91 ± 0.28 |
| bs4-e10-lr0.00015 | [0.824][6] | [0.8352][7] | [0.8385][8] | [0.8204][9] | [0.8362][10] | 83.09 ± 0.72 |
| bs8-e10-lr0.00016 | [0.801][11] | [0.8155][12] | [0.8248][13] | [0.8292][14] | [0.8462][15] | 82.33 ± 1.5 |
| bs8-e10-lr0.00015 | [0.8098][16] | [0.8079][17] | [0.8248][18] | [0.8193][19] | [0.842][20] | 82.08 ± 1.23 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
|
stefan-it
| 2023-10-17T22:59:56Z | 6 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-06T13:07:33Z |
---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .
---
# Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair currently requires a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair; see the prediction sketch below.
Because of this, the inference widget on the Model Hub does not yet work with hmByT5 and is therefore disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
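Once the custom embedding class is importable, tagging the widget example from above should work roughly as sketched below; this assumes the checkpoint of this repository can be resolved by `SequenceTagger.load` and that the tagger's label type is `ner`.

```python
# Prediction sketch (assumptions: byt5_embeddings.py from hmBench is importable and
# the label type of the tagger is "ner").
import byt5_embeddings  # noqa: F401
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3"
)

sentence = Sentence("Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .")
tagger.predict(sentence)

# Expected entity types: pers, work, loc, object, date and scope.
for span in sentence.get_spans("ner"):
    print(span)
```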
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.842][1] | [0.8548][2] | [0.8407][3] | [0.8431][4] | [0.8443][5] | 84.5 ± 0.51 |
| bs4-e10-lr0.00015 | [0.8376][6] | [0.8343][7] | [0.8495][8] | [0.8394][9] | [0.837][10] | 83.96 ± 0.52 |
| bs8-e10-lr0.00015 | [0.8172][11] | [0.8242][12] | [0.8217][13] | [0.8367][14] | [0.8323][15] | 82.64 ± 0.71 |
| bs8-e10-lr0.00016 | [0.8178][16] | [0.8205][17] | [0.8126][18] | [0.8339][19] | [0.8264][20] | 82.22 ± 0.73 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
|
stefan-it
| 2023-10-17T22:59:54Z | 10 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-06T10:16:17Z |
---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .
---
# Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair currently requires a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not yet work with hmByT5 and is therefore disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.842][1] | [0.8548][2] | [0.8407][3] | [0.8431][4] | [0.8443][5] | 84.5 ± 0.51 |
| bs4-e10-lr0.00015 | [0.8376][6] | [0.8343][7] | [0.8495][8] | [0.8394][9] | [0.837][10] | 83.96 ± 0.52 |
| bs8-e10-lr0.00015 | [0.8172][11] | [0.8242][12] | [0.8217][13] | [0.8367][14] | [0.8323][15] | 82.64 ± 0.71 |
| bs8-e10-lr0.00016 | [0.8178][16] | [0.8205][17] | [0.8126][18] | [0.8339][19] | [0.8264][20] | 82.22 ± 0.73 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
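The logs can also be fetched programmatically; the sketch below assumes the file is stored as `training.log` at the root of this repository, as the link above suggests.

```python
# Sketch: downloading the training log of this run from the Hugging Face Hub
# (assumption: the file is named training.log at the repository root).
from huggingface_hub import hf_hub_download

log_path = hf_hub_download(
    repo_id="stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1",
    filename="training.log",
)

print(open(log_path).read()[:500])  # preview the first few hundred characters
```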
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
|
stefan-it
| 2023-10-17T22:59:52Z | 7 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-06T11:18:26Z |
---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .
---
# Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair currently requires a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not yet work with hmByT5 and is therefore disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.842][1] | [0.8548][2] | [0.8407][3] | [0.8431][4] | [0.8443][5] | 84.5 ± 0.51 |
| bs4-e10-lr0.00015 | [0.8376][6] | [0.8343][7] | [0.8495][8] | [0.8394][9] | [0.837][10] | 83.96 ± 0.52 |
| bs8-e10-lr0.00015 | [0.8172][11] | [0.8242][12] | [0.8217][13] | [0.8367][14] | [0.8323][15] | 82.64 ± 0.71 |
| bs8-e10-lr0.00016 | [0.8178][16] | [0.8205][17] | [0.8126][18] | [0.8339][19] | [0.8264][20] | 82.22 ± 0.73 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
|
stefan-it
| 2023-10-17T22:59:51Z | 9 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-06T16:39:54Z |
---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .
---
# Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair currently requires a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not yet work with hmByT5 and is therefore disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.842][1] | [0.8548][2] | [0.8407][3] | [0.8431][4] | [0.8443][5] | 84.5 ± 0.51 |
| bs4-e10-lr0.00015 | [0.8376][6] | [0.8343][7] | [0.8495][8] | [0.8394][9] | [0.837][10] | 83.96 ± 0.52 |
| bs8-e10-lr0.00015 | [0.8172][11] | [0.8242][12] | [0.8217][13] | [0.8367][14] | [0.8323][15] | 82.64 ± 0.71 |
| bs8-e10-lr0.00016 | [0.8178][16] | [0.8205][17] | [0.8126][18] | [0.8339][19] | [0.8264][20] | 82.22 ± 0.73 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
|
stefan-it
| 2023-10-17T22:59:51Z | 11 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-06T09:54:20Z |
---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .
---
# Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair currently requires a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Because of this, the inference widget on the Model Hub does not yet work with hmByT5 and is therefore disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.842][1] | [0.8548][2] | [0.8407][3] | [0.8431][4] | [0.8443][5] | 84.5 ± 0.51 |
| bs4-e10-lr0.00015 | [0.8376][6] | [0.8343][7] | [0.8495][8] | [0.8394][9] | [0.837][10] | 83.96 ± 0.52 |
| bs8-e10-lr0.00015 | [0.8172][11] | [0.8242][12] | [0.8217][13] | [0.8367][14] | [0.8323][15] | 82.64 ± 0.71 |
| bs8-e10-lr0.00016 | [0.8178][16] | [0.8205][17] | [0.8126][18] | [0.8339][19] | [0.8264][20] | 82.22 ± 0.73 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
|
stefan-it
| 2023-10-17T22:59:50Z | 10 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"en",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-06T15:14:25Z |
---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .
---
# Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
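Until then, the model can still be used locally. The snippet below is a minimal sketch (not an official usage example): it assumes that `byt5_embeddings.py` from the [hmBench](https://github.com/stefan-it/hmBench) repository has been downloaded, so that the custom `ByT5Embedding` class can be resolved when the tagger is loaded.

```python
# Minimal local-usage sketch. Assumption: byt5_embeddings.py from the hmBench repository
# is on the PYTHONPATH so that the custom ByT5Embedding class is importable when the
# serialized tagger is loaded.
from flair.data import Sentence
from flair.models import SequenceTagger

# One of the fine-tuned runs linked in the Results section below.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1"
)

# The example sentence from the (disabled) inference widget above.
sentence = Sentence("Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .")
tagger.predict(sentence)

for span in sentence.get_spans("ner"):
    print(span)
```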
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.842][1] | [0.8548][2] | [0.8407][3] | [0.8431][4] | [0.8443][5] | 84.5 ± 0.51 |
| bs4-e10-lr0.00015 | [0.8376][6] | [0.8343][7] | [0.8495][8] | [0.8394][9] | [0.837][10] | 83.96 ± 0.52 |
| bs8-e10-lr0.00015 | [0.8172][11] | [0.8242][12] | [0.8217][13] | [0.8367][14] | [0.8323][15] | 82.64 ± 0.71 |
| bs8-e10-lr0.00016 | [0.8178][16] | [0.8205][17] | [0.8126][18] | [0.8339][19] | [0.8264][20] | 82.22 ± 0.73 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
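The Avg. column above is the mean of the five per-seed scores, reported in percent, together with their standard deviation. A minimal sketch of that computation, using the bs4-e10-lr0.00016 row as an example (the reported values match a population standard deviation, i.e. `ddof=0`):

```python
# Minimal sketch reproducing the "Avg." column: mean ± standard deviation over the
# five seeds, in percent. The reported numbers match a population std (ddof=0).
import numpy as np

scores = np.array([0.842, 0.8548, 0.8407, 0.8431, 0.8443])  # bs4-e10-lr0.00016, runs 1-5

print(f"{100 * scores.mean():.2f} ± {100 * scores.std(ddof=0):.2f}")  # -> 84.50 ± 0.51
```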
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T22:59:49Z | 9 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-06T12:22:21Z |
---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .
---
# Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.842][1] | [0.8548][2] | [0.8407][3] | [0.8431][4] | [0.8443][5] | 84.5 ± 0.51 |
| bs4-e10-lr0.00015 | [0.8376][6] | [0.8343][7] | [0.8495][8] | [0.8394][9] | [0.837][10] | 83.96 ± 0.52 |
| bs8-e10-lr0.00015 | [0.8172][11] | [0.8242][12] | [0.8217][13] | [0.8367][14] | [0.8323][15] | 82.64 ± 0.71 |
| bs8-e10-lr0.00016 | [0.8178][16] | [0.8205][17] | [0.8126][18] | [0.8339][19] | [0.8264][20] | 82.22 ± 0.73 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T22:59:47Z | 8 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-06T16:19:20Z |
---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .
---
# Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.842][1] | [0.8548][2] | [0.8407][3] | [0.8431][4] | [0.8443][5] | 84.5 ± 0.51 |
| bs4-e10-lr0.00015 | [0.8376][6] | [0.8343][7] | [0.8495][8] | [0.8394][9] | [0.837][10] | 83.96 ± 0.52 |
| bs8-e10-lr0.00015 | [0.8172][11] | [0.8242][12] | [0.8217][13] | [0.8367][14] | [0.8323][15] | 82.64 ± 0.71 |
| bs8-e10-lr0.00016 | [0.8178][16] | [0.8205][17] | [0.8126][18] | [0.8339][19] | [0.8264][20] | 82.22 ± 0.73 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T22:59:46Z | 16 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-06T13:28:32Z |
---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .
---
# Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.842][1] | [0.8548][2] | [0.8407][3] | [0.8431][4] | [0.8443][5] | 84.5 ± 0.51 |
| bs4-e10-lr0.00015 | [0.8376][6] | [0.8343][7] | [0.8495][8] | [0.8394][9] | [0.837][10] | 83.96 ± 0.52 |
| bs8-e10-lr0.00015 | [0.8172][11] | [0.8242][12] | [0.8217][13] | [0.8367][14] | [0.8323][15] | 82.64 ± 0.71 |
| bs8-e10-lr0.00016 | [0.8178][16] | [0.8205][17] | [0.8126][18] | [0.8339][19] | [0.8264][20] | 82.22 ± 0.73 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T22:59:45Z | 5 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "en", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-06T12:01:36Z |
---
language: en
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: Cp . Eur . Phoen . 240 , 1 , αἷμα ddiov φλέγέι .
---
# Fine-tuned Flair Model on AjMC English NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC English](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.842][1] | [0.8548][2] | [0.8407][3] | [0.8431][4] | [0.8443][5] | 84.5 ± 0.51 |
| bs4-e10-lr0.00015 | [0.8376][6] | [0.8343][7] | [0.8495][8] | [0.8394][9] | [0.837][10] | 83.96 ± 0.52 |
| bs8-e10-lr0.00015 | [0.8172][11] | [0.8242][12] | [0.8217][13] | [0.8367][14] | [0.8323][15] | 82.64 ± 0.71 |
| bs8-e10-lr0.00016 | [0.8178][16] | [0.8205][17] | [0.8126][18] | [0.8339][19] | [0.8264][20] | 82.22 ± 0.73 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
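The repository names above encode the training configuration of each run (batch size, epochs, learning rate, seed suffix). A minimal parsing sketch, under the assumption that the naming scheme stays exactly as shown (the meaning of the `ws` flag is not documented in this card):

```python
# Minimal sketch: parse the configuration encoded in a run's repository name.
# Assumption: names follow the "...-bs<BS>-ws<BOOL>-e<EPOCHS>-lr<LR>-...-<SEED>" pattern above.
import re

def parse_run_name(repo_id: str) -> dict:
    match = re.search(
        r"bs(?P<bs>\d+)-ws(?P<ws>True|False)-e(?P<epochs>\d+)-lr(?P<lr>[\d.]+).*-(?P<seed>\d+)$",
        repo_id,
    )
    if match is None:
        raise ValueError(f"unexpected repository name: {repo_id}")
    return {
        "batch_size": int(match["bs"]),
        "ws": match["ws"] == "True",  # meaning of this flag is not documented in the card
        "epochs": int(match["epochs"]),
        "learning_rate": float(match["lr"]),
        "seed": int(match["seed"]),
    }

print(parse_run_name(
    "stefan-it/hmbench-ajmc-en-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3"
))
```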
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T22:57:33Z | 6 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-07T02:31:50Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in
den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging
, das Stück Σαλαμίνιαι folgte .
---
# Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8892][1] | [0.8913][2] | [0.8867][3] | [0.8843][4] | [0.8828][5] | 88.69 ± 0.31 |
| bs4-e10-lr0.00015 | [0.8786][6] | [0.8793][7] | [0.883][8] | [0.8807][9] | [0.8722][10] | 87.88 ± 0.36 |
| bs8-e10-lr0.00016 | [0.8602][11] | [0.8684][12] | [0.8643][13] | [0.8643][14] | [0.8623][15] | 86.39 ± 0.27 |
| bs8-e10-lr0.00015 | [0.8551][16] | [0.8707][17] | [0.8599][18] | [0.8609][19] | [0.8612][20] | 86.16 ± 0.51 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
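The 20 runs linked above correspond to the full grid of batch sizes and learning rates, each trained with five different seeds. A minimal sketch of how that grid expands (the concrete seed values are not documented here; `range(1, 6)` merely mirrors the `-1` … `-5` repository suffixes):

```python
# Minimal sketch of the hyper-parameter grid behind the 20 linked runs:
# 2 batch sizes x 2 learning rates x 5 seeds = 20 configurations, each trained for 10 epochs.
from itertools import product

batch_sizes = [8, 4]
learning_rates = [0.00015, 0.00016]
seeds = range(1, 6)  # placeholder ids mirroring the "-1" ... "-5" repository suffixes

configs = [
    {"batch_size": bs, "learning_rate": lr, "epochs": 10, "seed": seed}
    for bs, lr, seed in product(batch_sizes, learning_rates, seeds)
]
assert len(configs) == 20
```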
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T22:57:32Z | 2 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-07T01:14:58Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in
den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging
, das Stück Σαλαμίνιαι folgte .
---
# Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8892][1] | [0.8913][2] | [0.8867][3] | [0.8843][4] | [0.8828][5] | 88.69 ± 0.31 |
| bs4-e10-lr0.00015 | [0.8786][6] | [0.8793][7] | [0.883][8] | [0.8807][9] | [0.8722][10] | 87.88 ± 0.36 |
| bs8-e10-lr0.00016 | [0.8602][11] | [0.8684][12] | [0.8643][13] | [0.8643][14] | [0.8623][15] | 86.39 ± 0.27 |
| bs8-e10-lr0.00015 | [0.8551][16] | [0.8707][17] | [0.8599][18] | [0.8609][19] | [0.8612][20] | 86.16 ± 0.51 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T22:57:27Z | 6 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-07T02:12:03Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in
den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging
, das Stück Σαλαμίνιαι folgte .
---
# Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8892][1] | [0.8913][2] | [0.8867][3] | [0.8843][4] | [0.8828][5] | 88.69 ± 0.31 |
| bs4-e10-lr0.00015 | [0.8786][6] | [0.8793][7] | [0.883][8] | [0.8807][9] | [0.8722][10] | 87.88 ± 0.36 |
| bs8-e10-lr0.00016 | [0.8602][11] | [0.8684][12] | [0.8643][13] | [0.8643][14] | [0.8623][15] | 86.39 ± 0.27 |
| bs8-e10-lr0.00015 | [0.8551][16] | [0.8707][17] | [0.8599][18] | [0.8609][19] | [0.8612][20] | 86.16 ± 0.51 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T22:57:25Z | 3 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-06T23:37:31Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in
den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging
, das Stück Σαλαμίνιαι folgte .
---
# Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8892][1] | [0.8913][2] | [0.8867][3] | [0.8843][4] | [0.8828][5] | 88.69 ± 0.31 |
| bs4-e10-lr0.00015 | [0.8786][6] | [0.8793][7] | [0.883][8] | [0.8807][9] | [0.8722][10] | 87.88 ± 0.36 |
| bs8-e10-lr0.00016 | [0.8602][11] | [0.8684][12] | [0.8643][13] | [0.8643][14] | [0.8623][15] | 86.39 ± 0.27 |
| bs8-e10-lr0.00015 | [0.8551][16] | [0.8707][17] | [0.8599][18] | [0.8609][19] | [0.8612][20] | 86.16 ± 0.51 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T22:57:24Z | 6 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-06T22:19:25Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in
den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging
, das Stück Σαλαμίνιαι folgte .
---
# Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8892][1] | [0.8913][2] | [0.8867][3] | [0.8843][4] | [0.8828][5] | 88.69 ± 0.31 |
| bs4-e10-lr0.00015 | [0.8786][6] | [0.8793][7] | [0.883][8] | [0.8807][9] | [0.8722][10] | 87.88 ± 0.36 |
| bs8-e10-lr0.00016 | [0.8602][11] | [0.8684][12] | [0.8643][13] | [0.8643][14] | [0.8623][15] | 86.39 ± 0.27 |
| bs8-e10-lr0.00015 | [0.8551][16] | [0.8707][17] | [0.8599][18] | [0.8609][19] | [0.8612][20] | 86.16 ± 0.51 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T22:57:23Z | 9 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "de", "base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax", "license:mit", "region:us"] | token-classification | 2023-10-06T21:01:13Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in
den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging
, das Stück Σαλαμίνιαι folgte .
---
# Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8892][1] | [0.8913][2] | [0.8867][3] | [0.8843][4] | [0.8828][5] | 88.69 ± 0.31 |
| bs4-e10-lr0.00015 | [0.8786][6] | [0.8793][7] | [0.883][8] | [0.8807][9] | [0.8722][10] | 87.88 ± 0.36 |
| bs8-e10-lr0.00016 | [0.8602][11] | [0.8684][12] | [0.8643][13] | [0.8643][14] | [0.8623][15] | 86.39 ± 0.27 |
| bs8-e10-lr0.00015 | [0.8551][16] | [0.8707][17] | [0.8599][18] | [0.8609][19] | [0.8612][20] | 86.16 ± 0.51 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
|
stefan-it
| 2023-10-17T22:57:20Z | 7 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-07T00:35:11Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in
den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging
, das Stück Σαλαμίνιαι folgte .
---
# Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 models on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
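The model can still be used locally. The following is a minimal usage sketch (it is not part of the original hmBench scripts); it assumes that `byt5_embeddings.py` from the hmBench repository linked above is available on the local Python path, so that the `ByT5Embedding` class can be imported before loading the tagger:
```python
# Minimal usage sketch -- assumes byt5_embeddings.py from the hmBench repository
# is on the local Python path so that the custom ByT5Embedding class can be resolved.
from byt5_embeddings import ByT5Embedding  # noqa: F401 (registers the custom embedding class)

from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3"
)

sentence = Sentence("Dramatisch war der Stoff vor Sophokles von Äschylos behandelt worden .")
tagger.predict(sentence)

print(sentence)  # tagged sentence with the predicted entity spans
```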
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8892][1] | [0.8913][2] | [0.8867][3] | [0.8843][4] | [0.8828][5] | 88.69 ± 0.31 |
| bs4-e10-lr0.00015 | [0.8786][6] | [0.8793][7] | [0.883][8] | [0.8807][9] | [0.8722][10] | 87.88 ± 0.36 |
| bs8-e10-lr0.00016 | [0.8602][11] | [0.8684][12] | [0.8643][13] | [0.8643][14] | [0.8623][15] | 86.39 ± 0.27 |
| bs8-e10-lr0.00015 | [0.8551][16] | [0.8707][17] | [0.8599][18] | [0.8609][19] | [0.8612][20] | 86.16 ± 0.51 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
|
stefan-it
| 2023-10-17T22:57:12Z | 4 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-06T22:58:32Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in
den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging
, das Stück Σαλαμίνιαι folgte .
---
# Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 models on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
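The model can still be used locally. The following is a minimal usage sketch (it is not part of the original hmBench scripts); it assumes that `byt5_embeddings.py` from the hmBench repository linked above is available on the local Python path, so that the `ByT5Embedding` class can be imported before loading the tagger:
```python
# Minimal usage sketch -- assumes byt5_embeddings.py from the hmBench repository
# is on the local Python path so that the custom ByT5Embedding class can be resolved.
from byt5_embeddings import ByT5Embedding  # noqa: F401 (registers the custom embedding class)

from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2"
)

sentence = Sentence("Dramatisch war der Stoff vor Sophokles von Äschylos behandelt worden .")
tagger.predict(sentence)

print(sentence)  # tagged sentence with the predicted entity spans
```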
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8892][1] | [0.8913][2] | [0.8867][3] | [0.8843][4] | [0.8828][5] | 88.69 ± 0.31 |
| bs4-e10-lr0.00015 | [0.8786][6] | [0.8793][7] | [0.883][8] | [0.8807][9] | [0.8722][10] | 87.88 ± 0.36 |
| bs8-e10-lr0.00016 | [0.8602][11] | [0.8684][12] | [0.8643][13] | [0.8643][14] | [0.8623][15] | 86.39 ± 0.27 |
| bs8-e10-lr0.00015 | [0.8551][16] | [0.8707][17] | [0.8599][18] | [0.8609][19] | [0.8612][20] | 86.16 ± 0.51 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
|
stefan-it
| 2023-10-17T22:57:11Z | 2 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"base_model:finetune:hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-06T21:40:21Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
- text: — Dramatiſch war der Stoff vor Sophokles von Äſchylos behandelt worden in
den Θροῇσσαι , denen vielleicht in der Trilogie das Stüc>"OnJw» κοίσις vorherging
, das Stück Σαλαμίνιαι folgte .
---
# Fine-tuned Flair Model on AjMC German NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC German](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmByT5 as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
# ⚠️ Inference Widget ⚠️
Fine-tuning ByT5 models in Flair is currently done by implementing a custom [`ByT5Embedding`][0] class.
This class needs to be present when running the model with Flair.
Thus, the inference widget does not currently work with hmByT5 models on the Model Hub and is disabled.
This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.
[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
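The model can still be used locally. The following is a minimal usage sketch (it is not part of the original hmBench scripts); it assumes that `byt5_embeddings.py` from the hmBench repository linked above is available on the local Python path, so that the `ByT5Embedding` class can be imported before loading the tagger:
```python
# Minimal usage sketch -- assumes byt5_embeddings.py from the hmBench repository
# is on the local Python path so that the custom ByT5Embedding class can be resolved.
from byt5_embeddings import ByT5Embedding  # noqa: F401 (registers the custom embedding class)

from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1"
)

sentence = Sentence("Dramatisch war der Stoff vor Sophokles von Äschylos behandelt worden .")
tagger.predict(sentence)

print(sentence)  # tagged sentence with the predicted entity spans
```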
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[0.00015, 0.00016]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-------------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr0.00016 | [0.8892][1] | [0.8913][2] | [0.8867][3] | [0.8843][4] | [0.8828][5] | 88.69 ± 0.31 |
| bs4-e10-lr0.00015 | [0.8786][6] | [0.8793][7] | [0.883][8] | [0.8807][9] | [0.8722][10] | 87.88 ± 0.36 |
| bs8-e10-lr0.00016 | [0.8602][11] | [0.8684][12] | [0.8643][13] | [0.8643][14] | [0.8623][15] | 86.39 ± 0.27 |
| bs8-e10-lr0.00015 | [0.8551][16] | [0.8707][17] | [0.8599][18] | [0.8609][19] | [0.8612][20] | 86.16 ± 0.51 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
BornSaint/bark_pt-br_voice_for_read
|
BornSaint
| 2023-10-17T22:52:47Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2023-10-17T22:50:39Z |
---
license: cc-by-nc-nd-4.0
---
|
stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
|
stefan-it
| 2023-10-17T22:29:48Z | 4 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T18:22:35Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmTEAMS as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.7969][1] | [0.7966][2] | [0.8066][3] | [0.8021][4] | [0.8053][5] | 80.15 ± 0.41 |
| bs8-e10-lr5e-05 | [0.8025][6] | [0.794][7] | [0.8013][8] | [0.7942][9] | [0.794][10] | 79.72 ± 0.39 |
| bs4-e10-lr3e-05 | [0.7931][11] | [0.7947][12] | [0.802][13] | [0.8034][14] | [0.7878][15] | 79.62 ± 0.58 |
| bs4-e10-lr5e-05 | [0.7904][16] | [0.7948][17] | [0.7858][18] | [0.7958][19] | [0.7883][20] | 79.1 ± 0.38 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
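As a quick start, the tagger can be loaded with the standard Flair API. The snippet below is a minimal usage sketch (it is not part of the original hmBench fine-tuning scripts):
```python
# Minimal usage sketch: load the fine-tuned tagger from the Hugging Face Model Hub
# and run NER prediction on a historic German sentence.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"
)

sentence = Sentence("Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge verließ .")
tagger.predict(sentence)

print(sentence)  # tagged sentence with the predicted entity spans
```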
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
|
stefan-it
| 2023-10-17T22:29:48Z | 1 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T19:12:04Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmTEAMS as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.7969][1] | [0.7966][2] | [0.8066][3] | [0.8021][4] | [0.8053][5] | 80.15 ± 0.41 |
| bs8-e10-lr5e-05 | [0.8025][6] | [0.794][7] | [0.8013][8] | [0.7942][9] | [0.794][10] | 79.72 ± 0.39 |
| bs4-e10-lr3e-05 | [0.7931][11] | [0.7947][12] | [0.802][13] | [0.8034][14] | [0.7878][15] | 79.62 ± 0.58 |
| bs4-e10-lr5e-05 | [0.7904][16] | [0.7948][17] | [0.7858][18] | [0.7958][19] | [0.7883][20] | 79.1 ± 0.38 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
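As a quick start, the tagger can be loaded with the standard Flair API. The snippet below is a minimal usage sketch (it is not part of the original hmBench fine-tuning scripts):
```python
# Minimal usage sketch: load the fine-tuned tagger from the Hugging Face Model Hub
# and run NER prediction on a historic German sentence.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5"
)

sentence = Sentence("Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge verließ .")
tagger.predict(sentence)

print(sentence)  # tagged sentence with the predicted entity spans
```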
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
|
stefan-it
| 2023-10-17T22:29:47Z | 0 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T16:46:51Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmTEAMS as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.7969][1] | [0.7966][2] | [0.8066][3] | [0.8021][4] | [0.8053][5] | 80.15 ± 0.41 |
| bs8-e10-lr5e-05 | [0.8025][6] | [0.794][7] | [0.8013][8] | [0.7942][9] | [0.794][10] | 79.72 ± 0.39 |
| bs4-e10-lr3e-05 | [0.7931][11] | [0.7947][12] | [0.802][13] | [0.8034][14] | [0.7878][15] | 79.62 ± 0.58 |
| bs4-e10-lr5e-05 | [0.7904][16] | [0.7948][17] | [0.7858][18] | [0.7958][19] | [0.7883][20] | 79.1 ± 0.38 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
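As a quick start, the tagger can be loaded with the standard Flair API. The snippet below is a minimal usage sketch (it is not part of the original hmBench fine-tuning scripts):
```python
# Minimal usage sketch: load the fine-tuned tagger from the Hugging Face Model Hub
# and run NER prediction on a historic German sentence.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2"
)

sentence = Sentence("Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge verließ .")
tagger.predict(sentence)

print(sentence)  # tagged sentence with the predicted entity spans
```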
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
|
stefan-it
| 2023-10-17T22:29:46Z | 2 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T18:08:20Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmTEAMS as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.7969][1] | [0.7966][2] | [0.8066][3] | [0.8021][4] | [0.8053][5] | 80.15 ± 0.41 |
| bs8-e10-lr5e-05 | [0.8025][6] | [0.794][7] | [0.8013][8] | [0.7942][9] | [0.794][10] | 79.72 ± 0.39 |
| bs4-e10-lr3e-05 | [0.7931][11] | [0.7947][12] | [0.802][13] | [0.8034][14] | [0.7878][15] | 79.62 ± 0.58 |
| bs4-e10-lr5e-05 | [0.7904][16] | [0.7948][17] | [0.7858][18] | [0.7958][19] | [0.7883][20] | 79.1 ± 0.38 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
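As a quick start, the tagger can be loaded with the standard Flair API. The snippet below is a minimal usage sketch (it is not part of the original hmBench fine-tuning scripts):
```python
# Minimal usage sketch: load the fine-tuned tagger from the Hugging Face Model Hub
# and run NER prediction on a historic German sentence.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4"
)

sentence = Sentence("Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge verließ .")
tagger.predict(sentence)

print(sentence)  # tagged sentence with the predicted entity spans
```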
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
|
stefan-it
| 2023-10-17T22:29:45Z | 2 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T17:20:18Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmTEAMS as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.7969][1] | [0.7966][2] | [0.8066][3] | [0.8021][4] | [0.8053][5] | 80.15 ± 0.41 |
| bs8-e10-lr5e-05 | [0.8025][6] | [0.794][7] | [0.8013][8] | [0.7942][9] | [0.794][10] | 79.72 ± 0.39 |
| bs4-e10-lr3e-05 | [0.7931][11] | [0.7947][12] | [0.802][13] | [0.8034][14] | [0.7878][15] | 79.62 ± 0.58 |
| bs4-e10-lr5e-05 | [0.7904][16] | [0.7948][17] | [0.7858][18] | [0.7958][19] | [0.7883][20] | 79.1 ± 0.38 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
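As a quick start, the tagger can be loaded with the standard Flair API. The snippet below is a minimal usage sketch (it is not part of the original hmBench fine-tuning scripts):
```python
# Minimal usage sketch: load the fine-tuned tagger from the Hugging Face Model Hub
# and run NER prediction on a historic German sentence.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3"
)

sentence = Sentence("Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge verließ .")
tagger.predict(sentence)

print(sentence)  # tagged sentence with the predicted entity spans
```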
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
|
stefan-it
| 2023-10-17T22:29:44Z | 3 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T17:53:54Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmTEAMS as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.7969][1] | [0.7966][2] | [0.8066][3] | [0.8021][4] | [0.8053][5] | 80.15 ± 0.41 |
| bs8-e10-lr5e-05 | [0.8025][6] | [0.794][7] | [0.8013][8] | [0.7942][9] | [0.794][10] | 79.72 ± 0.39 |
| bs4-e10-lr3e-05 | [0.7931][11] | [0.7947][12] | [0.802][13] | [0.8034][14] | [0.7878][15] | 79.62 ± 0.58 |
| bs4-e10-lr5e-05 | [0.7904][16] | [0.7948][17] | [0.7858][18] | [0.7958][19] | [0.7883][20] | 79.1 ± 0.38 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
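As a quick start, the tagger can be loaded with the standard Flair API. The snippet below is a minimal usage sketch (it is not part of the original hmBench fine-tuning scripts):
```python
# Minimal usage sketch: load the fine-tuned tagger from the Hugging Face Model Hub
# and run NER prediction on a historic German sentence.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"
)

sentence = Sentence("Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge verließ .")
tagger.predict(sentence)

print(sentence)  # tagged sentence with the predicted entity spans
```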
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
|
stefan-it
| 2023-10-17T22:29:43Z | 3 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T17:06:00Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmTEAMS as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.7969][1] | [0.7966][2] | [0.8066][3] | [0.8021][4] | [0.8053][5] | 80.15 ± 0.41 |
| bs8-e10-lr5e-05 | [0.8025][6] | [0.794][7] | [0.8013][8] | [0.7942][9] | [0.794][10] | 79.72 ± 0.39 |
| bs4-e10-lr3e-05 | [0.7931][11] | [0.7947][12] | [0.802][13] | [0.8034][14] | [0.7878][15] | 79.62 ± 0.58 |
| bs4-e10-lr5e-05 | [0.7904][16] | [0.7948][17] | [0.7858][18] | [0.7958][19] | [0.7883][20] | 79.1 ± 0.38 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
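As a quick start, the tagger can be loaded with the standard Flair API. The snippet below is a minimal usage sketch (it is not part of the original hmBench fine-tuning scripts):
```python
# Minimal usage sketch: load the fine-tuned tagger from the Hugging Face Model Hub
# and run NER prediction on a historic German sentence.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load(
    "stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3"
)

sentence = Sentence("Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge verließ .")
tagger.predict(sentence)

print(sentence)  # tagged sentence with the predicted entity spans
```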
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
|
stefan-it
| 2023-10-17T22:29:43Z | 2 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T16:17:58Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmTEAMS as backbone LM.
The HIPE-2020 dataset comprises newspapers from the mid-19th to the mid-20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
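The tagger can be loaded and used directly with the Flair library. Below is a minimal inference sketch, assuming a standard Flair installation and that the predicted spans use the `ner` label type (an assumption, not stated in this card); only the model identifier is taken from this repository:
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load("stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2")
# Tag a short historical German example (taken from the widget text above)
sentence = Sentence("Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ .")
tagger.predict(sentence)
# Print the predicted entity spans ("ner" is the assumed label type)
for entity in sentence.get_spans("ner"):
    print(entity)
```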
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.7969][1] | [0.7966][2] | [0.8066][3] | [0.8021][4] | [0.8053][5] | 80.15 ± 0.41 |
| bs8-e10-lr5e-05 | [0.8025][6] | [0.794][7] | [0.8013][8] | [0.7942][9] | [0.794][10] | 79.72 ± 0.39 |
| bs4-e10-lr3e-05 | [0.7931][11] | [0.7947][12] | [0.802][13] | [0.8034][14] | [0.7878][15] | 79.62 ± 0.58 |
| bs4-e10-lr5e-05 | [0.7904][16] | [0.7948][17] | [0.7858][18] | [0.7958][19] | [0.7883][20] | 79.1 ± 0.38 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
|
stefan-it
| 2023-10-17T22:29:42Z | 3 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T18:32:19Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmTEAMS as backbone LM.
The HIPE-2020 dataset is comprised of newspapers from the mid 19th to the mid 20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
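The tagger can be loaded and used directly with the Flair library. Below is a minimal inference sketch, assuming a standard Flair installation and that the predicted spans use the `ner` label type (an assumption, not stated in this card); only the model identifier is taken from this repository:
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load("stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5")
# Tag a short historical German example (taken from the widget text above)
sentence = Sentence("Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ .")
tagger.predict(sentence)
# Print the predicted entity spans ("ner" is the assumed label type)
for entity in sentence.get_spans("ner"):
    print(entity)
```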
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.7969][1] | [0.7966][2] | [0.8066][3] | [0.8021][4] | [0.8053][5] | 80.15 ± 0.41 |
| bs8-e10-lr5e-05 | [0.8025][6] | [0.794][7] | [0.8013][8] | [0.7942][9] | [0.794][10] | 79.72 ± 0.39 |
| bs4-e10-lr3e-05 | [0.7931][11] | [0.7947][12] | [0.802][13] | [0.8034][14] | [0.7878][15] | 79.62 ± 0.58 |
| bs4-e10-lr5e-05 | [0.7904][16] | [0.7948][17] | [0.7858][18] | [0.7958][19] | [0.7883][20] | 79.1 ± 0.38 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
|
stefan-it
| 2023-10-17T22:29:42Z | 5 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T17:44:24Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmTEAMS as backbone LM.
The HIPE-2020 dataset is comprised of newspapers from the mid 19th to the mid 20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
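The tagger can be loaded and used directly with the Flair library. Below is a minimal inference sketch, assuming a standard Flair installation and that the predicted spans use the `ner` label type (an assumption, not stated in this card); only the model identifier is taken from this repository:
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load("stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4")
# Tag a short historical German example (taken from the widget text above)
sentence = Sentence("Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ .")
tagger.predict(sentence)
# Print the predicted entity spans ("ner" is the assumed label type)
for entity in sentence.get_spans("ner"):
    print(entity)
```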
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.7969][1] | [0.7966][2] | [0.8066][3] | [0.8021][4] | [0.8053][5] | 80.15 ± 0.41 |
| bs8-e10-lr5e-05 | [0.8025][6] | [0.794][7] | [0.8013][8] | [0.7942][9] | [0.794][10] | 79.72 ± 0.39 |
| bs4-e10-lr3e-05 | [0.7931][11] | [0.7947][12] | [0.802][13] | [0.8034][14] | [0.7878][15] | 79.62 ± 0.58 |
| bs4-e10-lr5e-05 | [0.7904][16] | [0.7948][17] | [0.7858][18] | [0.7958][19] | [0.7883][20] | 79.1 ± 0.38 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
hmteams/flair-hipe-2022-hipe2020-de
|
hmteams
| 2023-10-17T22:29:41Z | 5 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T16:56:25Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmTEAMS as backbone LM.
The HIPE-2020 dataset is comprised of newspapers from the mid 19th to the mid 20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
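The tagger can be loaded and used directly with the Flair library. Below is a minimal inference sketch, assuming a standard Flair installation and that the predicted spans use the `ner` label type (an assumption, not stated in this card); only the model identifier is taken from this repository:
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load("hmteams/flair-hipe-2022-hipe2020-de")
# Tag a short historical German example (taken from the widget text above)
sentence = Sentence("Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ .")
tagger.predict(sentence)
# Print the predicted entity spans ("ner" is the assumed label type)
for entity in sentence.get_spans("ner"):
    print(entity)
```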
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.7969][1] | [0.7966][2] | [0.8066][3] | [0.8021][4] | [0.8053][5] | 80.15 ± 0.41 |
| bs8-e10-lr5e-05 | [0.8025][6] | [0.794][7] | [0.8013][8] | [0.7942][9] | [0.794][10] | 79.72 ± 0.39 |
| bs4-e10-lr3e-05 | [0.7931][11] | [0.7947][12] | [0.802][13] | [0.8034][14] | [0.7878][15] | 79.62 ± 0.58 |
| bs4-e10-lr5e-05 | [0.7904][16] | [0.7948][17] | [0.7858][18] | [0.7958][19] | [0.7883][20] | 79.1 ± 0.38 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
|
stefan-it
| 2023-10-17T22:29:40Z | 5 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T15:20:01Z |
---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern
Truppen verließ , um in einer Central - Lage bey Sligo die Operationen der Armee
persönlich zu dirigiren . Der Feind dürfte bald in die Enge kommen , da Gen .
Lacke mit 6000 Mann ihm entgegen marschirt .
---
# Fine-tuned Flair Model on German HIPE-2020 Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German HIPE-2020](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-hipe2020.md)
NER Dataset using hmTEAMS as backbone LM.
The HIPE-2020 dataset is comprised of newspapers from the mid 19th to the mid 20th century. More information can be found
[here](https://dl.acm.org/doi/abs/10.1007/978-3-030-58219-7_21).
The following NEs were annotated: `loc`, `org`, `pers`, `prod`, `time` and `comp`.
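The tagger can be loaded and used directly with the Flair library. Below is a minimal inference sketch, assuming a standard Flair installation and that the predicted spans use the `ner` label type (an assumption, not stated in this card); only the model identifier is taken from this repository:
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load("stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1")
# Tag a short historical German example (taken from the widget text above)
sentence = Sentence("Es war am 25sten , als Lord Corn wollis Dublin mit seinem Gefolge und mehrern Truppen verließ .")
tagger.predict(sentence)
# Print the predicted entity spans ("ner" is the assumed label type)
for entity in sentence.get_spans("ner"):
    print(entity)
```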
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.7969][1] | [0.7966][2] | [0.8066][3] | [0.8021][4] | [0.8053][5] | 80.15 ± 0.41 |
| bs8-e10-lr5e-05 | [0.8025][6] | [0.794][7] | [0.8013][8] | [0.7942][9] | [0.794][10] | 79.72 ± 0.39 |
| bs4-e10-lr3e-05 | [0.7931][11] | [0.7947][12] | [0.802][13] | [0.8034][14] | [0.7878][15] | 79.62 ± 0.58 |
| bs4-e10-lr5e-05 | [0.7904][16] | [0.7948][17] | [0.7858][18] | [0.7958][19] | [0.7883][20] | 79.1 ± 0.38 |
[1]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-hipe2020-de-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
|
stefan-it
| 2023-10-17T22:29:37Z | 3 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T20:38:47Z |
---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
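The tagger can be loaded and used directly with the Flair library. Below is a minimal inference sketch, assuming a standard Flair installation and that the predicted spans use the `ner` label type (an assumption, not stated in this card); only the model identifier is taken from this repository:
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load("stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5")
# Tag a short historical Swedish example (taken from the widget text above)
sentence = Sentence("Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen .")
tagger.predict(sentence)
# Print the predicted entity spans ("ner" is the assumed label type)
for entity in sentence.get_spans("ner"):
    print(entity)
```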
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr5e-05 | [0.8224][1] | [0.8222][2] | [0.8352][3] | [0.8466][4] | [0.8216][5] | 82.96 ± 0.99 |
| bs4-e10-lr3e-05 | [0.819][6] | [0.8118][7] | [0.825][8] | [0.8367][9] | [0.8364][10] | 82.58 ± 0.97 |
| bs8-e10-lr5e-05 | [0.8355][11] | [0.8199][12] | [0.8253][13] | [0.8349][14] | [0.8125][15] | 82.56 ± 0.88 |
| bs8-e10-lr3e-05 | [0.8194][16] | [0.8235][17] | [0.8183][18] | [0.8305][19] | [0.8163][20] | 82.16 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
hmteams/flair-hipe-2022-newseye-sv
|
hmteams
| 2023-10-17T22:29:36Z | 3 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T20:25:29Z |
---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
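The tagger can be loaded and used directly with the Flair library. Below is a minimal inference sketch, assuming a standard Flair installation and that the predicted spans use the `ner` label type (an assumption, not stated in this card); only the model identifier is taken from this repository:
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load("hmteams/flair-hipe-2022-newseye-sv")
# Tag a short historical Swedish example (taken from the widget text above)
sentence = Sentence("Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen .")
tagger.predict(sentence)
# Print the predicted entity spans ("ner" is the assumed label type)
for entity in sentence.get_spans("ner"):
    print(entity)
```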
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr5e-05 | [0.8224][1] | [0.8222][2] | [0.8352][3] | [0.8466][4] | [0.8216][5] | 82.96 ± 0.99 |
| bs4-e10-lr3e-05 | [0.819][6] | [0.8118][7] | [0.825][8] | [0.8367][9] | [0.8364][10] | 82.58 ± 0.97 |
| bs8-e10-lr5e-05 | [0.8355][11] | [0.8199][12] | [0.8253][13] | [0.8349][14] | [0.8125][15] | 82.56 ± 0.88 |
| bs8-e10-lr3e-05 | [0.8194][16] | [0.8235][17] | [0.8183][18] | [0.8305][19] | [0.8163][20] | 82.16 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
|
stefan-it
| 2023-10-17T22:29:36Z | 4 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T20:11:59Z |
---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
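The tagger can be loaded and used directly with the Flair library. Below is a minimal inference sketch, assuming a standard Flair installation and that the predicted spans use the `ner` label type (an assumption, not stated in this card); only the model identifier is taken from this repository:
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load("stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3")
# Tag a short historical Swedish example (taken from the widget text above)
sentence = Sentence("Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen .")
tagger.predict(sentence)
# Print the predicted entity spans ("ner" is the assumed label type)
for entity in sentence.get_spans("ner"):
    print(entity)
```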
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr5e-05 | [0.8224][1] | [0.8222][2] | [0.8352][3] | [0.8466][4] | [0.8216][5] | 82.96 ± 0.99 |
| bs4-e10-lr3e-05 | [0.819][6] | [0.8118][7] | [0.825][8] | [0.8367][9] | [0.8364][10] | 82.58 ± 0.97 |
| bs8-e10-lr5e-05 | [0.8355][11] | [0.8199][12] | [0.8253][13] | [0.8349][14] | [0.8125][15] | 82.56 ± 0.88 |
| bs8-e10-lr3e-05 | [0.8194][16] | [0.8235][17] | [0.8183][18] | [0.8305][19] | [0.8163][20] | 82.16 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
|
stefan-it
| 2023-10-17T22:29:35Z | 3 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T19:45:04Z |
---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
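The tagger can be loaded and used directly with the Flair library. Below is a minimal inference sketch, assuming a standard Flair installation and that the predicted spans use the `ner` label type (an assumption, not stated in this card); only the model identifier is taken from this repository:
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load("stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1")
# Tag a short historical Swedish example (taken from the widget text above)
sentence = Sentence("Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen .")
tagger.predict(sentence)
# Print the predicted entity spans ("ner" is the assumed label type)
for entity in sentence.get_spans("ner"):
    print(entity)
```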
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr5e-05 | [0.8224][1] | [0.8222][2] | [0.8352][3] | [0.8466][4] | [0.8216][5] | 82.96 ± 0.99 |
| bs4-e10-lr3e-05 | [0.819][6] | [0.8118][7] | [0.825][8] | [0.8367][9] | [0.8364][10] | 82.58 ± 0.97 |
| bs8-e10-lr5e-05 | [0.8355][11] | [0.8199][12] | [0.8253][13] | [0.8349][14] | [0.8125][15] | 82.56 ± 0.88 |
| bs8-e10-lr3e-05 | [0.8194][16] | [0.8235][17] | [0.8183][18] | [0.8305][19] | [0.8163][20] | 82.16 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
|
stefan-it
| 2023-10-17T22:29:34Z | 2 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T20:22:00Z |
---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
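The tagger can be loaded and used directly with the Flair library. Below is a minimal inference sketch, assuming a standard Flair installation and that the predicted spans use the `ner` label type (an assumption, not stated in this card); only the model identifier is taken from this repository:
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load("stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4")
# Tag a short historical Swedish example (taken from the widget text above)
sentence = Sentence("Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen .")
tagger.predict(sentence)
# Print the predicted entity spans ("ner" is the assumed label type)
for entity in sentence.get_spans("ner"):
    print(entity)
```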
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr5e-05 | [0.8224][1] | [0.8222][2] | [0.8352][3] | [0.8466][4] | [0.8216][5] | 82.96 ± 0.99 |
| bs4-e10-lr3e-05 | [0.819][6] | [0.8118][7] | [0.825][8] | [0.8367][9] | [0.8364][10] | 82.58 ± 0.97 |
| bs8-e10-lr5e-05 | [0.8355][11] | [0.8199][12] | [0.8253][13] | [0.8349][14] | [0.8125][15] | 82.56 ± 0.88 |
| bs8-e10-lr3e-05 | [0.8194][16] | [0.8235][17] | [0.8183][18] | [0.8305][19] | [0.8163][20] | 82.16 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
|
stefan-it
| 2023-10-17T22:29:33Z | 2 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T19:54:52Z |
---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
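The tagger can be loaded and used directly with the Flair library. Below is a minimal inference sketch, assuming a standard Flair installation and that the predicted spans use the `ner` label type (an assumption, not stated in this card); only the model identifier is taken from this repository:
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load("stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2")
# Tag a short historical Swedish example (taken from the widget text above)
sentence = Sentence("Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen .")
tagger.predict(sentence)
# Print the predicted entity spans ("ner" is the assumed label type)
for entity in sentence.get_spans("ner"):
    print(entity)
```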
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr5e-05 | [0.8224][1] | [0.8222][2] | [0.8352][3] | [0.8466][4] | [0.8216][5] | 82.96 ± 0.99 |
| bs4-e10-lr3e-05 | [0.819][6] | [0.8118][7] | [0.825][8] | [0.8367][9] | [0.8364][10] | 82.58 ± 0.97 |
| bs8-e10-lr5e-05 | [0.8355][11] | [0.8199][12] | [0.8253][13] | [0.8349][14] | [0.8125][15] | 82.56 ± 0.88 |
| bs8-e10-lr3e-05 | [0.8194][16] | [0.8235][17] | [0.8183][18] | [0.8305][19] | [0.8163][20] | 82.16 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
|
stefan-it
| 2023-10-17T22:29:32Z | 1 | 0 |
flair
|
[
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"sv",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] |
token-classification
| 2023-10-17T20:45:06Z |
---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
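The tagger can be loaded and used directly with the Flair library. Below is a minimal inference sketch, assuming a standard Flair installation and that the predicted spans use the `ner` label type (an assumption, not stated in this card); only the model identifier is taken from this repository:
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load("stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5")
# Tag a short historical Swedish example (taken from the widget text above)
sentence = Sentence("Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio - nen .")
tagger.predict(sentence)
# Print the predicted entity spans ("ner" is the assumed label type)
for entity in sentence.get_spans("ner"):
    print(entity)
```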
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr5e-05 | [0.8224][1] | [0.8222][2] | [0.8352][3] | [0.8466][4] | [0.8216][5] | 82.96 ± 0.99 |
| bs4-e10-lr3e-05 | [0.819][6] | [0.8118][7] | [0.825][8] | [0.8367][9] | [0.8364][10] | 82.58 ± 0.97 |
| bs8-e10-lr5e-05 | [0.8355][11] | [0.8199][12] | [0.8253][13] | [0.8349][14] | [0.8125][15] | 82.56 ± 0.88 |
| bs8-e10-lr3e-05 | [0.8194][16] | [0.8235][17] | [0.8183][18] | [0.8305][19] | [0.8163][20] | 82.16 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
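As a rough orientation, the sketch below shows how a comparable fine-tuning run could be set up with Flair's `ModelTrainer`. It is not the actual hmBench script: the corpus folder, file names and output path are placeholders, while the hyper-parameters mirror this model's configuration (`bs8`, `lr5e-05`, `e10`, first-subtoken pooling, no CRF):
```python
from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Placeholder corpus: point this at a local CoNLL-style export of the
# Swedish NewsEye (HIPE-2022) data -- folder and column layout are assumptions.
corpus = ColumnCorpus(
    "data/newseye-sv",
    column_format={0: "text", 1: "ner"},
    train_file="train.tsv",
    dev_file="dev.tsv",
    test_file="test.tsv",
)
label_dict = corpus.make_label_dictionary(label_type="ner")

# Backbone configured as in the model name: last layer only ("layers-1"),
# first-subtoken pooling ("poolingfirst"), fine-tuning enabled.
embeddings = TransformerWordEmbeddings(
    model="hmteams/teams-base-historic-multilingual-discriminator",
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
)

tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="ner",
    use_crf=False,  # "crfFalse" in the model name
    use_rnn=False,
    reproject_embeddings=False,
)

trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune(
    "resources/taggers/newseye-sv-hmteams",  # output directory (placeholder)
    learning_rate=5e-05,   # "lr5e-05"
    mini_batch_size=8,     # "bs8"
    max_epochs=10,         # "e10"
)
```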
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T22:29:31Z | 2 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "sv", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T20:18:32Z |
---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr5e-05 | [0.8224][1] | [0.8222][2] | [0.8352][3] | [0.8466][4] | [0.8216][5] | 82.96 ± 0.99 |
| bs4-e10-lr3e-05 | [0.819][6] | [0.8118][7] | [0.825][8] | [0.8367][9] | [0.8364][10] | 82.58 ± 0.97 |
| bs8-e10-lr5e-05 | [0.8355][11] | [0.8199][12] | [0.8253][13] | [0.8349][14] | [0.8125][15] | 82.56 ± 0.88 |
| bs8-e10-lr3e-05 | [0.8194][16] | [0.8235][17] | [0.8183][18] | [0.8305][19] | [0.8163][20] | 82.16 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T22:29:31Z | 2 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "sv", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T19:51:27Z |
---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr5e-05 | [0.8224][1] | [0.8222][2] | [0.8352][3] | [0.8466][4] | [0.8216][5] | 82.96 ± 0.99 |
| bs4-e10-lr3e-05 | [0.819][6] | [0.8118][7] | [0.825][8] | [0.8367][9] | [0.8364][10] | 82.58 ± 0.97 |
| bs8-e10-lr5e-05 | [0.8355][11] | [0.8199][12] | [0.8253][13] | [0.8349][14] | [0.8125][15] | 82.56 ± 0.88 |
| bs8-e10-lr3e-05 | [0.8194][16] | [0.8235][17] | [0.8183][18] | [0.8305][19] | [0.8163][20] | 82.16 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T22:29:30Z | 1 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "sv", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T20:28:42Z |
---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr5e-05 | [0.8224][1] | [0.8222][2] | [0.8352][3] | [0.8466][4] | [0.8216][5] | 82.96 ± 0.99 |
| bs4-e10-lr3e-05 | [0.819][6] | [0.8118][7] | [0.825][8] | [0.8367][9] | [0.8364][10] | 82.58 ± 0.97 |
| bs8-e10-lr5e-05 | [0.8355][11] | [0.8199][12] | [0.8253][13] | [0.8349][14] | [0.8125][15] | 82.56 ± 0.88 |
| bs8-e10-lr3e-05 | [0.8194][16] | [0.8235][17] | [0.8183][18] | [0.8305][19] | [0.8163][20] | 82.16 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T22:29:29Z | 2 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "sv", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T19:48:16Z |
---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr5e-05 | [0.8224][1] | [0.8222][2] | [0.8352][3] | [0.8466][4] | [0.8216][5] | 82.96 ± 0.99 |
| bs4-e10-lr3e-05 | [0.819][6] | [0.8118][7] | [0.825][8] | [0.8367][9] | [0.8364][10] | 82.58 ± 0.97 |
| bs8-e10-lr5e-05 | [0.8355][11] | [0.8199][12] | [0.8253][13] | [0.8349][14] | [0.8125][15] | 82.56 ± 0.88 |
| bs8-e10-lr3e-05 | [0.8194][16] | [0.8235][17] | [0.8183][18] | [0.8305][19] | [0.8163][20] | 82.16 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T22:29:29Z | 2 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "sv", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T20:15:13Z |
---
language: sv
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Värri , Teittinen , Forsman , Tensik - kala m . fl . anslöto sig till reservatio
- nen , hvaremot lm Fieandt , Huopo - nen , Koskelin , Leppänen , ( Li - belits
) , Eklund m . fl . förordade ut - skottets formulering af § 11 .
---
# Fine-tuned Flair Model on Swedish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Swedish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr5e-05 | [0.8224][1] | [0.8222][2] | [0.8352][3] | [0.8466][4] | [0.8216][5] | 82.96 ± 0.99 |
| bs4-e10-lr3e-05 | [0.819][6] | [0.8118][7] | [0.825][8] | [0.8367][9] | [0.8364][10] | 82.58 ± 0.97 |
| bs8-e10-lr5e-05 | [0.8355][11] | [0.8199][12] | [0.8253][13] | [0.8349][14] | [0.8125][15] | 82.56 ± 0.88 |
| bs8-e10-lr3e-05 | [0.8194][16] | [0.8235][17] | [0.8183][18] | [0.8305][19] | [0.8163][20] | 82.16 ± 0.5 |
[1]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-sv-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T22:29:25Z | 1 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fi", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T18:01:16Z |
---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
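For quick experimentation, the checkpoint can presumably be loaded directly from the Model Hub with Flair. The following is a minimal usage sketch (not part of the original training setup); the example sentence is taken from the widget above:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the Hugging Face Model Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3"
)

# Tag a historic Finnish sentence (OCR-style tokenization, as in the widget example)
sentence = Sentence(
    "Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi "
    "kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon "
    "näyttelyn puolesta ."
)
tagger.predict(sentence)

# Print the detected entities (PER, LOC, ORG, HumanProd)
for entity in sentence.get_spans("ner"):
    print(entity)
```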
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr5e-05 | [0.7843][1] | [0.8125][2] | [0.7775][3] | [0.7964][4] | [0.7815][5] | 79.04 ± 1.27 |
| bs4-e10-lr3e-05 | [0.7749][6] | [0.7865][7] | [0.793][8] | [0.7664][9] | [0.7759][10] | 77.93 ± 0.93 |
| bs8-e10-lr3e-05 | [0.7686][11] | [0.7766][12] | [0.7793][13] | [0.7661][14] | [0.7835][15] | 77.48 ± 0.65 |
| bs4-e10-lr5e-05 | [0.7632][16] | [0.7716][17] | [0.7775][18] | [0.7742][19] | [0.7865][20] | 77.46 ± 0.76 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T22:29:23Z | 3 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fi", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T18:25:21Z |
---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr5e-05 | [0.7843][1] | [0.8125][2] | [0.7775][3] | [0.7964][4] | [0.7815][5] | 79.04 ± 1.27 |
| bs4-e10-lr3e-05 | [0.7749][6] | [0.7865][7] | [0.793][8] | [0.7664][9] | [0.7759][10] | 77.93 ± 0.93 |
| bs8-e10-lr3e-05 | [0.7686][11] | [0.7766][12] | [0.7793][13] | [0.7661][14] | [0.7835][15] | 77.48 ± 0.65 |
| bs4-e10-lr5e-05 | [0.7632][16] | [0.7716][17] | [0.7775][18] | [0.7742][19] | [0.7865][20] | 77.46 ± 0.76 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T22:29:22Z | 2 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fi", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T17:30:12Z |
---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr5e-05 | [0.7843][1] | [0.8125][2] | [0.7775][3] | [0.7964][4] | [0.7815][5] | 79.04 ± 1.27 |
| bs4-e10-lr3e-05 | [0.7749][6] | [0.7865][7] | [0.793][8] | [0.7664][9] | [0.7759][10] | 77.93 ± 0.93 |
| bs8-e10-lr3e-05 | [0.7686][11] | [0.7766][12] | [0.7793][13] | [0.7661][14] | [0.7835][15] | 77.48 ± 0.65 |
| bs4-e10-lr5e-05 | [0.7632][16] | [0.7716][17] | [0.7775][18] | [0.7742][19] | [0.7865][20] | 77.46 ± 0.76 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T22:29:22Z | 5 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fi", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T17:44:01Z |
---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr5e-05 | [0.7843][1] | [0.8125][2] | [0.7775][3] | [0.7964][4] | [0.7815][5] | 79.04 ± 1.27 |
| bs4-e10-lr3e-05 | [0.7749][6] | [0.7865][7] | [0.793][8] | [0.7664][9] | [0.7759][10] | 77.93 ± 0.93 |
| bs8-e10-lr3e-05 | [0.7686][11] | [0.7766][12] | [0.7793][13] | [0.7661][14] | [0.7835][15] | 77.48 ± 0.65 |
| bs4-e10-lr5e-05 | [0.7632][16] | [0.7716][17] | [0.7775][18] | [0.7742][19] | [0.7865][20] | 77.46 ± 0.76 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T22:29:21Z | 2 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fi", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T18:21:42Z |
---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr5e-05 | [0.7843][1] | [0.8125][2] | [0.7775][3] | [0.7964][4] | [0.7815][5] | 79.04 ± 1.27 |
| bs4-e10-lr3e-05 | [0.7749][6] | [0.7865][7] | [0.793][8] | [0.7664][9] | [0.7759][10] | 77.93 ± 0.93 |
| bs8-e10-lr3e-05 | [0.7686][11] | [0.7766][12] | [0.7793][13] | [0.7661][14] | [0.7835][15] | 77.48 ± 0.65 |
| bs4-e10-lr5e-05 | [0.7632][16] | [0.7716][17] | [0.7775][18] | [0.7742][19] | [0.7865][20] | 77.46 ± 0.76 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T22:29:20Z | 3 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fi", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T18:07:55Z |
---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr5e-05 | [0.7843][1] | [0.8125][2] | [0.7775][3] | [0.7964][4] | [0.7815][5] | 79.04 ± 1.27 |
| bs4-e10-lr3e-05 | [0.7749][6] | [0.7865][7] | [0.793][8] | [0.7664][9] | [0.7759][10] | 77.93 ± 0.93 |
| bs8-e10-lr3e-05 | [0.7686][11] | [0.7766][12] | [0.7793][13] | [0.7661][14] | [0.7835][15] | 77.48 ± 0.65 |
| bs4-e10-lr5e-05 | [0.7632][16] | [0.7716][17] | [0.7775][18] | [0.7742][19] | [0.7865][20] | 77.46 ± 0.76 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
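For reference, here is a minimal inference sketch, assuming the checkpoint in this repository loads with Flair's standard `SequenceTagger.load`; the repository name (from the metadata row above) and the example sentence (from the widget) are taken from this card, everything else is illustrative:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger directly from the Hugging Face Model Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3"
)

# The example sentence from the widget above (tokens are pre-separated by spaces).
sentence = Sentence(
    "Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi "
    "kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon "
    "näyttelyn puolesta ."
)

# Run prediction and print the recognized entity spans (PER, LOC, ORG, HumanProd).
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```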
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T22:29:19Z | 5 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fi", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T17:40:23Z |
---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set (a short sketch after the result links below shows how the average and ± values are computed):
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr5e-05 | [0.7843][1] | [0.8125][2] | [0.7775][3] | [0.7964][4] | [0.7815][5] | 79.04 ± 1.27 |
| bs4-e10-lr3e-05 | [0.7749][6] | [0.7865][7] | [0.793][8] | [0.7664][9] | [0.7759][10] | 77.93 ± 0.93 |
| bs8-e10-lr3e-05 | [0.7686][11] | [0.7766][12] | [0.7793][13] | [0.7661][14] | [0.7835][15] | 77.48 ± 0.65 |
| bs4-e10-lr5e-05 | [0.7632][16] | [0.7716][17] | [0.7775][18] | [0.7742][19] | [0.7865][20] | 77.46 ± 0.76 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
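The averaged score and the ± value in the table above can be reproduced from the five per-run scores; here is a minimal sketch, assuming the ± value is the population standard deviation (which matches the reported numbers):

```python
import numpy as np

# Per-run development F1-scores of the best configuration (bs8-e10-lr5e-05) from the table above.
runs = np.array([0.7843, 0.8125, 0.7775, 0.7964, 0.7815])

# Mean and population standard deviation, reported as percentages.
print(f"{100 * runs.mean():.2f} ± {100 * runs.std():.2f}")  # 79.04 ± 1.27
```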
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T22:29:17Z | 3 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fi", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T17:50:51Z |
---
language: fi
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia olisi
kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San Franciscon
näyttelyn puolesta .
---
# Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[Finnish NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr5e-05 | [0.7843][1] | [0.8125][2] | [0.7775][3] | [0.7964][4] | [0.7815][5] | 79.04 ± 1.27 |
| bs4-e10-lr3e-05 | [0.7749][6] | [0.7865][7] | [0.793][8] | [0.7664][9] | [0.7759][10] | 77.93 ± 0.93 |
| bs8-e10-lr3e-05 | [0.7686][11] | [0.7766][12] | [0.7793][13] | [0.7661][14] | [0.7835][15] | 77.48 ± 0.65 |
| bs4-e10-lr5e-05 | [0.7632][16] | [0.7716][17] | [0.7775][18] | [0.7742][19] | [0.7865][20] | 77.46 ± 0.76 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fi-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T22:29:05Z | 0 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T16:08:38Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.825][1] | [0.8248][2] | [0.8288][3] | [0.8309][4] | [0.8281][5] | 82.75 ± 0.23 |
| bs4-e10-lr3e-05 | [0.83][6] | [0.8345][7] | [0.8162][8] | [0.8223][9] | [0.8346][10] | 82.75 ± 0.72 |
| bs8-e10-lr5e-05 | [0.8237][11] | [0.8165][12] | [0.8189][13] | [0.8297][14] | [0.8283][15] | 82.34 ± 0.51 |
| bs4-e10-lr5e-05 | [0.8114][16] | [0.8061][17] | [0.8112][18] | [0.8131][19] | [0.8182][20] | 81.2 ± 0.39 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
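A minimal inference sketch for this French model is shown below, assuming the checkpoint loads with Flair's standard `SequenceTagger.load`; the repository name (from the metadata row above) and the example sentence (from the widget) come from this card, the rest is illustrative:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned French NewsEye tagger from the Hugging Face Model Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4"
)

# The example sentence from the widget above (tokens are pre-separated by spaces).
sentence = Sentence(
    "Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne , "
    "sa force militaire , le peu d ' intérêts personnels qu ' elle peut avoir dans la "
    "question d ' Orient ."
)

# Run prediction and print the recognized entity spans (PER, LOC, ORG, HumanProd).
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```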
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T22:29:05Z | 4 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T15:08:51Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.825][1] | [0.8248][2] | [0.8288][3] | [0.8309][4] | [0.8281][5] | 82.75 ± 0.23 |
| bs4-e10-lr3e-05 | [0.83][6] | [0.8345][7] | [0.8162][8] | [0.8223][9] | [0.8346][10] | 82.75 ± 0.72 |
| bs8-e10-lr5e-05 | [0.8237][11] | [0.8165][12] | [0.8189][13] | [0.8297][14] | [0.8283][15] | 82.34 ± 0.51 |
| bs4-e10-lr5e-05 | [0.8114][16] | [0.8061][17] | [0.8112][18] | [0.8131][19] | [0.8182][20] | 81.2 ± 0.39 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T22:29:04Z | 0 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T16:51:48Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.825][1] | [0.8248][2] | [0.8288][3] | [0.8309][4] | [0.8281][5] | 82.75 ± 0.23 |
| bs4-e10-lr3e-05 | [0.83][6] | [0.8345][7] | [0.8162][8] | [0.8223][9] | [0.8346][10] | 82.75 ± 0.72 |
| bs8-e10-lr5e-05 | [0.8237][11] | [0.8165][12] | [0.8189][13] | [0.8297][14] | [0.8283][15] | 82.34 ± 0.51 |
| bs4-e10-lr5e-05 | [0.8114][16] | [0.8061][17] | [0.8112][18] | [0.8131][19] | [0.8182][20] | 81.2 ± 0.39 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T22:29:03Z | 4 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T14:52:25Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.825][1] | [0.8248][2] | [0.8288][3] | [0.8309][4] | [0.8281][5] | 82.75 ± 0.23 |
| bs4-e10-lr3e-05 | [0.83][6] | [0.8345][7] | [0.8162][8] | [0.8223][9] | [0.8346][10] | 82.75 ± 0.72 |
| bs8-e10-lr5e-05 | [0.8237][11] | [0.8165][12] | [0.8189][13] | [0.8297][14] | [0.8283][15] | 82.34 ± 0.51 |
| bs4-e10-lr5e-05 | [0.8114][16] | [0.8061][17] | [0.8112][18] | [0.8131][19] | [0.8182][20] | 81.2 ± 0.39 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T22:29:03Z | 4 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T15:52:02Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.825][1] | [0.8248][2] | [0.8288][3] | [0.8309][4] | [0.8281][5] | 82.75 ± 0.23 |
| bs4-e10-lr3e-05 | [0.83][6] | [0.8345][7] | [0.8162][8] | [0.8223][9] | [0.8346][10] | 82.75 ± 0.72 |
| bs8-e10-lr5e-05 | [0.8237][11] | [0.8165][12] | [0.8189][13] | [0.8297][14] | [0.8283][15] | 82.34 ± 0.51 |
| bs4-e10-lr5e-05 | [0.8114][16] | [0.8061][17] | [0.8112][18] | [0.8131][19] | [0.8182][20] | 81.2 ± 0.39 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | 2023-10-17T22:29:02Z | 4 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T16:35:15Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.825][1] | [0.8248][2] | [0.8288][3] | [0.8309][4] | [0.8281][5] | 82.75 ± 0.23 |
| bs4-e10-lr3e-05 | [0.83][6] | [0.8345][7] | [0.8162][8] | [0.8223][9] | [0.8346][10] | 82.75 ± 0.72 |
| bs8-e10-lr5e-05 | [0.8237][11] | [0.8165][12] | [0.8189][13] | [0.8297][14] | [0.8283][15] | 82.34 ± 0.51 |
| bs4-e10-lr5e-05 | [0.8114][16] | [0.8061][17] | [0.8112][18] | [0.8131][19] | [0.8182][20] | 81.2 ± 0.39 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T22:29:02Z | 2 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T13:53:18Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.825][1] | [0.8248][2] | [0.8288][3] | [0.8309][4] | [0.8281][5] | 82.75 ± 0.23 |
| bs4-e10-lr3e-05 | [0.83][6] | [0.8345][7] | [0.8162][8] | [0.8223][9] | [0.8346][10] | 82.75 ± 0.72 |
| bs8-e10-lr5e-05 | [0.8237][11] | [0.8165][12] | [0.8189][13] | [0.8297][14] | [0.8283][15] | 82.34 ± 0.51 |
| bs4-e10-lr5e-05 | [0.8114][16] | [0.8061][17] | [0.8112][18] | [0.8131][19] | [0.8182][20] | 81.2 ± 0.39 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3 | stefan-it | 2023-10-17T22:29:01Z | 1 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T14:35:54Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.825][1] | [0.8248][2] | [0.8288][3] | [0.8309][4] | [0.8281][5] | 82.75 ± 0.23 |
| bs4-e10-lr3e-05 | [0.83][6] | [0.8345][7] | [0.8162][8] | [0.8223][9] | [0.8346][10] | 82.75 ± 0.72 |
| bs8-e10-lr5e-05 | [0.8237][11] | [0.8165][12] | [0.8189][13] | [0.8297][14] | [0.8283][15] | 82.34 ± 0.51 |
| bs4-e10-lr5e-05 | [0.8114][16] | [0.8061][17] | [0.8112][18] | [0.8131][19] | [0.8182][20] | 81.2 ± 0.39 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4 | stefan-it | 2023-10-17T22:29:01Z | 3 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T15:35:34Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset is comprised of diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.825][1] | [0.8248][2] | [0.8288][3] | [0.8309][4] | [0.8281][5] | 82.75 ± 0.23 |
| bs4-e10-lr3e-05 | [0.83][6] | [0.8345][7] | [0.8162][8] | [0.8223][9] | [0.8346][10] | 82.75 ± 0.72 |
| bs8-e10-lr5e-05 | [0.8237][11] | [0.8165][12] | [0.8189][13] | [0.8297][14] | [0.8283][15] | 82.34 ± 0.51 |
| bs4-e10-lr5e-05 | [0.8114][16] | [0.8061][17] | [0.8112][18] | [0.8131][19] | [0.8182][20] | 81.2 ± 0.39 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1 | stefan-it | 2023-10-17T22:29:00Z | 2 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T12:38:11Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
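A minimal usage sketch is shown below. It is an illustrative assumption rather than part of the original card: it presumes `flair` is installed, that this repository's checkpoint loads as a standard Flair `SequenceTagger`, and that its label type is `ner`.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load this repository's fine-tuned tagger from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1"
)

# Historical French sentence (first clause of the widget example above)
sentence = Sentence("Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne .")

# Predict PER, LOC, ORG and HumanProd entities and print the recognised spans
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```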
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.825][1] | [0.8248][2] | [0.8288][3] | [0.8309][4] | [0.8281][5] | 82.75 ± 0.23 |
| bs4-e10-lr3e-05 | [0.83][6] | [0.8345][7] | [0.8162][8] | [0.8223][9] | [0.8346][10] | 82.75 ± 0.72 |
| bs8-e10-lr5e-05 | [0.8237][11] | [0.8165][12] | [0.8189][13] | [0.8297][14] | [0.8283][15] | 82.34 ± 0.51 |
| bs4-e10-lr5e-05 | [0.8114][16] | [0.8061][17] | [0.8112][18] | [0.8131][19] | [0.8182][20] | 81.2 ± 0.39 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
| stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | 2023-10-17T22:29:00Z | 3 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T13:36:51Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
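A minimal usage sketch is shown below. It is an illustrative assumption rather than part of the original card: it presumes `flair` is installed, that this repository's checkpoint loads as a standard Flair `SequenceTagger`, and that its label type is `ner`.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load this repository's fine-tuned tagger from the Hugging Face Hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2"
)

# Historical French sentence (first clause of the widget example above)
sentence = Sentence("Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne .")

# Predict PER, LOC, ORG and HumanProd entities and print the recognised spans
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```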
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.825][1] | [0.8248][2] | [0.8288][3] | [0.8309][4] | [0.8281][5] | 82.75 ± 0.23 |
| bs4-e10-lr3e-05 | [0.83][6] | [0.8345][7] | [0.8162][8] | [0.8223][9] | [0.8346][10] | 82.75 ± 0.72 |
| bs8-e10-lr5e-05 | [0.8237][11] | [0.8165][12] | [0.8189][13] | [0.8297][14] | [0.8283][15] | 82.34 ± 0.51 |
| bs4-e10-lr5e-05 | [0.8114][16] | [0.8061][17] | [0.8112][18] | [0.8131][19] | [0.8182][20] | 81.2 ± 0.39 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️
| hmteams/flair-hipe-2022-newseye-fr | hmteams | 2023-10-17T22:28:59Z | 5 | 0 | flair | ["flair", "pytorch", "tensorboard", "token-classification", "sequence-tagger-model", "fr", "base_model:hmteams/teams-base-historic-multilingual-discriminator", "base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator", "license:mit", "region:us"] | token-classification | 2023-10-17T15:22:05Z |
---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: Le Moniteur universel fait ressortir les avantages de la situation de l '
Allemagne , sa force militaire , le peu d ' intérêts personnels qu ' elle peut
avoir dans la question d ' Orient .
---
# Fine-tuned Flair Model on French NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[French NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmTEAMS as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
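A minimal usage sketch is shown below. It is an illustrative assumption rather than part of the original card: it presumes `flair` is installed, that this repository's checkpoint loads as a standard Flair `SequenceTagger`, and that its label type is `ner`.

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load this repository's fine-tuned tagger from the Hugging Face Hub
tagger = SequenceTagger.load("hmteams/flair-hipe-2022-newseye-fr")

# Historical French sentence (first clause of the widget example above)
sentence = Sentence("Le Moniteur universel fait ressortir les avantages de la situation de l ' Allemagne .")

# Predict PER, LOC, ORG and HumanProd entities and print the recognised spans
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```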
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs8-e10-lr3e-05 | [0.825][1] | [0.8248][2] | [0.8288][3] | [0.8309][4] | [0.8281][5] | 82.75 ± 0.23 |
| bs4-e10-lr3e-05 | [0.83][6] | [0.8345][7] | [0.8162][8] | [0.8223][9] | [0.8346][10] | 82.75 ± 0.72 |
| bs8-e10-lr5e-05 | [0.8237][11] | [0.8165][12] | [0.8189][13] | [0.8297][14] | [0.8283][15] | 82.34 ± 0.51 |
| bs4-e10-lr5e-05 | [0.8114][16] | [0.8061][17] | [0.8112][18] | [0.8131][19] | [0.8182][20] | 81.2 ± 0.39 |
[1]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5- and hmTEAMS-based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️