Dataset columns (dtype and value range):

| Column | Dtype | Values |
|---|---|---|
| repo_id | string | lengths 4–110 |
| author | string | lengths 2–27 |
| model_type | string | lengths 2–29 |
| files_per_repo | int64 | 2–15.4k |
| downloads_30d | int64 | 0–19.9M |
| library | string | lengths 2–37 |
| likes | int64 | 0–4.34k |
| pipeline | string | lengths 5–30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | lengths 2–30 |
| languages | string | lengths 4–1.63k |
| datasets | string | lengths 2–2.58k |
| co2 | string (categorical) | 29 values |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–15 |
| prs_closed | int64 | 0–28 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | string | lengths 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401–598k |
| is_nc | bool | 1 class |
| readme | string | lengths 0–598k |
| hash | string | length 32 |
PandaKata/distilbert-base-uncased-finetuned-imdb
PandaKata
distilbert
9
2
transformers
0
fill-mask
true
false
false
apache-2.0
null
['imdb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,279
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4442 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6985 | 1.0 | 157 | 2.5612 | | 2.562 | 2.0 | 314 | 2.4226 | | 2.5316 | 3.0 | 471 | 2.4218 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
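The card above only lists training details. A minimal usage sketch with the Transformers fill-mask pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned masked-language model from the Hub
fill_mask = pipeline("fill-mask", model="PandaKata/distilbert-base-uncased-finetuned-imdb")

# Ask the model to complete a movie-review-style sentence
for prediction in fill_mask("This movie was an absolute [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```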
748b3bb4c640caac52339abaf39afae9
toastynews/electra-hongkongese-base-discriminator
toastynews
electra
6
175
transformers
0
null
true
true
false
apache-2.0
['yue']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,752
false
# ELECTRA Hongkongese Base ## Model description ELECTRA trained exclusively with data from Hong Kong. A significant amount of Hongkongese/Cantonese/Yue is included in the training data. ## Intended uses & limitations This model is an alternative to Chinese models. It may offer better performance for tasks catering to the language usage of Hong Kongers. Yue Wikipedia, which is much smaller than Chinese Wikipedia, is used, so this model will lack the breadth of knowledge of other Chinese models. #### How to use This is the base model trained from the official repo. Further finetuning will be needed for use on downstream tasks. Other model sizes are also available. #### Limitations and bias The training data consists mostly of news articles and blogs. There is probably a bias towards formal language usage. ## Training data The following is the list of data sources. The total is about 507M characters. | Data | % | | ------------------------------------------------- | --: | | News Articles / Blogs | 58% | | Yue Wikipedia / EVCHK | 18% | | Restaurant Reviews | 12% | | Forum Threads | 12% | | Online Fiction | 1% | The following is the distribution of different languages within the corpus. | Language | % | | ------------------------------------------------- | --: | | Standard Chinese | 62% | | Hongkongese | 30% | | English | 8% | ## Training procedure The model was trained on a single TPUv3 from the official repo with the default parameters. | Parameter | Value | | ------------------------------------------------ | ----: | | Batch Size | 256 | | Max Sequence Size | 512 | | Vocab Size | 30000 | *Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC)* ## Eval results Average evaluation task results over 10 runs. Comparison using the original repo model and code. Chinese models are available from [Joint Laboratory of HIT and iFLYTEK Research (HFL)](https://huggingface.co/hfl) | Model | DRCD (EM/F1) | openrice-senti | lihkg-cat | wordshk-sem | |:-----------:|:------------:|:--------------:|:---------:|:-----------:| | Chinese | 86.6 / 91.7 | 79.1 | 67.4 | 88.1 | | Hongkongese | 83.0 / 89.6 | 81.5 | 70.0 | 90.1 |
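Since the card's "How to use" section has no snippet, here is a minimal sketch of loading the discriminator as a plain encoder for further fine-tuning (the example sentence is illustrative):

```python
from transformers import AutoTokenizer, AutoModel

# Load the Hongkongese ELECTRA discriminator as a generic encoder;
# as the card notes, fine-tuning is still needed for downstream tasks.
tokenizer = AutoTokenizer.from_pretrained("toastynews/electra-hongkongese-base-discriminator")
model = AutoModel.from_pretrained("toastynews/electra-hongkongese-base-discriminator")

inputs = tokenizer("香港今日天氣點樣？", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```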
7c1e417b09eee8c89ef8e2fce1207056
Omerdor/128-DRUSEN-cell_
Omerdor
null
23
0
diffusers
0
null
false
false
false
apache-2.0
['en']
['imagefolder']
null
0
0
0
0
0
0
0
[]
false
true
true
1,193
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # 128-DRUSEN-cell_ ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `imagefolder` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-06 - train_batch_size: 64 - eval_batch_size: 4 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/Omerdor/128-DRUSEN-cell_/tensorboard?#scalars)
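The card leaves its usage snippet as a TODO. A minimal sketch, assuming the repo stores a standard unconditional diffusers pipeline (the pipeline class, step count, and output filename are assumptions):

```python
from diffusers import DiffusionPipeline

# Load the unconditional diffusion pipeline from the Hub
pipeline = DiffusionPipeline.from_pretrained("Omerdor/128-DRUSEN-cell_")

# Sample one 128x128 image and save it
image = pipeline(num_inference_steps=50).images[0]
image.save("drusen_sample.png")
```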
02431ec60cbafa9b3eb0fe9f79219a69
LeoCordoba/beto2beto-mlsum
LeoCordoba
encoder-decoder
11
24
transformers
0
summarization
true
false
false
apache-2.0
['es']
['mlsum - es']
null
0
0
0
0
0
0
0
['summarization', 'spanish', 'encoder-decoder', 'beto']
true
true
true
1,013
false
## beto2beto-mlsum This model was trained on the Spanish section of MLSum: https://paperswithcode.com/sota/abstractive-text-summarization-on-mlsum. ## Hyperparameters { "dataset_config": "es", "dataset_name": "mlsum", "do_eval": true, "do_predict": true, "do_train": true, "fp16": true, "max_target_length": 64, "num_train_epochs": 10, "per_device_eval_batch_size": 4, "per_device_train_batch_size": 4, "predict_with_generate": true, "sagemaker_container_log_level": 20, "sagemaker_program": "run_summarization.py", "seed": 7, "summary_column": "summary", "text_column": "text" } ## Usage ## Results | metric | score | | --- | ----- | | validation_loss | 2.5021677017211914 | | validation_rouge1 | 26.1256 | | validation_rouge2 | 9.2552 | | validation_rougeL | 21.4899 | | validation_rougeLsum | 21.8194 | | test_loss | 2.57672381401062 | | test_rouge1 | 25.8639 | | test_rouge2 | 8.911 | | test_rougeL | 21.2426 | | test_rougeLsum | 21.5859 |
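The card's Usage section is empty. A minimal sketch, assuming the checkpoint loads through the standard Transformers summarization pipeline as its tag suggests (the Spanish input text is illustrative):

```python
from transformers import pipeline

# Load the Spanish encoder-decoder summarizer
summarizer = pipeline("summarization", model="LeoCordoba/beto2beto-mlsum")

texto = (
    "La Organización Mundial de la Salud presentó hoy un informe sobre el avance "
    "de las campañas de vacunación en América Latina y sus efectos en la región."
)
print(summarizer(texto, max_length=64)[0]["summary_text"])
```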
67e87f3f5ec87e5dadcd0b8710a70e84
sd-concepts-library/tubby
sd-concepts-library
null
11
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,174
false
### tubby on Stable Diffusion This is the `<tubby>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<tubby> 0](https://huggingface.co/sd-concepts-library/tubby/resolve/main/concept_images/2.jpeg) ![<tubby> 1](https://huggingface.co/sd-concepts-library/tubby/resolve/main/concept_images/3.jpeg) ![<tubby> 2](https://huggingface.co/sd-concepts-library/tubby/resolve/main/concept_images/1.jpeg) ![<tubby> 3](https://huggingface.co/sd-concepts-library/tubby/resolve/main/concept_images/5.jpeg) ![<tubby> 4](https://huggingface.co/sd-concepts-library/tubby/resolve/main/concept_images/4.jpeg) ![<tubby> 5](https://huggingface.co/sd-concepts-library/tubby/resolve/main/concept_images/0.jpeg)
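Besides the notebooks linked above, a minimal diffusers sketch of pulling the learned `<tubby>` embedding into a pipeline (the base Stable Diffusion checkpoint and the prompt are assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint and register the learned <tubby> token
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/tubby")

image = pipe("a photo in the style of <tubby>").images[0]
image.save("tubby.png")
```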
db6144e3789aec84813c50d17a18e923
Hemlok/LonganMix
Hemlok
null
20
243
diffusers
50
text-to-image
false
false
false
other
null
null
null
1
1
0
0
0
0
0
['stable-diffusion', 'text-to-image']
false
true
true
2,137
false
# 『Longan Mix』 <img src="https://i.imgur.com/MQ1Krv6.png" width="1024" height=""> "Longan Mix" is a merged model based on [7th_Layer](https://huggingface.co/syaimu/7th_Layer). ---- # ◆Discord [Join Discord Server](https://discord.gg/eN6aSWRddT) - The merged model community of Hemlok. ---- # ◆About - This model is designed with "anime-style" and "cute" in mind. - The realistic element is understated. - Sampler: DDIM or DPM++ SDE Karras - Steps: 20~ - Clipskip: 2 - CFG Scale: 5-12 - Denoise strength: 0.5-0.7 (as you like) - Negative prompts are recommended for "7th_Layer". - VAE: as you like (any will work; if none is used, colors may come out lighter) ---- # ◆Colab Note [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Blvf6pxo3dyh94BJ3zkKuti9EqKr6gCA?usp=share_link) - (I have not checked the operation but it probably works.) ---- # ◆Comparison <img src="https://i.imgur.com/ryxZy77.png" width="1700" height=""> <img src="https://i.imgur.com/RG9dBLK.png" width="1700" height=""> ``` (masterpiece:1.2), (best quality:1.2), (((kawaii))), smile, cowboy shot, (delicate sunlight composition) 8, downtown, 1girl, solo, looking at viewer, full body, (silver long hair), (buzz cut), (shining blue eyes), (beauty detailed eye), ``` ---- <img src="https://i.imgur.com/y76oFHc.jpg" width="1700" height=""> <img src="https://i.imgur.com/pKLjYCt.jpg" width="1700" height=""> ``` (masterpiece:1.2), (best quality:1.2), (morning), (school), 1girl, solo, looking at viewer, cowboy shot, (school uniform), smile, black hair, stockings ``` ---- # ◆Sampler & CFG Scale <img src="https://i.imgur.com/yVeYk9f.jpg" width="1700" height=""> <img src="https://i.imgur.com/jneg6gY.jpg" width="1700" height=""> ``` (masterpiece:1.2), (best quality:1.2), kawaii, winter, ((street)), ((building)), (noon), 1girl, solo, looking at viewer, ((maid uniform)), (twintails), long hair, blonde hair, smile, [small breast], shiny skin, ``` ---- # Disclaimer NSFW images have not been verified. The creation of SFW and NSFW images is at the discretion of the individual creator. ----
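Since the repo is published in diffusers format, a minimal text-to-image sketch using settings within the card's recommended ranges (the prompt, negative prompt, and exact values are illustrative assumptions):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the merged model in diffusers format
pipe = StableDiffusionPipeline.from_pretrained("Hemlok/LonganMix", torch_dtype=torch.float16).to("cuda")

prompt = "(masterpiece:1.2), (best quality:1.2), (((kawaii))), smile, 1girl, solo, looking at viewer"
negative_prompt = "(worst quality:1.4), (low quality:1.4)"

image = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=25, guidance_scale=7).images[0]
image.save("longanmix_sample.png")
```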
4f531dbce4e5100ccabf31e79e56221a
trevorj/BART_reddit_other
trevorj
bart
13
0
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,651
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BART_reddit_other This model is a fine-tuned version of [sshleifer/distilbart-xsum-6-6](https://huggingface.co/sshleifer/distilbart-xsum-6-6) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.5792 - Rouge1: 18.5705 - Rouge2: 5.0107 - Rougel: 15.2581 - Rougelsum: 16.082 - Gen Len: 19.402 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 3.7887 | 1.0 | 1875 | 3.6044 | 18.4668 | 5.182 | 15.359 | 16.169 | 19.341 | | 3.3816 | 2.0 | 3750 | 3.5628 | 18.0998 | 4.8937 | 15.0179 | 15.7615 | 17.789 | | 3.134 | 3.0 | 5625 | 3.5792 | 18.5705 | 5.0107 | 15.2581 | 16.082 | 19.402 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
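A minimal sketch of running the fine-tuned distilBART checkpoint directly with `generate` (the Reddit-style input post is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("trevorj/BART_reddit_other")
model = AutoModelForSeq2SeqLM.from_pretrained("trevorj/BART_reddit_other")

post = "TIFU by leaving my headphones plugged in during a meeting and blasting music to the whole office..."
inputs = tokenizer(post, return_tensors="pt", truncation=True, max_length=512)

# Generate a short TL;DR-style summary
summary_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```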
9fd418493d132c041e62f228cc8d37fe
eikoenchine/xlm-roberta-base-finetuned-panx-de-fr
eikoenchine
xlm-roberta
10
5
transformers
0
token-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,314
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1651 - F1: 0.8578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.211 | 1.0 | 715 | 0.1834 | 0.8266 | | 0.1447 | 2.0 | 1430 | 0.1624 | 0.8464 | | 0.0933 | 3.0 | 2145 | 0.1651 | 0.8578 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.7.0 - Tokenizers 0.12.1
c4c7bcda683acc2a3afc3c03e1046e8c
muhtasham/tiny-mlm-glue-cola-from-scratch-custom-tokenizer-expand-vocab-target-glue-qnli
muhtasham
bert
11
4
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,940
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-mlm-glue-cola-from-scratch-custom-tokenizer-expand-vocab-target-glue-qnli This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-cola-from-scratch-custom-tokenizer-expand-vocab](https://huggingface.co/muhtasham/tiny-mlm-glue-cola-from-scratch-custom-tokenizer-expand-vocab) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6582 - Accuracy: 0.6053 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6936 | 0.15 | 500 | 0.6930 | 0.5444 | | 0.6928 | 0.31 | 1000 | 0.6893 | 0.5737 | | 0.6786 | 0.46 | 1500 | 0.6640 | 0.5891 | | 0.6685 | 0.61 | 2000 | 0.6632 | 0.5938 | | 0.6651 | 0.76 | 2500 | 0.6606 | 0.6008 | | 0.6636 | 0.92 | 3000 | 0.6596 | 0.6021 | | 0.6589 | 1.07 | 3500 | 0.6590 | 0.5993 | | 0.659 | 1.22 | 4000 | 0.6583 | 0.6074 | | 0.6557 | 1.37 | 4500 | 0.6601 | 0.6077 | | 0.6542 | 1.53 | 5000 | 0.6582 | 0.6053 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.1.dev0 - Tokenizers 0.13.2
81fe048eeded0150be3214494b77cdc7
Geotrend/bert-base-lt-cased
Geotrend
bert
8
2
transformers
0
fill-mask
true
true
true
apache-2.0
['lt']
['wikipedia']
null
0
0
0
0
0
0
0
[]
false
true
true
1,283
false
# bert-base-lt-cased We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages. Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-lt-cased") model = AutoModel.from_pretrained("Geotrend/bert-base-lt-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermbert, title={Load What You Need: Smaller Versions of Mutlilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact amine@geotrend.fr for any question, feedback or request.
0cdf48943e39af6bc8750e11323b52b5
gokuls/bert-base-uncased-rte
gokuls
bert
17
63
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,735
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-rte This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.6540 - Accuracy: 0.6065 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7009 | 1.0 | 20 | 0.6781 | 0.5560 | | 0.6393 | 2.0 | 40 | 0.6540 | 0.6065 | | 0.4606 | 3.0 | 60 | 0.7134 | 0.6498 | | 0.2597 | 4.0 | 80 | 0.8379 | 0.6751 | | 0.1492 | 5.0 | 100 | 1.3531 | 0.6282 | | 0.0954 | 6.0 | 120 | 1.2220 | 0.6354 | | 0.0561 | 7.0 | 140 | 1.2282 | 0.6715 | | 0.0379 | 8.0 | 160 | 1.4368 | 0.6679 | | 0.0368 | 9.0 | 180 | 1.8559 | 0.6498 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
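A minimal sketch of scoring a premise/hypothesis pair with the fine-tuned RTE checkpoint (the two sentences are illustrative; the label names come from the model config):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gokuls/bert-base-uncased-rte")
model = AutoModelForSequenceClassification.from_pretrained("gokuls/bert-base-uncased-rte")

premise = "A man is playing a guitar on stage."
hypothesis = "Someone is performing music."

# RTE is a sentence-pair task: encode premise and hypothesis together
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```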
132baa72e1a2830b6d4ad32fde24b7f0
dchaplinsky/uk_ner_web_trf_base
dchaplinsky
null
16
9
spacy
2
token-classification
false
false
false
mit
['uk']
['ner-uk']
null
0
0
0
0
0
0
0
['spacy', 'token-classification']
false
true
true
832
false
# uk_ner_web_trf_base ## Model description **uk_ner_web_trf_base** is a fine-tuned [XLM-RoBERTa model](https://huggingface.co/xlm-roberta-base) that is ready to use for **Named Entity Recognition** and achieves performance close to the **state of the art** for the NER task for the Ukrainian language. It has been trained to recognize four types of entities: locations (LOC), organizations (ORG), persons (PERS) and miscellaneous (MISC). The model was fine-tuned on the [NER-UK dataset](https://github.com/lang-uk/ner-uk), released by the [lang-uk project](https://lang.org.ua). A bigger model, trained on xlm-roberta-large and offering **state-of-the-art** performance, is available [here](https://huggingface.co/dchaplinsky/uk_ner_web_trf_large). Copyright: [Dmytro Chaplynskyi](https://twitter.com/dchaplinsky), [lang-uk project](https://lang.org.ua), 2022
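A minimal usage sketch, assuming the packaged spaCy pipeline from this repo has been installed locally (the installation route and the example sentence are assumptions):

```python
import spacy

# Assumes the packaged pipeline has been pip-installed,
# e.g. from the wheel file shipped in this repository.
nlp = spacy.load("uk_ner_web_trf_base")

doc = nlp("Дмитро Чаплинський живе у Києві та працює над проєктом lang-uk.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```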
9d9bf0dc595cb9fc9f91cdbd9cda64bf
jonatasgrosman/exp_w2v2t_sv-se_hubert_s805
jonatasgrosman
hubert
10
6
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['sv-SE']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'sv-SE']
false
true
true
458
false
# exp_w2v2t_sv-se_hubert_s805 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
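A minimal transcription sketch with the Transformers ASR pipeline (the 16 kHz audio file path is illustrative):

```python
from transformers import pipeline

# The card notes that input audio must be sampled at 16 kHz
asr = pipeline("automatic-speech-recognition", model="jonatasgrosman/exp_w2v2t_sv-se_hubert_s805")

print(asr("sample_16khz.wav")["text"])
```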
bc613334f5a4062561aa573b8cc20594
muhtasham/small-mlm-glue-qqp-target-glue-mnli
muhtasham
bert
10
4
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,811
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-glue-qqp-target-glue-mnli This model is a fine-tuned version of [muhtasham/small-mlm-glue-qqp](https://huggingface.co/muhtasham/small-mlm-glue-qqp) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6551 - Accuracy: 0.7219 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - training_steps: 5000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9185 | 0.04 | 500 | 0.8285 | 0.6395 | | 0.8182 | 0.08 | 1000 | 0.7859 | 0.6628 | | 0.7779 | 0.12 | 1500 | 0.7475 | 0.6761 | | 0.7565 | 0.16 | 2000 | 0.7283 | 0.6913 | | 0.7477 | 0.2 | 2500 | 0.7180 | 0.6929 | | 0.7376 | 0.24 | 3000 | 0.7028 | 0.6964 | | 0.7185 | 0.29 | 3500 | 0.6840 | 0.7137 | | 0.7051 | 0.33 | 4000 | 0.6747 | 0.7190 | | 0.6785 | 0.37 | 4500 | 0.6846 | 0.7192 | | 0.685 | 0.41 | 5000 | 0.6551 | 0.7219 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
80b8fde502818cdfe64b3cc4f43c3dfd
kjunelee/bert-base-uncased-issues-128
kjunelee
bert
10
2
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,918
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-issues-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2314 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.193 | 1.0 | 146 | 1.7004 | | 1.7081 | 2.0 | 292 | 1.4895 | | 1.5458 | 3.0 | 438 | 1.4427 | | 1.4715 | 4.0 | 584 | 1.4081 | | 1.3944 | 5.0 | 730 | 1.3163 | | 1.3396 | 6.0 | 876 | 1.3200 | | 1.2945 | 7.0 | 1022 | 1.2785 | | 1.2652 | 8.0 | 1168 | 1.2473 | | 1.2332 | 9.0 | 1314 | 1.2321 | | 1.2042 | 10.0 | 1460 | 1.2162 | | 1.204 | 11.0 | 1606 | 1.1781 | | 1.1866 | 12.0 | 1752 | 1.2211 | | 1.1592 | 13.0 | 1898 | 1.2801 | | 1.1503 | 14.0 | 2044 | 1.1768 | | 1.1268 | 15.0 | 2190 | 1.1657 | | 1.1521 | 16.0 | 2336 | 1.2314 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0 - Datasets 2.2.3.dev0 - Tokenizers 0.12.1
1f79df289d8f25bc4590dda2092898e4
globuslabs/ScholarBERT_10_WB
globuslabs
bert
8
2
transformers
0
fill-mask
true
false
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['science', 'multi-displinary']
false
true
true
2,015
false
# ScholarBERT_10_WB Model This is the **ScholarBERT_10_WB** variant of the ScholarBERT model family. The model is pretrained on a large collection of scientific research articles (**22.1B tokens**). Additionally, the pretraining data includes Wikipedia+BookCorpus, which are used to pretrain the [BERT-base](https://huggingface.co/bert-base-cased) and [BERT-large](https://huggingface.co/bert-large-cased) models. This is a **cased** (case-sensitive) model. The tokenizer will not convert all inputs to lower-case by default. The model is based on the same architecture as [BERT-large](https://huggingface.co/bert-large-cased) and has a total of 340M parameters. # Model Architecture | Hyperparameter | Value | |-----------------|:-------:| | Layers | 24 | | Hidden Size | 1024 | | Attention Heads | 16 | | Total Parameters | 340M | # Training Dataset The vocab and the model are pretrained on **10% of the PRD** scientific literature dataset and Wikipedia+BookCorpus. The PRD dataset is provided by Public.Resource.Org, Inc. (“Public Resource”), a nonprofit organization based in California. This dataset was constructed from a corpus of journal article files, from which we successfully extracted the text of 75,496,055 articles from 178,928 journals. The articles span Arts & Humanities, Life Sciences & Biomedicine, Physical Sciences, Social Sciences, and Technology. The distribution of articles is shown below. ![corpus pie chart](https://huggingface.co/globuslabs/ScholarBERT/resolve/main/corpus_pie_chart.png) # BibTeX entry and citation info If using this model, please cite this paper: ``` @misc{hong2022scholarbert, doi = {10.48550/ARXIV.2205.11342}, url = {https://arxiv.org/abs/2205.11342}, author = {Hong, Zhi and Ajith, Aswathy and Pauloski, Gregory and Duede, Eamon and Malamud, Carl and Magoulas, Roger and Chard, Kyle and Foster, Ian}, title = {ScholarBERT: Bigger is Not Always Better}, publisher = {arXiv}, year = {2022} } ```
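A minimal masked-language-modelling sketch with this checkpoint (the example sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("globuslabs/ScholarBERT_10_WB")
model = AutoModelForMaskedLM.from_pretrained("globuslabs/ScholarBERT_10_WB")

text = f"The mitochondria is the {tokenizer.mask_token} of the cell."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Report the top prediction for the masked position
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_token = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(top_token))
```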
f9d79a5d34bf868e6936fe107f4637d8
facebook/opt-iml-max-1.3b
facebook
opt
9
2,818
transformers
19
text-generation
true
false
false
other
null
null
null
0
0
0
0
0
0
0
['text-generation', 'opt']
false
true
true
2,363
false
# OPT-IML ## Model Description [OPT-IML (OPT + Instruction Meta-Learning)](https://arxiv.org/abs/2212.12017) is a set of instruction-tuned versions of OPT, fine-tuned on a collection of ~2000 NLP tasks gathered from 8 NLP benchmarks, called OPT-IML Bench. We provide two model versions: * OPT-IML trained on 1500 tasks with several tasks held-out for purposes of downstream evaluation, and * OPT-IML-Max trained on all ~2000 tasks ### How to use You can use this model directly with a pipeline for text generation. ```python >>> from transformers import pipeline >>> generator = pipeline('text-generation', model="facebook/opt-iml-max-1.3b") >>> generator("What is the capital of USA?") ``` ### Limitations and bias While OPT-IML models outperform baseline OPT on an extensive set of evaluations, they are nevertheless susceptible to the various risks associated with using large language models relating to factual correctness, generation of toxic language and enforcing stereotypes. While we release our OPT-IML models to proliferate future work on instruction-tuning and to improve the availability of large instruction-tuned causal LMs, the use of these models should be accompanied by responsible best practices. ## Training data OPT-IML models are trained on OPT-IML Bench, a large benchmark for Instruction MetaLearning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, including Super-NaturalInstructions, FLAN, PromptSource, etc. ## Training procedure The texts are tokenized using the GPT2 byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens. The 30B model was fine-tuned on 64 40GB A100 GPUs. During fine-tuning, models saw approximately 2 billion tokens, which is only 0.6% of the pre-training budget of OPT. ### BibTeX entry and citation info ```bibtex @misc{iyer2022opt, title={OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization}, author={Iyer, Srinivasan and Lin, Xi Victoria and Pasunuru, Ramakanth and Mihaylov, Todor and Simig, D{\'a}niel and Yu, Ping and Shuster, Kurt and Wang, Tianlu and Liu, Qing and Koura, Punit Singh and others}, year={2022}, eprint={2212.12017}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
488578f5e48997be1dda89e5ac23099d
SetFit/distilbert-base-uncased__sst2__train-16-2
SetFit
distilbert
10
5
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,323
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__sst2__train-16-2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6748 - Accuracy: 0.6315 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7043 | 1.0 | 7 | 0.7054 | 0.2857 | | 0.6711 | 2.0 | 14 | 0.7208 | 0.2857 | | 0.6311 | 3.0 | 21 | 0.7365 | 0.2857 | | 0.551 | 4.0 | 28 | 0.7657 | 0.5714 | | 0.5599 | 5.0 | 35 | 0.6915 | 0.5714 | | 0.3167 | 6.0 | 42 | 0.7134 | 0.5714 | | 0.2489 | 7.0 | 49 | 0.7892 | 0.5714 | | 0.1985 | 8.0 | 56 | 0.6756 | 0.7143 | | 0.0864 | 9.0 | 63 | 0.8059 | 0.5714 | | 0.0903 | 10.0 | 70 | 0.8165 | 0.7143 | | 0.0429 | 11.0 | 77 | 0.7947 | 0.7143 | | 0.0186 | 12.0 | 84 | 0.8570 | 0.7143 | | 0.0146 | 13.0 | 91 | 0.9346 | 0.7143 | | 0.011 | 14.0 | 98 | 0.9804 | 0.7143 | | 0.0098 | 15.0 | 105 | 1.0136 | 0.7143 | | 0.0086 | 16.0 | 112 | 1.0424 | 0.7143 | | 0.0089 | 17.0 | 119 | 1.0736 | 0.7143 | | 0.0068 | 18.0 | 126 | 1.0808 | 0.7143 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
9aa3119849ffc16bd0bddcaeb5afb98a
juancavallotti/bert-zs-sentence-classifier
juancavallotti
bert
16
2
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
5,338
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-zs-sentence-classifier This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3663 - F1: 0.8483 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.5973 | 0.01 | 500 | 0.5186 | 0.7538 | | 0.5021 | 0.03 | 1000 | 0.4646 | 0.7996 | | 0.4741 | 0.04 | 1500 | 0.4634 | 0.8064 | | 0.4656 | 0.06 | 2000 | 0.4485 | 0.8142 | | 0.4567 | 0.07 | 2500 | 0.4345 | 0.8160 | | 0.4448 | 0.09 | 3000 | 0.4239 | 0.8228 | | 0.4403 | 0.1 | 3500 | 0.4155 | 0.8294 | | 0.4163 | 0.12 | 4000 | 0.4021 | 0.8290 | | 0.4205 | 0.13 | 4500 | 0.4057 | 0.8283 | | 0.416 | 0.14 | 5000 | 0.4049 | 0.8319 | | 0.4115 | 0.16 | 5500 | 0.4095 | 0.8280 | | 0.4156 | 0.17 | 6000 | 0.3927 | 0.8349 | | 0.4042 | 0.19 | 6500 | 0.4003 | 0.8392 | | 0.4057 | 0.2 | 7000 | 0.3929 | 0.8385 | | 0.3977 | 0.22 | 7500 | 0.3915 | 0.8406 | | 0.4049 | 0.23 | 8000 | 0.3785 | 0.8433 | | 0.4027 | 0.24 | 8500 | 0.3807 | 0.8424 | | 0.4096 | 0.26 | 9000 | 0.3768 | 0.8435 | | 0.3958 | 0.27 | 9500 | 0.3846 | 0.8420 | | 0.4037 | 0.29 | 10000 | 0.3808 | 0.8381 | | 0.3813 | 0.3 | 10500 | 0.4004 | 0.8415 | | 0.3934 | 0.32 | 11000 | 0.3821 | 0.8422 | | 0.3895 | 0.33 | 11500 | 0.3844 | 0.8428 | | 0.3907 | 0.35 | 12000 | 0.3847 | 0.8435 | | 0.3862 | 0.36 | 12500 | 0.3803 | 0.8431 | | 0.3958 | 0.37 | 13000 | 0.3739 | 0.8392 | | 0.3845 | 0.39 | 13500 | 0.3817 | 0.8422 | | 0.3914 | 0.4 | 14000 | 0.3857 | 0.8424 | | 0.3814 | 0.42 | 14500 | 0.3793 | 0.8438 | | 0.3816 | 0.43 | 15000 | 0.3843 | 0.8395 | | 0.4022 | 0.45 | 15500 | 0.3737 | 0.8436 | | 0.3879 | 0.46 | 16000 | 0.3750 | 0.8424 | | 0.3794 | 0.48 | 16500 | 0.3743 | 0.8410 | | 0.393 | 0.49 | 17000 | 0.3733 | 0.8461 | | 0.384 | 0.5 | 17500 | 0.3765 | 0.8476 | | 0.3782 | 0.52 | 18000 | 0.3748 | 0.8451 | | 0.3931 | 0.53 | 18500 | 0.3807 | 0.8454 | | 0.3889 | 0.55 | 19000 | 0.3653 | 0.8463 | | 0.386 | 0.56 | 19500 | 0.3707 | 0.8445 | | 0.3802 | 0.58 | 20000 | 0.3700 | 0.8474 | | 0.3883 | 0.59 | 20500 | 0.3646 | 0.8463 | | 0.3825 | 0.61 | 21000 | 0.3665 | 0.8513 | | 0.382 | 0.62 | 21500 | 0.3620 | 0.8508 | | 0.3795 | 0.63 | 22000 | 0.3692 | 0.8493 | | 0.367 | 0.65 | 22500 | 0.3704 | 0.8479 | | 0.3825 | 0.66 | 23000 | 0.3723 | 0.8472 | | 0.3902 | 0.68 | 23500 | 0.3681 | 0.8465 | | 0.3813 | 0.69 | 24000 | 0.3668 | 0.8515 | | 0.3878 | 0.71 | 24500 | 0.3632 | 0.8506 | | 0.3743 | 0.72 | 25000 | 0.3728 | 0.8463 | | 0.3826 | 0.73 | 25500 | 0.3746 | 0.8465 | | 0.3892 | 0.75 | 26000 | 0.3602 | 0.8518 | | 0.3767 | 0.76 | 26500 | 0.3722 | 0.8513 | | 0.3724 | 0.78 | 27000 | 0.3716 | 0.8499 | | 0.3767 | 0.79 | 27500 | 0.3651 | 0.8483 | | 0.3846 | 0.81 | 28000 | 0.3753 | 0.8493 | | 0.3748 | 0.82 | 28500 | 0.3720 | 0.8458 | | 0.3768 | 0.84 | 29000 | 0.3663 | 0.8508 | | 0.3716 | 0.85 | 29500 | 
0.3635 | 0.8531 | | 0.3673 | 0.86 | 30000 | 0.3659 | 0.8485 | | 0.3805 | 0.88 | 30500 | 0.3608 | 0.8518 | | 0.3718 | 0.89 | 31000 | 0.3695 | 0.8520 | | 0.374 | 0.91 | 31500 | 0.3631 | 0.8485 | | 0.3871 | 0.92 | 32000 | 0.3659 | 0.8485 | | 0.3724 | 0.94 | 32500 | 0.3584 | 0.8518 | | 0.3756 | 0.95 | 33000 | 0.3587 | 0.8492 | | 0.3709 | 0.97 | 33500 | 0.3700 | 0.8488 | | 0.376 | 0.98 | 34000 | 0.3657 | 0.8492 | | 0.372 | 0.99 | 34500 | 0.3663 | 0.8483 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
858d0aecc9dae03b9ce15199b11d12d1
Geotrend/bert-base-da-cased
Geotrend
bert
8
2
transformers
0
fill-mask
true
true
true
apache-2.0
['da']
['wikipedia']
null
0
0
0
0
0
0
0
[]
false
true
true
1,283
false
# bert-base-da-cased We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages. Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-da-cased") model = AutoModel.from_pretrained("Geotrend/bert-base-da-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermbert, title={Load What You Need: Smaller Versions of Mutlilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact amine@geotrend.fr for any question, feedback or request.
cfbefcebd8aefd21a491a4643cf92d17
keras-io/ppo-cartpole
keras-io
null
7
4
keras
0
null
false
false
false
['cc0-1.0']
null
null
null
0
0
0
0
0
0
0
['reinforcement learning', 'proximal policy optimization']
false
true
true
1,719
false
## Keras Implementation of Proximal Policy Optimization on Cartpole Environment 🔨🤖 This repo contains the model and the notebook for [this Keras example on PPO for CartPole](https://keras.io/examples/rl/ppo_cartpole/). Full credits to: Ilias Chrysovergis ![cartpole_gif](https://i.imgur.com/tKhTEaF.gif) ## Background Information ### CartPole-v0 A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, when the cart moves more than 2.4 units from the center, or after 200 steps, so the highest possible return is 200. ### Proximal Policy Optimization PPO is a policy gradient method and can be used for environments with either discrete or continuous action spaces. It trains a stochastic policy in an on-policy way and uses the actor-critic method. The actor maps the observation to an action and the critic estimates the expected return for the given observation. First, it collects a set of trajectories for each epoch by sampling from the latest version of the stochastic policy. Then, the rewards-to-go and the advantage estimates are computed in order to update the policy and fit the value function. The policy is updated via a stochastic gradient ascent optimizer, while the value function is fitted via some gradient descent algorithm. This procedure is applied for many epochs until the environment is solved.
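The clipped surrogate objective at the heart of the policy update described above can be written compactly; a minimal TensorFlow sketch (the helper name and the 0.2 clip ratio are assumptions, not part of the original notebook):

```python
import tensorflow as tf

def ppo_policy_loss(log_probs_new, log_probs_old, advantages, clip_ratio=0.2):
    """Clipped PPO surrogate loss for a batch of (state, action) samples.

    log_probs_new: log pi(a|s) under the current policy
    log_probs_old: log pi(a|s) recorded when the trajectories were collected
    advantages:    advantage estimates for the same samples
    """
    ratio = tf.exp(log_probs_new - log_probs_old)            # pi_new / pi_old
    clipped = tf.clip_by_value(ratio, 1.0 - clip_ratio, 1.0 + clip_ratio)
    # Maximising the surrogate objective = minimising its negation
    return -tf.reduce_mean(tf.minimum(ratio * advantages, clipped * advantages))
```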
fd40c8135a73f1939fc9e629d78e7028
DataikuNLP/distiluse-base-multilingual-cased-v1
DataikuNLP
distilbert
14
2
sentence-transformers
0
sentence-similarity
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
true
true
2,424
false
# DataikuNLP/distiluse-base-multilingual-cased-v1 **This model is a copy of [this model repository](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1) from sentence-transformers at the specific commit `3a706e4d65c04f868c4684adfd4da74141be8732`.** This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/distiluse-base-multilingual-cased-v1') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/distiluse-base-multilingual-cased-v1) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
52fa5d85a80ed439939ecbb1bb6ee62e
jayanta/mit-b2-fv-finetuned-memes
jayanta
segformer
12
9
transformers
0
image-classification
true
false
false
other
null
['imagefolder']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,198
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mit-b2-fv-finetuned-memes This model is a fine-tuned version of [nvidia/mit-b2](https://huggingface.co/nvidia/mit-b2) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5984 - Accuracy: 0.8323 - Precision: 0.8312 - Recall: 0.8323 - F1: 0.8315 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00012 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.3683 | 0.99 | 20 | 1.1798 | 0.5703 | 0.4914 | 0.5703 | 0.4915 | | 1.0113 | 1.99 | 40 | 1.0384 | 0.6159 | 0.6813 | 0.6159 | 0.6274 | | 0.7581 | 2.99 | 60 | 0.8348 | 0.6808 | 0.7377 | 0.6808 | 0.6840 | | 0.6241 | 3.99 | 80 | 0.6034 | 0.7713 | 0.7864 | 0.7713 | 0.7735 | | 0.4999 | 4.99 | 100 | 0.5481 | 0.7944 | 0.8000 | 0.7944 | 0.7909 | | 0.3981 | 5.99 | 120 | 0.5253 | 0.8022 | 0.8091 | 0.8022 | 0.8000 | | 0.3484 | 6.99 | 140 | 0.4688 | 0.8238 | 0.8147 | 0.8238 | 0.8146 | | 0.3142 | 7.99 | 160 | 0.6245 | 0.7867 | 0.8209 | 0.7867 | 0.7920 | | 0.2339 | 8.99 | 180 | 0.5053 | 0.8362 | 0.8426 | 0.8362 | 0.8355 | | 0.2284 | 9.99 | 200 | 0.5070 | 0.8230 | 0.8220 | 0.8230 | 0.8187 | | 0.1824 | 10.99 | 220 | 0.5780 | 0.8006 | 0.8138 | 0.8006 | 0.8035 | | 0.1561 | 11.99 | 240 | 0.5429 | 0.8253 | 0.8197 | 0.8253 | 0.8218 | | 0.1229 | 12.99 | 260 | 0.5325 | 0.8331 | 0.8296 | 0.8331 | 0.8303 | | 0.1232 | 13.99 | 280 | 0.5595 | 0.8277 | 0.8290 | 0.8277 | 0.8273 | | 0.118 | 14.99 | 300 | 0.5974 | 0.8292 | 0.8345 | 0.8292 | 0.8299 | | 0.11 | 15.99 | 320 | 0.5796 | 0.8253 | 0.8228 | 0.8253 | 0.8231 | | 0.0948 | 16.99 | 340 | 0.5581 | 0.8346 | 0.8358 | 0.8346 | 0.8349 | | 0.0985 | 17.99 | 360 | 0.5700 | 0.8338 | 0.8301 | 0.8338 | 0.8318 | | 0.0821 | 18.99 | 380 | 0.5756 | 0.8331 | 0.8343 | 0.8331 | 0.8335 | | 0.0813 | 19.99 | 400 | 0.5984 | 0.8323 | 0.8312 | 0.8323 | 0.8315 | ### Framework versions - Transformers 4.24.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.6.1.dev0 - Tokenizers 0.13.1
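A minimal classification sketch with the Transformers image-classification pipeline (the image path is illustrative):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jayanta/mit-b2-fv-finetuned-memes")

# Print the top predicted meme categories with their scores
for prediction in classifier("meme.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```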
625ab5b634469adbd4a317cba3f0357a
Geotrend/distilbert-base-en-pt-cased
Geotrend
distilbert
6
3
transformers
0
fill-mask
true
false
false
apache-2.0
['multilingual']
['wikipedia']
null
1
1
0
0
0
0
0
[]
false
true
true
1,224
false
# distilbert-base-en-pt-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations produced by the original model which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-pt-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-pt-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermdistilbert, title={Load What You Need: Smaller Versions of Mutlilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact amine@geotrend.fr for any question, feedback or request.
4f656d23c6179bd2343aeb4af96926e4
stjiris/bert-large-portuguese-cased-legal-tsdae
stjiris
bert
13
7
sentence-transformers
2
fill-mask
true
false
false
mit
['pt']
['rufimelo/PortugueseLegalSentences-v0']
null
0
0
0
0
0
0
0
['sentence-transformers', 'transformers', 'bert', 'pytorch']
false
true
true
4,966
false
[![INESC-ID](https://www.inesc-id.pt/wp-content/uploads/2019/06/INESC-ID-logo_01.png)](https://www.inesc-id.pt/projects/PR07005/) [![A Semantic Search System for Supremo Tribunal de Justiça](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/_static/logo.png)](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/) Work developed as part of [Project IRIS](https://www.inesc-id.pt/projects/PR07005/). Thesis: [A Semantic Search System for Supremo Tribunal de Justiça](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/) # stjiris/bert-large-portuguese-cased-legal-tsdae (Legal BERTimbau) This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. stjiris/bert-large-portuguese-cased-legal-tsdae derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large. It was trained using the TSDAE technique with a learning rate 1e-5 [Legal Sentences from +-30000 documents](https://huggingface.co/datasets/stjiris/portuguese-legal-sentences-v1.0) 212k training steps (best performance for our semantic search system implementation) ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Isto é um exemplo", "Isto é um outro exemplo"] model = SentenceTransformer('stjiris/bert-large-portuguese-cased-legal-tsdae') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('stjiris/bert-large-portuguese-cased-legal-tsdae') model = AutoModel.from_pretrained('stjiris/bert-large-portuguese-cased-legal-tsdae') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1028, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors ### Contributions [@rufimelo99](https://github.com/rufimelo99) If you use this work, please cite: ```bibtex @inproceedings{MeloSemantic, author = {Melo, Rui and Santos, Professor Pedro Alexandre and Dias, Professor Jo{\~ a}o}, title = {A {Semantic} {Search} {System} for {Supremo} {Tribunal} de {Justi}{\c c}a}, } @inproceedings{souza2020bertimbau, author = {F{\'a}bio Souza and Rodrigo Nogueira and Roberto Lotufo}, title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese}, booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)}, year = {2020} } @inproceedings{fonseca2016assin, title={ASSIN: Avaliacao de similaridade semantica e inferencia textual}, author={Fonseca, E and Santos, L and Criscuolo, Marcelo and Aluisio, S}, booktitle={Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal}, pages={13--15}, year={2016} } @inproceedings{real2020assin, title={The assin 2 shared task: a quick overview}, author={Real, Livy and Fonseca, Erick and Oliveira, Hugo Goncalo}, booktitle={International Conference on Computational Processing of the Portuguese Language}, pages={406--412}, year={2020}, organization={Springer} } @InProceedings{huggingface:dataset:stsb_multi_mt, title = {Machine translated multilingual STS benchmark dataset.}, author={Philip May}, year={2021}, url={https://github.com/PhilipMay/stsb-multi-mt} } ```
ac1c73d919ac0c44df7cd18d8694b6fe
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-8
anas-awadalla
bert
16
5
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,001
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-few-shot-k-128-finetuned-squad-seed-8 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
1bfbf7a4da426c5e3c9fe4a2c7c14587
KarelDO/gpt2.CEBaB_confounding.price_food_ambiance_negative.absa.5-class.seed_42
KarelDO
gpt2
15
2
transformers
0
null
true
false
false
mit
['en']
['OpenTable']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,106
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2.CEBaB_confounding.price_food_ambiance_negative.absa.5-class.seed_42 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the OpenTable OPENTABLE-ABSA dataset. It achieves the following results on the evaluation set: - Loss: 0.4726 - Accuracy: 0.8311 - Macro-f1: 0.8295 - Weighted-macro-f1: 0.8313 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.5.2 - Tokenizers 0.12.1
1f401d3f63c650e292bb3fd58695e988
hassnain/wav2vec2-base-timit-demo-colab2
hassnain
wav2vec2
19
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,462
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab2 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2355 - Wer: 0.7320 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 60 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.851 | 13.89 | 500 | 3.1260 | 1.0 | | 1.9721 | 27.78 | 1000 | 1.2435 | 0.7992 | | 0.5749 | 41.67 | 1500 | 1.1662 | 0.7374 | | 0.291 | 55.56 | 2000 | 1.2355 | 0.7320 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
b6aaf64a5a54af6c2f2e322ac456333f
Sohaibsyed/wav2vec2-large-xls-r-300m-turkish-colab
Sohaibsyed
wav2vec2
13
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,791
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3717 - Wer: 0.2972 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.0139 | 3.67 | 400 | 0.7020 | 0.7112 | | 0.4129 | 7.34 | 800 | 0.4162 | 0.4503 | | 0.1869 | 11.01 | 1200 | 0.4174 | 0.3959 | | 0.1273 | 14.68 | 1600 | 0.4020 | 0.3695 | | 0.0959 | 18.35 | 2000 | 0.4026 | 0.3545 | | 0.0771 | 22.02 | 2400 | 0.3904 | 0.3361 | | 0.0614 | 25.69 | 2800 | 0.3736 | 0.3127 | | 0.0486 | 29.36 | 3200 | 0.3717 | 0.2972 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
0ddc896de64184a211e6af75983a14a2
sashketka/roberta-finetuned-subjqa-books_2
sashketka
roberta
13
12
transformers
0
question-answering
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
950
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-subjqa-books_2 This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
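No usage section is included above. A minimal extractive question-answering sketch with the `transformers` pipeline (the question and context are illustrative) might be:

```python
from transformers import pipeline

# Extractive QA with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="sashketka/roberta-finetuned-subjqa-books_2")

result = qa(
    question="How is the plot described?",
    context="The plot is slow at first, but it builds into a gripping mystery by the final chapters.",
)
print(result["answer"], result["score"])
```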
d03892c013f617c6cd6bd4594a0cd6fe
jjglilleberg/xlm-roberta-base-finetuned-panx-all
jjglilleberg
xlm-roberta
9
3
transformers
0
token-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,417
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1917 - F1: 0.8522 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 1251 | 0.2072 | 0.8084 | | No log | 2.0 | 2502 | 0.1911 | 0.8350 | | No log | 3.0 | 3753 | 0.1917 | 0.8522 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
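The card above does not show inference. A minimal named-entity-recognition sketch with the `transformers` token-classification pipeline (the input sentence is illustrative) could be:

```python
from transformers import pipeline

# NER with the fine-tuned multilingual checkpoint; aggregation merges word pieces
# back into whole entities.
ner = pipeline(
    "token-classification",
    model="jjglilleberg/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)

print(ner("Angela Merkel besuchte Paris im Mai."))
```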
3acf224ac6633584c95e2d93cf945b15
Helsinki-NLP/opus-mt-ja-tr
Helsinki-NLP
marian
11
10
transformers
0
translation
true
true
false
apache-2.0
['ja', 'tr']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,125
false
### jpn-tur * source group: Japanese * target group: Turkish * OPUS readme: [jpn-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-tur/README.md) * model: transformer-align * source language(s): jpn jpn_Bopo jpn_Hang jpn_Hani jpn_Hira jpn_Kana jpn_Yiii * target language(s): tur * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.jpn.tur | 16.7 | 0.434 | ### System Info: - hf_name: jpn-tur - source_languages: jpn - target_languages: tur - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/jpn-tur/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ja', 'tr'] - src_constituents: {'jpn_Hang', 'jpn', 'jpn_Yiii', 'jpn_Kana', 'jpn_Hani', 'jpn_Bopo', 'jpn_Latn', 'jpn_Hira'} - tgt_constituents: {'tur'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/jpn-tur/opus-2020-06-17.test.txt - src_alpha3: jpn - tgt_alpha3: tur - short_pair: ja-tr - chrF2_score: 0.434 - bleu: 16.7 - brevity_penalty: 0.932 - ref_len: 4755.0 - src_name: Japanese - tgt_name: Turkish - train_date: 2020-06-17 - src_alpha2: ja - tgt_alpha2: tr - prefer_old: False - long_pair: jpn-tur - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
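A usage example is not part of the card. A minimal sketch with the standard MarianMT classes (the Japanese sentence is illustrative) might look like:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ja-tr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate Japanese into Turkish.
batch = tokenizer(["私は猫が好きです。"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```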
6bd173b8f18aacaa0e5b74b78670feee
Alireza1044/mobilebert_QNLI
Alireza1044
mobilebert
17
15
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,017
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qnli This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3731 - Accuracy: 0.9068 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0 - Datasets 2.2.2 - Tokenizers 0.12.1
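The card gives no inference example. QNLI is a sentence-pair task (does the sentence answer the question?), so a minimal sketch could pass the question and candidate sentence together; note that the exported label names may be generic (`LABEL_0`/`LABEL_1`) depending on the config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "Alireza1044/mobilebert_QNLI"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Encode the (question, sentence) pair as a single input.
inputs = tokenizer(
    "Where is the Eiffel Tower located?",
    "The Eiffel Tower is located in Paris, France.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

pred = int(logits.argmax(dim=-1))
print(model.config.id2label[pred])
```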
37a3ad98f70136dab1f01535e1205865
shibing624/songnet-base-chinese-couplet
shibing624
null
5
0
null
1
null
true
false
false
apache-2.0
['zh']
null
null
0
0
0
0
0
0
0
['SongNet', 'pytorch', 'zh', 'Text2Text-Generation']
false
true
true
2,778
false
# SongNet for Chinese Couplet(songnet-base-chinese-couplet) Model SongNet中文对联仿写模型 `songnet-base-chinese-couplet` evaluate couplet test data: The overall performance of SongNet on couplet **test**: |input_text|predict| |:--- |:--- | |一句相思吟岁月,千杯美酒醉风情|一生只剩诗和酒,满腹无关雪与梅| 在Couplet测试集上生成结果满足字数相同、词性对齐、词面对齐、形似要求,针对性的SongNet网络结构,在语义对仗工整和平仄合律上的效果明显优于T5和GPT2等模型。 SongNet的网络结构: ![arch](songnet-network.png) ## Usage 本项目开源在文本生成项目:[textgen](https://github.com/shibing624/textgen),可支持SongNet模型,通过如下命令调用: Install package: ```shell pip install -U textgen ``` ```python import sys sys.path.append('..') from textgen.language_modeling import SongNetModel model = SongNetModel(model_type='songnet', model_name='shibing624/songnet-base-chinese-couplet') sentences = [ "严蕊<s1>如梦令<s2>道是梨花不是。</s>道是杏花不是。</s>白白与红红,别是东风情味。</s>曾记。</s>曾记。</s>人在武陵微醉。", "<s1><s2>一句相思吟岁月</s>千杯美酒醉风情", "<s1><s2>几树梅花数竿竹</s>一潭秋水半屏山" "<s1><s2>未舍东江开口咏</s>且施妙手点睛来", "<s1><s2>一去二三里</s>烟村四五家", ] print("inputs:", sentences) print("outputs:", model.generate(sentences)) sentences = [ "<s1><s2>一句____月</s>千杯美酒__情", "<s1><s2>一去二三里</s>烟村__家</s>亭台__座</s>八__枝花", ] print("inputs:", sentences) print("outputs:", model.fill_mask(sentences)) ``` output: ```shell inputs: ['严蕊<s1>如梦令<s2>道是梨花不是。</s>道是杏花不是。</s>白白与红红,别是东风情味。</s>曾记。</s>曾记。</s>人在武陵微醉。', '<s1><s2>一句相思吟岁月</s>千杯美酒醉风情', '<s1><s2>几树梅花数竿竹</s>一潭秋水半屏山<s1><s2>未舍东江开口咏</s>且施妙手点睛来', '<s1><s2>一去二三里</s>烟村四五家'] outputs: ['<bos>盛世欣开新气象</s>春联喜绘大文章</s>春天铺锦笺,宏图更写好山山</s>新篇章</s>新篇章</s>神州高唱好年华</s>', '<bos>一曲琴音添雅韵</s>几回酒醉解愁思</s>', '<bos>三分天下隆中对</s>四面八方九派江山笔底留</s>', '<bos>春深花已老</s>夜静露方浓</s>'] inputs: ['<s1><s2>一句____月</s>千杯美酒__情', '<s1><s2>一去二三里</s>烟村__家</s>亭台__座</s>八__枝花'] outputs: ['<bos>一句佳诗吟盛月</s>千杯美酒祝春情</s>', '<bos>一去二三里</s>烟村百二家</s>亭台十二座</s>八里一枝花</s>'] ``` 模型文件组成: ``` songnet-base-chinese-couplet ├── pytorch_model.bin └── vocab.txt ``` ### 训练数据集 #### 中文对联数据集 - 数据:[对联github](https://github.com/wb14123/couplet-dataset)、[清洗过的对联github](https://github.com/v-zich/couplet-clean-dataset) - 相关内容 - [Huggingface](https://huggingface.co/) - [SongNet paper](https://aclanthology.org/2020.acl-main.68/) - [textgen](https://github.com/shibing624/textgen) 数据格式: ```text head -n 1 couplet_files/couplet/train/in.txt 晚 风 摇 树 树 还 挺 head -n 1 couplet_files/couplet/train/out.txt 晨 露 润 花 花 更 红 ``` 如果需要训练SongNet模型,请参考[https://github.com/shibing624/textgen/blob/main/examples/language_generation/training_zh_songnet_demo.py](https://github.com/shibing624/textgen/blob/main/examples/language_generation/training_zh_songnet_demo.py) ## Citation ```latex @software{textgen, author = {Xu Ming}, title = {textgen: Implementation of Text Generation models}, year = {2022}, url = {https://github.com/shibing624/textgen}, } ```
16272737058c8a7466fbdf0fc688ec0b
Mingyi/classify_title_subject
Mingyi
bert
8
3
transformers
1
text-classification
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
4,464
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tmp6tsjsfbf This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0178 - Train Sparse Categorical Accuracy: 0.9962 - Epoch: 49 ## Model description This model classifies the title of a content (e.g., YouTube video, article, or podcast episode) into 1 of 8 subjects 0. art 1. personal development 2. world 3. health 4. science 5. business 6. humanities 7. technology. This model is used to support [Sanderling](https://sanderling.app) ## Intended uses & limitations More information needed ## Training and evaluation data We used 1.5k labeled titles to train the model. Majority of the training dataset are English titles. The rest are Chinese titles. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:-----:| | 1.8005 | 0.3956 | 0 | | 1.3302 | 0.5916 | 1 | | 0.8998 | 0.7575 | 2 | | 0.6268 | 0.8468 | 3 | | 0.4239 | 0.9062 | 4 | | 0.2982 | 0.9414 | 5 | | 0.2245 | 0.9625 | 6 | | 0.1678 | 0.9730 | 7 | | 0.1399 | 0.9745 | 8 | | 0.1059 | 0.9827 | 9 | | 0.0822 | 0.9850 | 10 | | 0.0601 | 0.9902 | 11 | | 0.0481 | 0.9932 | 12 | | 0.0386 | 0.9955 | 13 | | 0.0292 | 0.9977 | 14 | | 0.0353 | 0.9940 | 15 | | 0.0336 | 0.9932 | 16 | | 0.0345 | 0.9910 | 17 | | 0.0179 | 0.9985 | 18 | | 0.0150 | 0.9985 | 19 | | 0.0365 | 0.9895 | 20 | | 0.0431 | 0.9895 | 21 | | 0.0243 | 0.9955 | 22 | | 0.0317 | 0.9925 | 23 | | 0.0375 | 0.9902 | 24 | | 0.0138 | 0.9970 | 25 | | 0.0159 | 0.9977 | 26 | | 0.0160 | 0.9962 | 27 | | 0.0151 | 0.9977 | 28 | | 0.0337 | 0.9902 | 29 | | 0.0119 | 0.9977 | 30 | | 0.0165 | 0.9955 | 31 | | 0.0133 | 0.9977 | 32 | | 0.0047 | 1.0 | 33 | | 0.0037 | 1.0 | 34 | | 0.0033 | 1.0 | 35 | | 0.0031 | 1.0 | 36 | | 0.0036 | 1.0 | 37 | | 0.0343 | 0.9887 | 38 | | 0.0234 | 0.9962 | 39 | | 0.0034 | 1.0 | 40 | | 0.0036 | 1.0 | 41 | | 0.0261 | 0.9917 | 42 | | 0.0111 | 0.9970 | 43 | | 0.0039 | 1.0 | 44 | | 0.0214 | 0.9932 | 45 | | 0.0044 | 0.9985 | 46 | | 0.0122 | 0.9985 | 47 | | 0.0119 | 0.9962 | 48 | | 0.0178 | 0.9962 | 49 | ### Framework versions - Transformers 4.15.0 - TensorFlow 2.7.0 - Tokenizers 0.10.3
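The card lists the label set but not how to run the model. A minimal TensorFlow sketch, assuming the checkpoint exposes a standard sequence-classification head and bundles its tokenizer (the example title is illustrative), might be:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_name = "Mingyi/classify_title_subject"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSequenceClassification.from_pretrained(model_name)

# Subject ids follow the order listed in the card (0 = art ... 7 = technology).
labels = ["art", "personal development", "world", "health",
          "science", "business", "humanities", "technology"]

inputs = tokenizer("How quantum computers factor large numbers", return_tensors="tf")
logits = model(**inputs).logits
print(labels[int(tf.argmax(logits, axis=-1)[0])])
```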
71b77fa94b584f9c5ae4f56106e878b9
Helsinki-NLP/opus-mt-et-sv
Helsinki-NLP
marian
10
21
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
false
### opus-mt-et-sv * source languages: et * target languages: sv * OPUS readme: [et-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/et-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/et-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/et-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.et.sv | 28.9 | 0.513 |
f864de9e1458f46942e1225c9e598512
espnet/akreal_swbd_da_hubert_conformer
espnet
null
23
0
espnet
0
automatic-speech-recognition
false
false
false
cc-by-4.0
['en']
['swbd_da']
null
0
0
0
0
0
0
0
['espnet', 'audio', 'automatic-speech-recognition']
false
true
true
7,160
false
## ESPnet2 ASR model ### `akreal/espnet2_swbd_da_hubert_conformer` This model was trained by Pavel Denisov using swbd_da recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 08c6efbc6299c972301236625f9abafe087c9f9c pip install -e . cd egs2/swbd_da/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/akreal_swbd_da_hubert_conformer ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Thu Jan 20 19:31:21 CET 2022` - python version: `3.8.12 (default, Aug 30 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.10.1+cu113` - Git hash: `08c6efbc6299c972301236625f9abafe087c9f9c` - Commit date: `Tue Jan 4 13:40:33 2022 +0100` ## asr_train_asr_raw_en_word_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.loss.ave/test_context3|2379|2379|66.3|33.7|0.0|0.0|33.7|33.7| |decode_asr_asr_model_valid.loss.ave/valid_context3|8116|8116|69.5|30.5|0.0|0.0|30.5|30.5| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.loss.ave/test_context3|2379|19440|76.1|17.7|6.2|8.1|32.0|33.7| |decode_asr_asr_model_valid.loss.ave/valid_context3|8116|66353|79.5|16.1|4.4|8.0|28.5|30.5| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_conformer_hubert_context3.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer_hubert_context3_raw_en_word_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 35 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min keep_nbest_models: 7 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: - frontend.upstream num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 4000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_context3_raw_en_word_sp/train/speech_shape - exp/asr_stats_context3_raw_en_word_sp/train/text_shape.word valid_shape_file: - exp/asr_stats_context3_raw_en_word_sp/valid/speech_shape - exp/asr_stats_context3_raw_en_word_sp/valid/text_shape.word batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_context3_sp/wav.scp - speech - sound - - dump/raw/train_context3_sp/text - text - text 
valid_data_path_and_name_and_type: - - dump/raw/valid_context3/wav.scp - speech - sound - - dump/raw/valid_context3/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0001 scheduler: warmuplr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - statement - backchannel - opinion - abandon - agree - yn_q - apprec - 'yes' - uninterp - close - wh_q - acknowledge - 'no' - yn_decl_q - hedge - backchannel_q - sum - quote - affirm - other - directive - repeat - open_q - completion - rhet_q - hold - reject - answer - neg - ans_dispref - repeat_q - open - or - commit - maybe - decl_q - third_pty - self_talk - thank - apology - tag_q - downplay - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.0 extract_feats_in_collect_stats: false use_preprocessor: true token_type: word bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: s3prl frontend_conf: frontend_conf: upstream: hubert_large_ll60k download_dir: ./hub multilayer_feature: true fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} preencoder: linear preencoder_conf: input_size: 1024 output_size: 80 encoder: conformer encoder_conf: output_size: 512 attention_heads: 8 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.5a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
59f7ac2aeb2bc441917397fc31a1977f
Helsinki-NLP/opus-mt-ru-ar
Helsinki-NLP
marian
11
28
transformers
0
translation
true
true
false
apache-2.0
['ru', 'ar']
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
2,161
false
### rus-ara * source group: Russian * target group: Arabic * OPUS readme: [rus-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-ara/README.md) * model: transformer * source language(s): rus * target language(s): apc ara arz * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.rus.ara | 16.6 | 0.486 | ### System Info: - hf_name: rus-ara - source_languages: rus - target_languages: ara - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-ara/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ru', 'ar'] - src_constituents: {'rus'} - tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-ara/opus-2020-07-03.test.txt - src_alpha3: rus - tgt_alpha3: ara - short_pair: ru-ar - chrF2_score: 0.486 - bleu: 16.6 - brevity_penalty: 0.9690000000000001 - ref_len: 18878.0 - src_name: Russian - tgt_name: Arabic - train_date: 2020-07-03 - src_alpha2: ru - tgt_alpha2: ar - prefer_old: False - long_pair: rus-ara - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
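Since the target side covers several Arabic variants, inference needs the sentence-initial `>>id<<` token mentioned above. A minimal MarianMT sketch (the Russian sentence is illustrative) could be:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ru-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Select the target variant with a sentence-initial token, e.g. >>ara<<
# (>>apc<< and >>arz<< are also listed as valid target constituents).
src = ">>ara<< Я люблю читать книги."
batch = tokenizer([src], return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```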
9ae719492130817a460b8462d335d869
theojolliffe/bart-paraphrase-v4-e1-feedback-feedback-e1
theojolliffe
bart
12
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,295
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-paraphrase-v4-e1-feedback-feedback-e1 This model is a fine-tuned version of [theojolliffe/bart-paraphrase-v4-e1-feedback](https://huggingface.co/theojolliffe/bart-paraphrase-v4-e1-feedback) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 34 | 2.9415 | 60.8992 | 38.9444 | 51.1386 | 52.0048 | 19.75 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0 - Datasets 1.18.0 - Tokenizers 0.10.3
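No usage example is given above. A minimal sketch with the `transformers` text2text-generation pipeline (the input sentence is illustrative) might look like:

```python
from transformers import pipeline

# Sequence-to-sequence generation with the fine-tuned BART checkpoint.
generator = pipeline(
    "text2text-generation",
    model="theojolliffe/bart-paraphrase-v4-e1-feedback-feedback-e1",
)

text = "The project was delivered on time and the client was satisfied with the results."
print(generator(text, max_length=60)[0]["generated_text"])
```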
5a173aa976927657636bd78f2c864659
darkprincess638/HD-Wallpaper-Nature
darkprincess638
null
4
0
null
1
null
false
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
699
false
## Important

This model generates 16:9, higher-detail wallpaper images. You **cannot use 512x512**; you **must use a minimum of 1024x576 or higher**. High-res fix is recommended when going higher, using 1024x576 as the basis.

## Details

* Begin the prompt with `hd-wallpaper nature` to activate this model
* Use a minimum size of 1024x576
* You can add keywords like `forest`, `mountain`, `cliff`, `hill`, `beach`, etc.

## Images made with this model

These were generated with high-res fix, starting at 1024x576 with a 1.25x (25%) upscale, resulting in 1280x720.

![River](https://i.imgur.com/ElvQi1F.png) ![Ocean](https://i.imgur.com/syzJQMk.png) ![Snow Forest](https://i.imgur.com/kDNsRaT.png)
e52540197118614340ffd7fd8ea79554
lmqg/mbart-large-cc25-frquad-qg-ae
lmqg
mbart
20
2
transformers
0
text2text-generation
true
false
false
cc-by-4.0
['fr']
['lmqg/qg_frquad']
null
0
0
0
0
0
0
0
['question generation', 'answer extraction']
true
true
true
7,671
false
# Model Card of `lmqg/mbart-large-cc25-frquad-qg-ae` This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for question generation and answer extraction jointly on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) - **Language:** fr - **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="fr", model="lmqg/mbart-large-cc25-frquad-qg-ae") # model prediction question_answer_pairs = model.generate_qa("Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-frquad-qg-ae") # question generation (input highlights the answer span) question = pipe("generate question: Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême Berger <hl> » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.") # answer extraction (input highlights the target span) answer = pipe("extract answers: Pourtant, la strophe spensérienne, utilisée cinq fois avant que ne commence le chœur, constitue en soi un vecteur dont les répétitions structurelles, selon Ricks, relèvent du pur lyrisme tout en constituant une menace potentielle.
Après les huit sages pentamètres iambiques, l'alexandrin final <hl> permet une pause <hl>, « véritable illusion d'optique » qu'accentuent les nombreuses expressions archaïsantes telles que did swoon, did seem, did go, did receive, did make, qui doublent le prétérit en un temps composé et paraissent à la fois « très précautionneuses et très peu pressées ».") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 72.56 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_1 | 16.16 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_2 | 4.88 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_3 | 1.85 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_4 | 0.91 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | METEOR | 8.56 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | MoverScore | 50.46 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | ROUGE_L | 18.54 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_frquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 77.72 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedF1Score (MoverScore) | 51.65 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedPrecision (BERTScore) | 76.9 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedPrecision (MoverScore) | 51.15 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedRecall (BERTScore) | 78.58 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedRecall (MoverScore) | 52.16 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_frquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 0 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | AnswerF1Score | 3.66 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | BERTScore | 58.41 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_1 | 2.56 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_2 | 0.76 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_3 | 0 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_4 | 0 | default | 
[lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | METEOR | 3.24 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | MoverScore | 45.72 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | ROUGE_L | 3.48 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_frquad - dataset_name: default - input_types: ['paragraph_answer', 'paragraph_sentence'] - output_types: ['question', 'answer'] - prefix_types: ['qg', 'ae'] - model: facebook/mbart-large-cc25 - max_length: 512 - max_length_output: 32 - epoch: 5 - batch: 2 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 32 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qg-ae/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
c5fa38964ae6af9c3b5e9bdc3f0f39c3
tftgregrge/evgengrcivface
tftgregrge
null
18
2
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
1
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
619
false
### evgengrcivface Dreambooth model trained by tftgregrge with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
0a986e7fc25298e5c79924f74a29ff75
yanaiela/roberta-base-epoch_7
yanaiela
roberta
9
2
transformers
0
fill-mask
true
false
false
mit
['en']
['wikipedia', 'bookcorpus']
null
0
0
0
0
0
0
0
['roberta-base', 'roberta-base-epoch_7']
false
true
true
2,100
false
# RoBERTa, Intermediate Checkpoint - Epoch 7

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We release the 84 checkpoints (including the randomly initialized weights before the training) to make it possible to study the training dynamics of such models, and other possible use-cases.

These models were trained as part of a work that studies how simple statistics from data, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).

This is RoBERTa-base epoch_7.

## Model Description

This model was captured during a reproduction of [RoBERTa-base](https://huggingface.co/roberta-base), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) objective.

The intended uses, limitations, training data and training procedure for the fully trained model are similar to [RoBERTa-base](https://huggingface.co/roberta-base). Two major differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.

### How to use

Using code from [RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on PyTorch:

```python
from transformers import pipeline

model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_7', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```

## Citation info

```bibtex
@article{2207.14251,
  Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
  Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
  Year = {2022},
  Eprint = {arXiv:2207.14251},
}
```
9a060b5a432ea6983f5511b606a232d5
laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft
laion
null
10
1
open_clip
0
zero-shot-image-classification
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
['zero-shot-image-classification', 'clip']
false
true
true
11,371
false
# Model card for CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) 5. [Acknowledgements](#acknowledgements) 6. [Citation](#citation) # Model Details ## Model Description A series of CLIP [ConvNeXt-Large](https://arxiv.org/abs/2201.03545) (w/ extra text depth, vision MLP head) models trained on the LAION-2B (english) subset of [LAION-5B](https://arxiv.org/abs/2210.08402) using [OpenCLIP](https://github.com/mlfoundations/open_clip). The models utilize: * the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Large model (`convnext_large`) as the image tower * a MLP (`fc - gelu - drop - fc`) head in vision tower instead of the single projection of other CLIP models * a text tower with same width but 4 layers more depth than ViT-L / RN50x16 models (depth 16, embed dim 768). This 320x320 resolution model is a fine-tune of [CLIP-convnext_large_d.laion2B-s26B-b102K-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) at a higher resolution. It was fine-tune from the final checkpoint of the original 256x256 training run w/ an additional ~2.5B samples and a lower learning rate. At 320x320, the ConvNext-Large-D is significantly more efficient than the L/14 model at 336x336 that OpenAI fine-tuned. L/14-336 model is 2.5x more GMAC, 2.8x more activations, and 1.22x more parameters. | Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) | | ----- | ------- | ---------- | ------------ | --------- | | [convnext_large_d.laion2b_s26b_b102k-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1), D(0.1) | 75.9 | | [convnext_large_d_320.laion2b_s29b_b131k-ft](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.6 | | [convnext_large_d_320.laion2b_s29b_b131k-ft-soup](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.9 | RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only, D = Dropout (prob) -- image tower head only LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and asthetic score filtering. Model training done by Ross Wightman on the [stability.ai](https://stability.ai/) cluster. # Uses As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such model. The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset. ## Direct Use Zero-shot image classification, image and text retrieval, among others. 
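A minimal zero-shot classification sketch with OpenCLIP, assuming a recent `open_clip_torch` release that supports loading directly from the Hugging Face Hub (the image path and candidate labels are illustrative), could look like this:

```python
import torch
import open_clip
from PIL import Image

model_id = "hf-hub:laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft"
model, _, preprocess = open_clip.create_model_and_transforms(model_id)
tokenizer = open_clip.get_tokenizer(model_id)

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder image
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", probs)
```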
## Downstream Use Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others. ## Out-of-Scope Use As per the OpenAI models, **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. Further the above notice, the LAION-5B dataset used in training of these models has additional considerations, see below. # Training Details ## Training Data This model was trained with one of (see table in intro): * LAION-2B - A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/). * LAION-Aesthetic - A 900M sample subset of LAION-2B with pHash dedupe and asthetic score filtering **IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publically available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress. ## Training Procedure All 320x320 model fine-tunes were trained with a global batch size of 131072 for 10-16 checkpoint intervals of 203.7M samples for a total of ~2-3B samples seen over fine-tune. 
For 320x320 models, a slurm script w/ srun below was used on 64 8-GPU (A100 40GB) nodes (Stability). ``` /opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \ --save-frequency 1 \ --name "convnext_large_320" \ --pretrained ""/runs/convnext_large_256/epoch_128.pt" \ --resume 'latest' \ --train-data="pipe:aws s3 cp s3://mybucket/path/{laion{00000..xxxxx}.tar -" \ --train-num-samples 203666042 \ --dataset-type webdataset \ --precision amp_bfloat16 \ --beta2 0.98 \ --warmup 2000 \ --batch-size=256 \ --epochs=12 \ --dataset-resampled \ --aug-cfg use_timm=True scale='(0.5, 1.0)' re_prob=0.4 \ --clip-grad-norm 5.0 \ --lr 5e-5 \ --workers=6 \ --model "convnext_large_d_320" \ --seed 0 \ --ddp-static-graph \ --local-loss \ --gather-with-grad \ --grad-checkpointing ``` # Evaluation Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark). ## Testing Data, Factors & Metrics ### Testing Data The testing is performed with VTAB+ (A combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval. ## Results The models achieve between 75.9 and 76.9 top-1 zero-shot accuracy on ImageNet-1k. Zero-shot curve of origina from-scratch 256x256 training: ![](convnext_large_zero_shot.png) An initial round of benchmarks have been performed on a wider range of datasets, to be viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb # Acknowledgements Acknowledging [stability.ai](https://stability.ai/) for compute used to train this model. # Citation **BibTeX:** LAION-5B ```bibtex @inproceedings{schuhmann2022laionb, title={{LAION}-5B: An open large-scale dataset for training next generation image-text models}, author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev}, booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2022}, url={https://openreview.net/forum?id=M3Y74vmsMcY} } ``` OpenCLIP software ```bibtex @software{ilharco_gabriel_2021_5143773, author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title = {OpenCLIP}, month = jul, year = 2021, note = {If you use this software, please cite it as below.}, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5143773}, url = {https://doi.org/10.5281/zenodo.5143773} } ``` OpenAI CLIP paper ```bibtex @inproceedings{Radford2021LearningTV, title={Learning Transferable Visual Models From Natural Language Supervision}, author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. 
Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, booktitle={ICML}, year={2021} } ``` ```bibtex @Article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ```
07f32bfa621fd4ae235aded70e82e7c0
sijunhe/nezha-large-wwm
sijunhe
nezha
7
117
transformers
0
fill-mask
true
false
false
afl-3.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
761
false
**Please use 'Bert' related tokenizer classes and 'Nezha' related model classes**

[NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204)
Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.

The original checkpoints can be found [here](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-PyTorch)

## Example Usage

```python
from transformers import BertTokenizer, NezhaModel

tokenizer = BertTokenizer.from_pretrained("sijunhe/nezha-large-wwm")
model = NezhaModel.from_pretrained("sijunhe/nezha-large-wwm")

text = "我爱北京天安门"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
b9494e50516eea535ec39bca433812ed
GreenCarpetCleaningGarland/GreenCarpetCleaningGarland
GreenCarpetCleaningGarland
null
2
0
null
0
null
false
false
false
other
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
784
false
Green Carpet Cleaning Garland
http://garlandcarpetcleaner.com/
(972) 256-8544

One of the methods we follow in carpet cleaning is our "Steam Cleaning" service, which relies on using minimal hot water and more steam. The steam penetrates deep into spots and stains to dissolve all of them, even the toughest ones, and removes pollutants from your carpet. Then our effective green products clear away all of these residues, leaving your carpet sparkling and bright. Finally, we use our excellent drying machines, so your carpet will be fully dry in no time. Our carpet steam cleaners are specialists who work with a high level of professionalism while protecting your carpet from any damage.
cb87a885dddca87995091bb4511296b3
efederici/it5-efficient-small-lfqa
efederici
t5
13
26
transformers
1
text2text-generation
true
false
false
apache-2.0
['it']
['custom']
null
0
0
0
0
0
0
0
[]
false
true
true
3,403
false
# it5-efficient-small-lfqa It is a T5 ([IT5](https://huggingface.co/stefan-it/it5-efficient-small-el32)) efficient small model trained on a lfqa dataset. <p align="center"> <img src="https://www.marcorossiartecontemporanea.net/wp-content/uploads/2021/04/MARCTM0413-9CFBn1gs-scaled.jpg" width="400"> </br> Mirco Marchelli, Voce in capitolo, 2019 </p> ## Training Data This model was trained on a lfqa dataset. The model provides long-form answers to open domain questions. ## Usage and Performance ```python import torch from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("efederici/it5-efficient-small-lfqa") model = AutoModelForSeq2SeqLM.from_pretrained("efederici/it5-efficient-small-lfqa") query = "con chi si era messo in contatto elon musk?" # concatenated texts/document text doc = """ La notizia dell’acquisizione da parte di Elon Musk del 9,2 per cento delle azioni di Twitter e del suo successivo ingresso nel consiglio di amministrazione della società hanno attirato grandi attenzioni, non solo da parte degli analisti finanziari, ma anche di chi si occupa di social media e del modo in cui viene impiegata la piattaforma da centinaia di milioni di persone in tutto il mondo. Musk, che ha un grande seguito su Twitter, in passato aveva più volte criticato il social network, accusandolo di non tutelare a sufficienza le libertà di espressione, anche in casi limite come l’assalto al Congresso degli Stati Uniti del 2021. Alcune settimane fa, Musk si era messo in contatto con Parag Agrawal, CEO di Twitter da fine novembre 2021, e con il suo predecessore e cofondatore della società, Jack Dorsey, annunciando di avere avviato l’acquisizione di alcune quote dell’azienda e di essere disponibile per discutere di soluzioni per migliorarla. Secondo fonti del New York Times, dopo i primi contatti, Agrawal aveva proposto a Musk di avere un ruolo più attivo oltre a quello di azionista, offrendogli la possibilità di entrare nel consiglio di amministrazione. """ query_and_docs = f"Domanda: {query} Contesto: {doc}" model_input = tokenizer(query_and_docs, truncation=True, padding=True, return_tensors="pt") output = model.generate(input_ids=model_input["input_ids"], attention_mask=model_input["attention_mask"], min_length=10, max_length=256, do_sample=False, early_stopping=True, num_beams=8, temperature=1.0, top_k=None, top_p=None, no_repeat_ngram_size=3, num_return_sequences=1) tokenizer.batch_decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True) ``` The model will predict: 'Elon Musk si era messo in contatto con Parag Agrawal, CEO di Twitter da fine novembre 2021 e con il suo predecessore e cofondatore della società, Jack Dorsey, annunciando di avere avviato l’acquisizione di alcune quote dell’azienda e di essere disponibile per discutere soluzioni per migliorarla.'
3a55951ce65c53577a19512cabb8b090
bousejin/xlm-roberta-base-finetuned-panx-it
bousejin
xlm-roberta
9
5
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,319
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2562 - F1: 0.8223 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8175 | 1.0 | 70 | 0.3331 | 0.7147 | | 0.2807 | 2.0 | 140 | 0.2745 | 0.8045 | | 0.1836 | 3.0 | 210 | 0.2562 | 0.8223 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
67ce6287fbe6ccf346a1ab57466935aa
NeuroSenko/cab_senko_by_rimukoro_hyper
NeuroSenko
null
36
0
null
0
null
false
false
false
mit
null
['NeuroSenko/senko_by_rimukoro']
null
0
0
0
0
0
0
0
['Stable Diffusion', 'Senko', 'Hypernetwork']
false
true
true
951
false
## Description

This hypernetwork helps make your Senko-san look like she was drawn by Rimukoro. It was trained using the [any222trinart](https://huggingface.co/MindB1ast/any222trinart/blob/main/any222trinart.ckpt) model (also known as Cabbage Mix), so it should work fine with that specific model or with other related models.

## Usage

To use this hypernetwork, place the .pt file in your `models\hypernetworks` directory. Then, depending on your UI, either select this hypernetwork in the settings or use it directly in your positive prompt, e.g. `<hypernet:cab_senko_by_rimukoro_4000:1.0>`. Make sure that `cab_senko_by_rimukoro_4000` matches your actual filename; the numbers at the end can differ depending on the file you download.

## Dataset

Feel free to use the dataset I used for training this model. You can find it [here](https://huggingface.co/datasets/NeuroSenko/senko_by_rimukoro).

## TODO

1. Add example images
4a756ef295fb942e7be0ba9cb3d224b1
keras-io/deep-dream
keras-io
null
6
26
keras
0
null
false
false
false
['cc0-1.0']
null
null
null
0
0
0
0
0
0
0
['gan', 'generative adversarial networks', 'deep dream']
false
true
true
1,311
false
## Keras Implementation of Deep Dream 🦚🌌

This repo contains the model and the notebook [for this Deep Dream implementation of Keras](https://keras.io/examples/generative/deep_dream/).

Full credits to: [François Chollet](https://twitter.com/fchollet)

![deepdream](https://keras.io/img/examples/generative/deep_dream/deep_dream_17_0.png)

## Background Information

"Deep dream" is an image-filtering technique which consists of taking an image classification model, and running gradient ascent over an input image to try to maximize the activations of specific layers (and sometimes, specific units in specific layers) for this input. It produces hallucination-like visuals. It was first introduced by Alexander Mordvintsev from Google in July 2015.

Process:

- Load the original image.
- Define a number of processing scales ("octaves"), from smallest to largest.
- Resize the original image to the smallest scale.
- For every scale, starting with the smallest (i.e. current one):
  - Run gradient ascent
  - Upscale image to the next scale
  - Re-inject the detail that was lost at upscaling time
- Stop when we are back to the original size.

To obtain the detail lost during upscaling, we simply take the original image, shrink it down, upscale it, and compare the result to the (resized) original image.
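To make the gradient-ascent step described above concrete, here is a minimal TensorFlow/Keras sketch; the InceptionV3 backbone matches the keras.io example, but the layer selection and loss weighting are simplified assumptions rather than the exact settings of the notebook:

```python
import tensorflow as tf
from tensorflow import keras

# Feature extractor over a few intermediate InceptionV3 layers (illustrative choice).
base_model = keras.applications.InceptionV3(weights="imagenet", include_top=False)
layer_names = ["mixed4", "mixed5"]
feature_extractor = keras.Model(
    inputs=base_model.input,
    outputs=[base_model.get_layer(name).output for name in layer_names],
)

@tf.function
def gradient_ascent_step(img, learning_rate):
    with tf.GradientTape() as tape:
        tape.watch(img)
        activations = feature_extractor(img)
        # Loss to maximize: mean activation of each selected layer, summed.
        loss = tf.add_n([tf.reduce_mean(act) for act in activations])
    grads = tape.gradient(loss, img)
    grads /= tf.math.reduce_std(grads) + 1e-8  # normalize gradient scale
    return loss, img + learning_rate * grads   # ascend, not descend
```

At each octave, the image would be resized, this step run for a fixed number of iterations, and the lost detail re-injected before moving to the next scale.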
6b6b4f34ae673131679bb9b651f704ab
sd-concepts-library/ricar
sd-concepts-library
null
10
0
null
1
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,082
false
### ricar on Stable Diffusion This is the `<ricard>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<ricard> 0](https://huggingface.co/sd-concepts-library/ricar/resolve/main/concept_images/2.jpeg) ![<ricard> 1](https://huggingface.co/sd-concepts-library/ricar/resolve/main/concept_images/3.jpeg) ![<ricard> 2](https://huggingface.co/sd-concepts-library/ricar/resolve/main/concept_images/1.jpeg) ![<ricard> 3](https://huggingface.co/sd-concepts-library/ricar/resolve/main/concept_images/4.jpeg) ![<ricard> 4](https://huggingface.co/sd-concepts-library/ricar/resolve/main/concept_images/0.jpeg)
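The concept ships as a learned text embedding, so it has to be attached to a base Stable Diffusion checkpoint at load time. A minimal `diffusers` sketch, assuming a recent release that provides `load_textual_inversion` and an SD 1.x base model (the base checkpoint and prompt are illustrative), might be:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base SD 1.x checkpoint, then attach the learned <ricard> embedding.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/ricar")

image = pipe("a mountain landscape in the style of <ricard>").images[0]
image.save("ricard_style.png")
```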
90f22426f383dc9d761c6dc5d631057d
aXhyra/presentation_emotion_31415
aXhyra
distilbert
10
5
transformers
0
text-classification
true
false
false
apache-2.0
null
['tweet_eval']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,399
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # presentation_emotion_31415 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.1243 - F1: 0.7149 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.18796906442746e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 31415 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.73 | 1.0 | 408 | 0.8206 | 0.6491 | | 0.3868 | 2.0 | 816 | 0.7733 | 0.7230 | | 0.0639 | 3.0 | 1224 | 0.9962 | 0.7101 | | 0.0507 | 4.0 | 1632 | 1.1243 | 0.7149 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
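A minimal inference sketch, assuming the checkpoint loads with the standard text-classification pipeline; the example tweet is illustrative, and labels may appear as generic `LABEL_n` if the config does not name the tweet_eval emotion classes.

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier
classifier = pipeline("text-classification", model="aXhyra/presentation_emotion_31415")

print(classifier("I can't believe we finally won the game!"))
# -> [{'label': ..., 'score': ...}]
```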
520a64e1dfe6d3465fdaa3f6e9c162bf
Helsinki-NLP/opus-mt-lus-fr
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
false
### opus-mt-lus-fr * source languages: lus * target languages: fr * OPUS readme: [lus-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lus-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lus-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lus-fr/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lus.fr | 25.5 | 0.423 |
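A minimal usage sketch with the `transformers` translation pipeline (requires `sentencepiece`); the input is a placeholder Lushai sentence to replace with your own text.

```python
from transformers import pipeline

# Marian-based OPUS-MT model for Lushai (lus) -> French (fr)
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-lus-fr")

print(translator("Ka lawm e."))  # placeholder sentence; substitute your own Lushai input
```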
44bedbc2cc5992a840e254c888dac56a
ScottaStrong/DialogGPT-small-Scott
ScottaStrong
gpt2
9
7
transformers
0
conversational
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['conversational']
false
true
true
1,732
false
# DialoGPT Trained on the Speech of a Game Character

This is an instance of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).

I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot)

Chat with the model:

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("scottastrong/DialogGPT-small-Scott")
model = AutoModelWithLMHead.from_pretrained("scottastrong/DialogGPT-small-Scott")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
28c96f14b7e7d527a9c86e6f17bc52c3
infinitejoy/wav2vec2-large-xls-r-300m-galician
infinitejoy
wav2vec2
18
8
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['gl']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'generated_from_trainer', 'gl', 'hf-asr-leaderboard', 'model_for_talk', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event']
true
true
true
1,524
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-galician This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - GL dataset. It achieves the following results on the evaluation set: - Loss: 0.1525 - Wer: 0.1542 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0067 | 4.35 | 500 | 2.9632 | 1.0 | | 1.4939 | 8.7 | 1000 | 0.5005 | 0.4157 | | 0.9982 | 13.04 | 1500 | 0.1967 | 0.1857 | | 0.8726 | 17.39 | 2000 | 0.1587 | 0.1564 | ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
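A minimal transcription sketch using the `transformers` ASR pipeline; the audio path is illustrative, the file should be sampled at 16 kHz, and decoding a file path requires ffmpeg (alternatively, pass a raw waveform array).

```python
from transformers import pipeline

# CTC wav2vec2 model fine-tuned for Galician speech recognition
asr = pipeline(
    "automatic-speech-recognition",
    model="infinitejoy/wav2vec2-large-xls-r-300m-galician",
)

print(asr("sample_gl.wav")["text"])  # "sample_gl.wav" is a placeholder path
```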
a767722be8c42d41822b5216a1a338b8
avojarot/duolingo-owl
avojarot
null
22
15
diffusers
2
text-to-image
true
false
false
creativeml-openrail-m
null
null
null
1
0
1
0
0
0
0
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
true
true
905
false
# DreamBooth model for the duolingo concept trained by avojarot on the avojarot/duolingo_owl dataset. This is a Stable Diffusion model fine-tuned on the duolingo concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of duolingo owl** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on duolingo `owl` images for the wildcard theme. ## Images generated by model Cute ![Image alt text](cute.PNG) Realistic ![Image alt text](4.PNG) Pizza ![Image alt text](3.png) Some others ![Image alt text](2.png) ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('avojarot/duolingo-owl') image = pipeline().images[0] image ```
81d7a57fd7b14b7389b2d37a957b797c
Buseak/model_from_berturk_upos_22Jan
Buseak
bert
12
16
transformers
0
token-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,659
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_from_berturk_upos_22Jan This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0130 - Precision: 0.9961 - Recall: 0.9953 - F1: 0.9957 - Accuracy: 0.9967 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 170 | 0.2285 | 0.9100 | 0.9079 | 0.9089 | 0.9352 | | No log | 2.0 | 340 | 0.1694 | 0.9303 | 0.9316 | 0.9310 | 0.9510 | | 0.3728 | 3.0 | 510 | 0.1396 | 0.9420 | 0.9420 | 0.9420 | 0.9589 | | 0.3728 | 4.0 | 680 | 0.1114 | 0.9561 | 0.9547 | 0.9554 | 0.9681 | | 0.3728 | 5.0 | 850 | 0.0880 | 0.9657 | 0.9653 | 0.9655 | 0.9754 | | 0.1318 | 6.0 | 1020 | 0.0670 | 0.9752 | 0.9730 | 0.9741 | 0.9814 | | 0.1318 | 7.0 | 1190 | 0.0535 | 0.9801 | 0.9790 | 0.9795 | 0.9850 | | 0.1318 | 8.0 | 1360 | 0.0407 | 0.9867 | 0.9842 | 0.9854 | 0.9894 | | 0.0764 | 9.0 | 1530 | 0.0312 | 0.9902 | 0.9881 | 0.9892 | 0.9920 | | 0.0764 | 10.0 | 1700 | 0.0254 | 0.9919 | 0.9903 | 0.9911 | 0.9935 | | 0.0764 | 11.0 | 1870 | 0.0203 | 0.9939 | 0.9925 | 0.9932 | 0.9948 | | 0.0466 | 12.0 | 2040 | 0.0169 | 0.9949 | 0.9939 | 0.9944 | 0.9958 | | 0.0466 | 13.0 | 2210 | 0.0148 | 0.9955 | 0.9947 | 0.9951 | 0.9961 | | 0.0466 | 14.0 | 2380 | 0.0135 | 0.9959 | 0.9949 | 0.9954 | 0.9966 | | 0.0329 | 15.0 | 2550 | 0.0130 | 0.9961 | 0.9953 | 0.9957 | 0.9967 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
7952873f3519884c046e49d83f902a36
anmol-chawla/animecharacters
anmol-chawla
null
15
47
diffusers
1
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
1
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
622
false
### animecharacters Dreambooth model trained by anmol-chawla with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:
582b944fde3a7cffaec4832d2b118943
iis2009002/xlm-roberta-base-finetuned-panx-fr
iis2009002
xlm-roberta
10
7
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,320
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2867 - F1: 0.8355 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5817 | 1.0 | 191 | 0.3395 | 0.7854 | | 0.2617 | 2.0 | 382 | 0.2856 | 0.8278 | | 0.1708 | 3.0 | 573 | 0.2867 | 0.8355 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
4c9052c6eb8841af674690ca685ec4b5
SetFit/distilbert-base-uncased__sst2__train-16-1
SetFit
distilbert
10
10
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,323
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__sst2__train-16-1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6012 - Accuracy: 0.6766 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6983 | 1.0 | 7 | 0.7036 | 0.2857 | | 0.6836 | 2.0 | 14 | 0.7181 | 0.2857 | | 0.645 | 3.0 | 21 | 0.7381 | 0.2857 | | 0.5902 | 4.0 | 28 | 0.7746 | 0.2857 | | 0.5799 | 5.0 | 35 | 0.7242 | 0.5714 | | 0.3584 | 6.0 | 42 | 0.6935 | 0.5714 | | 0.2596 | 7.0 | 49 | 0.7041 | 0.5714 | | 0.1815 | 8.0 | 56 | 0.5930 | 0.7143 | | 0.0827 | 9.0 | 63 | 0.6976 | 0.7143 | | 0.0613 | 10.0 | 70 | 0.7346 | 0.7143 | | 0.0356 | 11.0 | 77 | 0.6992 | 0.5714 | | 0.0158 | 12.0 | 84 | 0.7328 | 0.5714 | | 0.013 | 13.0 | 91 | 0.7819 | 0.5714 | | 0.0103 | 14.0 | 98 | 0.8589 | 0.5714 | | 0.0087 | 15.0 | 105 | 0.9177 | 0.5714 | | 0.0076 | 16.0 | 112 | 0.9519 | 0.5714 | | 0.0078 | 17.0 | 119 | 0.9556 | 0.5714 | | 0.006 | 18.0 | 126 | 0.9542 | 0.5714 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
9edb212bbbb8c1f0e959b30ff6d64a13
sebastian-hofstaetter/uni-colberter-128-1-msmarco
sebastian-hofstaetter
ColBERT
8
6
transformers
1
null
true
false
false
apache-2.0
['en']
['ms_marco']
null
0
0
0
0
0
0
0
['bag-of-words', 'dense-passage-retrieval', 'knowledge-distillation']
false
true
true
1,069
false
# Uni-ColBERTer (Dim: 1) for Passage Retrieval If you want to know more about our (Uni-)ColBERTer architecture check out our paper: https://arxiv.org/abs/2203.13088 🎉 For more information, source code, and a minimal usage example please visit: https://github.com/sebastian-hofstaetter/colberter ## Limitations & Bias - The model is only trained on english text. - The model inherits social biases from both DistilBERT and MSMARCO. - The model is only trained on relatively short passages of MSMARCO (avg. 60 words length), so it might struggle with longer text. ## Citation If you use our model checkpoint please cite our work as: ``` @article{Hofstaetter2022_colberter, author = {Sebastian Hofst{\"a}tter and Omar Khattab and Sophia Althammer and Mete Sertkan and Allan Hanbury}, title = {Introducing Neural Bag of Whole-Words with ColBERTer: Contextualized Late Interactions using Enhanced Reduction}, publisher = {arXiv}, url = {https://arxiv.org/abs/2203.13088}, doi = {10.48550/ARXIV.2203.13088}, year = {2022}, } ```
bf9068fb29d102de741cb01e9acabd64
himanshusrtekbox/distilbert-base-uncased-finetuned-cola
himanshusrtekbox
distilbert
10
1
transformers
0
text-classification
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,608
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # himanshusrtekbox/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1911 - Validation Loss: 0.5605 - Train Matthews Correlation: 0.5106 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2670, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5185 | 0.4556 | 0.4728 | 0 | | 0.3247 | 0.4570 | 0.5093 | 1 | | 0.1911 | 0.5605 | 0.5106 | 2 | ### Framework versions - Transformers 4.19.2 - TensorFlow 2.8.0 - Datasets 2.2.1 - Tokenizers 0.12.1
13d93b4c8ed033414939d438643405c9
facebook/levit-128
facebook
levit
5
35
transformers
0
image-classification
true
false
false
apache-2.0
null
['imagenet-1k']
null
0
0
0
0
0
0
0
['vision', 'image-classification']
false
true
true
1,290
false
# LeViT LeViT-128 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference ](https://arxiv.org/abs/2104.01136) by Graham et al. and first released in [this repository](https://github.com/facebookresearch/LeViT). Disclaimer: The team releasing LeViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Usage Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import LevitFeatureExtractor, LevitForImageClassificationWithTeacher from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-128') model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-128') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ```
9efec88f2572c0b2316ca6ced7153110
bigmorning/whisper_wermet_0010
bigmorning
whisper
7
3
transformers
0
automatic-speech-recognition
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
2,543
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_wermet_0010 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5820 - Train Accuracy: 0.0305 - Train Wermet: 1.5323 - Validation Loss: 0.6980 - Validation Accuracy: 0.0305 - Validation Wermet: 1.1238 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch | |:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:| | 5.0795 | 0.0116 | 43.8776 | 4.4395 | 0.0122 | 35.4119 | 0 | | 4.3059 | 0.0131 | 29.7976 | 4.0311 | 0.0143 | 26.0070 | 1 | | 3.8871 | 0.0148 | 19.3999 | 3.6500 | 0.0158 | 19.2186 | 2 | | 3.0943 | 0.0184 | 18.3704 | 2.3327 | 0.0226 | 22.5034 | 3 | | 1.8954 | 0.0240 | 16.2471 | 1.4889 | 0.0266 | 14.2782 | 4 | | 1.2781 | 0.0269 | 8.4169 | 1.1273 | 0.0283 | 7.4581 | 5 | | 0.9797 | 0.0283 | 4.8739 | 0.9481 | 0.0292 | 3.9451 | 6 | | 0.8006 | 0.0293 | 2.7433 | 0.8371 | 0.0297 | 2.3065 | 7 | | 0.6764 | 0.0299 | 2.1646 | 0.7554 | 0.0301 | 1.3005 | 8 | | 0.5820 | 0.0305 | 1.5323 | 0.6980 | 0.0305 | 1.1238 | 9 | ### Framework versions - Transformers 4.25.0.dev0 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.2
ccb703c0fb7cb6b65c83ae7da5a472fe
nimrah/wav2vec2-large-xls-r-300m-turkish-colab-9
nimrah
wav2vec2
12
6
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,104
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab-9 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.03 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
0f94a36a759f05c8acd7ada19c56ec5a
jonatasgrosman/exp_w2v2r_fr_vp-100k_age_teens-0_sixties-10_s131
jonatasgrosman
wav2vec2
10
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['fr']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'fr']
false
true
true
498
false
# exp_w2v2r_fr_vp-100k_age_teens-0_sixties-10_s131 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
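Since the card points to HuggingSound, a rough sketch of that route, assuming the package's documented `SpeechRecognitionModel` API; the audio paths are placeholders and should be 16 kHz recordings.

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel(
    "jonatasgrosman/exp_w2v2r_fr_vp-100k_age_teens-0_sixties-10_s131"
)

audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```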
5bc4dd39485339ff7da35c74c0883073
thusken/nb-bert-base-ctr-regression
thusken
bert
10
5
transformers
0
text-classification
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,825
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nb-bert-base-ctr-regression This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0073 - Mse: 0.0073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Mse | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.0106 | 1.0 | 1103 | 0.0069 | 0.0069 | | 0.0073 | 2.0 | 2206 | 0.0072 | 0.0072 | | 0.0058 | 3.0 | 3309 | 0.0063 | 0.0063 | | 0.0038 | 4.0 | 4412 | 0.0073 | 0.0073 | | 0.0025 | 5.0 | 5515 | 0.0064 | 0.0064 | | 0.0019 | 6.0 | 6618 | 0.0065 | 0.0065 | | 0.0014 | 7.0 | 7721 | 0.0066 | 0.0066 | | 0.0011 | 8.0 | 8824 | 0.0067 | 0.0067 | | 0.0008 | 9.0 | 9927 | 0.0066 | 0.0066 | | 0.0007 | 10.0 | 11030 | 0.0066 | 0.0066 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu113 - Datasets 1.18.4 - Tokenizers 0.12.1
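A rough inference sketch, assuming the checkpoint was saved with a standard single-output regression head; the Norwegian headline is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "thusken/nb-bert-base-ctr-regression"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Slik sparer du tusenvis av kroner på strøm", return_tensors="pt")
with torch.no_grad():
    predicted_ctr = model(**inputs).logits.squeeze().item()  # single regression output
print(predicted_ctr)
```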
a2512e75b45b0dce48f61f7c33435086
inovex/multi2convai-logistics-de-logreg-ft
inovex
null
4
0
null
0
text-classification
false
false
false
mit
['de']
null
null
0
0
0
0
0
0
0
['text-classification']
false
true
true
3,130
false
# Multi2ConvAI-Logistics: German logistic regression model using fasttext embeddings This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Logistics (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: German (de) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - [multi2convai](https://github.com/inovex/multi2convai) - serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md)) ### Run with one line of code After installing `multi2convai` and locally available fastText embeddings you can run: ````bash # assumes working dir is the root of the cloned multi2convai repo python scripts/run_inference.py -m multi2convai-logistics-de-logreg-ft >>> Create pipeline for config: multi2convai-logistics-de-logreg-ft. >>> Created a LogisticRegressionFasttextPipeline for domain: 'logistics' and language 'de'. >>> >>> Enter your text (type 'stop' to end execution): Muss ich eine Maske tragen? >>> 'Wo kann ich das Paket ablegen?' was classified as 'details.safeplace' (confidence: 0.8943) ```` ### How to run model using multi2convai After installing `multi2convai` and locally available fastText embeddings you can run: ````python # assumes working dir is the root of the cloned multi2convai repo from pathlib import Path from multi2convai.pipelines.inference.base import ClassificationConfig from multi2convai.pipelines.inference.logistic_regression_fasttext import ( LogisticRegressionFasttextConfig, LogisticRegressionFasttextPipeline, ) language = "de" domain = "logistics" # 1. Define paths of model, label dict and embeddings model_file = "model.pth" label_dict_file = "label_dict.json" embedding_path = Path( f"../models/embeddings/fasttext/de/wiki.200k.de.embed" ) vocabulary_path = Path( f"../models/embeddings/fasttext/de/wiki.200k.de.vocab" ) # 2. Create and setup pipeline model_config = LogisticRegressionFasttextConfig( model_file, embedding_path, vocabulary_path ) config = ClassificationConfig(language, domain, label_dict_file, model_config) pipeline = LogisticRegressionFasttextPipeline(config) pipeline.setup() # 3. Run intent classification on a text of your choice label = pipeline.run("Wo kann ich das Paket ablegen?") label >>> Label(string='details.safeplace', ratio='0.8943') ```` ### Download and serialize fastText ````bash # assumes working dir is the root of the cloned multi2convai repo mkdir models/fasttext/de curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.de.vec --output models/fasttext/de/wiki.de.vec python scripts/serialize_fasttext.py -r fasttext/wiki.de.vec -v fasttext/de/wiki.200k.de.vocab -e fasttext/de/wiki.200k.de.embed -n 200000 ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: info@multi2conv.ai
c46907ad67e4a661f448358f2861d1c4
jonatasgrosman/exp_w2v2t_fr_vp-it_s878
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['fr']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'fr']
false
true
true
469
false
# exp_w2v2t_fr_vp-it_s878 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
c196e2bb8ec1ab00d7ebadec1cffc2ea
EdoAbati/distilbert-base-uncased-finetuned-news
EdoAbati
distilbert
10
4
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
924
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-news This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.22.0 - Pytorch 1.12.0 - Datasets 2.5.0 - Tokenizers 0.12.1
3eec2903091401e0d1f92018c03aa72b
mrizalf7/roberta-qa-finetuned-subjqa-movies_2-rizal
mrizalf7
roberta
11
9
transformers
0
question-answering
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
972
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-subjqa-movies_2-rizal This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
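A minimal usage sketch with the question-answering pipeline; the question and context are illustrative.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="mrizalf7/roberta-qa-finetuned-subjqa-movies_2-rizal")

result = qa(
    question="How was the acting?",
    context="The film dragged in places, but the acting was superb and carried the story.",
)
print(result["answer"], result["score"])
```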
93277ec622fa2ab15de9aaacbc374eb0
jonatasgrosman/exp_w2v2r_fr_vp-100k_accent_france-0_belgium-10_s947
jonatasgrosman
wav2vec2
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['fr']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'fr']
false
true
true
502
false
# exp_w2v2r_fr_vp-100k_accent_france-0_belgium-10_s947 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
33f37f3354986551805ac8a3fa2d9219
anas-awadalla/bert-base-uncased-compacter-squad
anas-awadalla
null
21
0
null
0
null
false
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,042
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-compacter-squad This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 512 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15.0 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.0.0 - Tokenizers 0.11.6
ebab8456dfffdf533983174fa54cbeee
Gladiator/distilbert-base-uncased_ner_wnut_17
Gladiator
distilbert
12
3
transformers
0
token-classification
true
false
false
apache-2.0
null
['wnut_17']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,729
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_ner_wnut_17 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset. It achieves the following results on the evaluation set: - Loss: 0.2400 - Precision: 0.6701 - Recall: 0.5467 - F1: 0.6021 - Accuracy: 0.9559 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 213 | 0.2367 | 0.6879 | 0.4270 | 0.5269 | 0.9455 | | No log | 2.0 | 426 | 0.2272 | 0.6913 | 0.4928 | 0.5754 | 0.9533 | | 0.173 | 3.0 | 639 | 0.2393 | 0.6788 | 0.5132 | 0.5845 | 0.9553 | | 0.173 | 4.0 | 852 | 0.2338 | 0.6541 | 0.5610 | 0.6040 | 0.9557 | | 0.0489 | 5.0 | 1065 | 0.2400 | 0.6701 | 0.5467 | 0.6021 | 0.9559 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
129a8cd8c10423bfa07ce5f67e0ebc01
sd-concepts-library/flatic
sd-concepts-library
null
10
0
null
2
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,094
false
### flatic on Stable Diffusion This is the `<flat-ct>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<flat-ct> 0](https://huggingface.co/sd-concepts-library/flatic/resolve/main/concept_images/1.jpeg) ![<flat-ct> 1](https://huggingface.co/sd-concepts-library/flatic/resolve/main/concept_images/2.jpeg) ![<flat-ct> 2](https://huggingface.co/sd-concepts-library/flatic/resolve/main/concept_images/0.jpeg) ![<flat-ct> 3](https://huggingface.co/sd-concepts-library/flatic/resolve/main/concept_images/4.jpeg) ![<flat-ct> 4](https://huggingface.co/sd-concepts-library/flatic/resolve/main/concept_images/3.jpeg)
4fa484229e67190ea5fdf3b697bc6591
indridinn/distilbert-base-uncased-finetuned-ner
indridinn
distilbert
13
5
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,555
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0610 - Precision: 0.9275 - Recall: 0.9370 - F1: 0.9322 - Accuracy: 0.9836 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2507 | 1.0 | 878 | 0.0714 | 0.9181 | 0.9243 | 0.9212 | 0.9813 | | 0.0516 | 2.0 | 1756 | 0.0617 | 0.9208 | 0.9325 | 0.9266 | 0.9828 | | 0.0306 | 3.0 | 2634 | 0.0610 | 0.9275 | 0.9370 | 0.9322 | 0.9836 | ### Framework versions - Transformers 4.11.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
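A minimal sketch using the token-classification pipeline with entity aggregation; the sentence is illustrative.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="indridinn/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

print(ner("Hugging Face is based in New York City."))
# -> list of dicts with entity_group, score, word, start, end
```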
183276fa2967b174a0284500703b9793
jonatasgrosman/exp_w2v2t_pl_vp-nl_s885
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['pl']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'pl']
false
true
true
469
false
# exp_w2v2t_pl_vp-nl_s885 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
6765ea52de4a2decc1a2dfb323e6c1f2
FrostAura/gpt-neo-1.3B-fiction-novel-generation
FrostAura
gpt_neo
11
81
transformers
3
text-generation
true
false
true
mit
['en']
null
null
4
0
4
0
0
0
0
['text-generation', 'novel-generation', 'fiction', 'gpt-neo', 'pytorch']
false
true
true
1,337
false
<p align="center"> <img src="https://github.com/faGH/fa.creative/blob/master/Icons/FrostAura/FA%20Logo/FrostAura.Logo.Complex.png?raw=true" width="75" title="hover text"> </p> # fa.intelligence.models.generative.novels.fiction ## Description This FrostAura Intelligence model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) for fictional text content generation. ## Getting Started ### PIP Installation ``` pip install -U --no-cache-dir transformers ``` ### Usage ``` from transformers import pipeline model_name: str = 'FrostAura/gpt-neo-1.3B-fiction-novel-generation' generator: pipeline = pipeline('text-generation', model=model_name) prompt: str = 'So far my day has been ' gen_text: str = generator(prompt, do_sample=True, min_length=50) print(f'Result: {gen_text}') ``` ## Further Fine-Tuning [in development](https://github.com/dredwardhyde/gpt-neo-fine-tuning-example/blob/main/gpt_neo.py) ## Support If you enjoy FrostAura open-source content and would like to support us in continuous delivery, please consider a donation via a platform of your choice. | Supported Platforms | Link | | ------------------- | ---- | | PayPal | [Donate via Paypal](https://www.paypal.com/donate/?hosted_button_id=SVEXJC9HFBJ72) | For any queries, contact dean.martin@frostaura.net.
2d21e47d8eb5ec2a1cb72d9b8343cd7e
Helsinki-NLP/opus-mt-en-lu
Helsinki-NLP
marian
10
16
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
false
### opus-mt-en-lu * source languages: en * target languages: lu * OPUS readme: [en-lu](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-lu/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-lu/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.lu | 34.1 | 0.564 |
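A minimal sketch using the Marian classes directly (requires `sentencepiece`); the English input sentence is illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-lu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```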
c1756193bc4cce6230c41a682b1340cf
d0r1h/led-base-ilc
d0r1h
led
15
7
transformers
0
summarization
true
false
false
apache-2.0
null
['ilc']
null
0
0
0
0
0
0
0
['summarization']
false
true
true
2,033
false
# Longformer Encoder-Decoder (LED) fine-tuned on ILC

This model is a fine-tuned version of [led-base-16384](https://huggingface.co/allenai/led-base-16384) on the [ILC](https://huggingface.co/datasets/d0r1h/ILC) dataset.

As described in [Longformer: The Long-Document Transformer](https://arxiv.org/pdf/2004.05150.pdf) by Iz Beltagy, Matthew E. Peters, Arman Cohan, *led-base-16384* was initialized from [*bart-base*](https://huggingface.co/facebook/bart-base) since both models share the exact same architecture. To be able to process 16K tokens, *bart-base*'s position embedding matrix was simply copied 16 times.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

checkpoint = "d0r1h/led-base-ilc"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, return_dict_in_generate=True).to(device)

case = "......."
input_ids = tokenizer(case, return_tensors="pt").input_ids.to(device)

# LED expects global attention on at least the first token
global_attention_mask = torch.zeros_like(input_ids)
global_attention_mask[:, 0] = 1

sequences = model.generate(input_ids, global_attention_mask=global_attention_mask).sequences
summary = tokenizer.batch_decode(sequences, skip_special_tokens=True)
```

## Evaluation results

When the model is used for summarizing ILC documents (10 samples), it achieves the following results:

| Model | rouge1-f | rouge1-p | rouge2-f | rouge2-p | rougeL-f | rougeL-p |
|:-----------:|:-----:|:-----:|:------:|:-----:|:------:|:-----:|
| led-ilc | **42** | **47** | **22** | **24** | **39** | **44** |
| led-base | 3 | 39 | 1 | 21 | 3 | 37 |

[This notebook](https://colab.research.google.com/github/d0r1h/Notebooks/blob/main/NLP/Summarization/led_base_ilc_summarization.ipynb) shows how *led* can effectively be used for downstream tasks such as summarization.
20752a594b9a3094846745ab0dbfd25e
gokuls/bert-base-uncased-mnli
gokuls
bert
17
72
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,748
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-mnli This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.4218 - Accuracy: 0.8488 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5194 | 1.0 | 3068 | 0.4468 | 0.8307 | | 0.3445 | 2.0 | 6136 | 0.4384 | 0.8428 | | 0.2341 | 3.0 | 9204 | 0.4946 | 0.8415 | | 0.1625 | 4.0 | 12272 | 0.5479 | 0.8388 | | 0.1218 | 5.0 | 15340 | 0.6348 | 0.8358 | | 0.0968 | 6.0 | 18408 | 0.6620 | 0.8315 | | 0.0799 | 7.0 | 21476 | 0.7072 | 0.8287 | | 0.0675 | 8.0 | 24544 | 0.7659 | 0.8307 | | 0.0592 | 9.0 | 27612 | 0.7978 | 0.8305 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
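A rough sentence-pair inference sketch; the premise/hypothesis pair is illustrative, and the class names are read from the model's own config (they may show up as generic `LABEL_n` if the config does not name them).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "gokuls/bert-base-uncased-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Map probabilities back through the model's own id2label instead of guessing the order
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```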
075b690afe078dc5a3f2a34bd2a2886c
joaogante/test_img
joaogante
vit
6
1
transformers
1
feature-extraction
true
false
true
apache-2.0
null
['imagenet-21k']
null
1
1
0
0
0
0
0
['vision']
false
true
true
5,347
false
# Vision Transformer (base-sized model) Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification). By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model in PyTorch: ```python from transformers import ViTFeatureExtractor, ViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k') model = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` Here is how to use this model in JAX/Flax: ```python from transformers import ViTFeatureExtractor, FlaxViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k') model = FlaxViTModel.from_pretrained('google/vit-base-patch16-224-in21k') inputs = feature_extractor(images=image, return_tensors="np") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ## Training data The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes. 
## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224. ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
a448c47e13e8bcd20fb601afca20bb16
microsoft/deberta-v3-xsmall
microsoft
deberta-v2
9
37,499
transformers
12
fill-mask
true
true
false
mit
['en']
null
null
1
0
1
0
1
1
0
['deberta', 'deberta-v3', 'fill-mask']
false
true
true
3,580
false
## DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing

[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB training data.

In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we further improved the efficiency of DeBERTa using ELECTRA-Style pre-training with Gradient-Disentangled Embedding Sharing. Compared to DeBERTa, our V3 version significantly improves the model performance on downstream tasks. You can find more technical details about the new model in our [paper](https://arxiv.org/abs/2111.09543).

Please check the [official repository](https://github.com/microsoft/DeBERTa) for more implementation details and updates.

The DeBERTa V3 xsmall model comes with 12 layers and a hidden size of 384. It has only **22M** backbone parameters with a vocabulary containing 128K tokens, which introduces 48M parameters in the Embedding layer. This model was trained using the same 160GB data as DeBERTa V2.

#### Fine-tuning on NLU tasks

We present the dev results on SQuAD 2.0 and MNLI tasks.

| Model |Vocabulary(K)|Backbone #Params(M)| SQuAD 2.0(F1/EM) | MNLI-m/mm(ACC)|
|-------------------|----------|-------------------|-----------|----------|
| RoBERTa-base |50 |86 | 83.7/80.5 | 87.6/- |
| XLNet-base |32 |92 | -/80.2 | 86.8/- |
| ELECTRA-base |30 |86 | -/80.5 | 88.8/ |
| DeBERTa-base |50 |100 | 86.2/83.1| 88.8/88.5|
| DeBERTa-v3-large|128|304 | 91.5/89.0 | 91.8/91.9|
| DeBERTa-v3-base |128|86 | 88.4/85.4 | 90.6/90.7|
| DeBERTa-v3-small |128|44 | 82.8/80.4 | 88.3/87.7|
| **DeBERTa-v3-xsmall** |128|**22** | **84.8/82.0** | **88.1/88.3**|
| DeBERTa-v3-xsmall+SiFT|128|22 | -/- | 88.4/88.5|

[#| ELECTRA-small |30 |9.6 | - | - |]::

#### Fine-tuning with HF transformers

```bash
#!/bin/bash
cd transformers/examples/pytorch/text-classification/
pip install datasets
export TASK_NAME=mnli

output_dir="ds_results"
num_gpus=8
batch_size=8

python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
  run_glue.py \
  --model_name_or_path microsoft/deberta-v3-xsmall \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --evaluation_strategy steps \
  --max_seq_length 256 \
  --warmup_steps 1000 \
  --per_device_train_batch_size ${batch_size} \
  --learning_rate 4.5e-5 \
  --num_train_epochs 3 \
  --output_dir $output_dir \
  --overwrite_output_dir \
  --logging_steps 1000 \
  --logging_dir $output_dir
```

### Citation

If you find DeBERTa useful for your work, please cite the following papers:

``` latex
@misc{he2021debertav3,
      title={DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing},
      author={Pengcheng He and Jianfeng Gao and Weizhu Chen},
      year={2021},
      eprint={2111.09543},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
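For plain feature extraction rather than GLUE fine-tuning, a minimal sketch (requires `sentencepiece`); the input sentence is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-xsmall")
model = AutoModel.from_pretrained("microsoft/deberta-v3-xsmall")

inputs = tokenizer("DeBERTaV3 uses gradient-disentangled embedding sharing.", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape: [batch, seq_len, 384]
print(hidden_states.shape)
```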
3b6ca50d45d1c739e3e304548b4eb8a3
ritwikm/multilingual-bert-finetuned-xquad
ritwikm
bert
10
3
transformers
0
question-answering
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
994
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multilingual-bert-finetuned-xquad This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 72 - eval_batch_size: 72 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
c31b576472a94b5002f08195c3d38a38
elopezlopez/distilbert-base-uncased_fold_10_binary_v1
elopezlopez
distilbert
16
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,659
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased_fold_10_binary_v1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6912 - F1: 0.7977 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 288 | 0.4002 | 0.8012 | | 0.4056 | 2.0 | 576 | 0.4372 | 0.8075 | | 0.4056 | 3.0 | 864 | 0.4720 | 0.8071 | | 0.1958 | 4.0 | 1152 | 0.8156 | 0.7980 | | 0.1958 | 5.0 | 1440 | 0.8633 | 0.8055 | | 0.0847 | 6.0 | 1728 | 0.9761 | 0.8041 | | 0.0356 | 7.0 | 2016 | 1.1816 | 0.7861 | | 0.0356 | 8.0 | 2304 | 1.2251 | 0.7918 | | 0.0215 | 9.0 | 2592 | 1.3423 | 0.7798 | | 0.0215 | 10.0 | 2880 | 1.3888 | 0.7913 | | 0.013 | 11.0 | 3168 | 1.2899 | 0.8040 | | 0.013 | 12.0 | 3456 | 1.4247 | 0.8051 | | 0.0049 | 13.0 | 3744 | 1.5436 | 0.7991 | | 0.0061 | 14.0 | 4032 | 1.5762 | 0.7991 | | 0.0061 | 15.0 | 4320 | 1.5461 | 0.7998 | | 0.0054 | 16.0 | 4608 | 1.5622 | 0.8018 | | 0.0054 | 17.0 | 4896 | 1.6658 | 0.7991 | | 0.0021 | 18.0 | 5184 | 1.6765 | 0.7972 | | 0.0021 | 19.0 | 5472 | 1.6864 | 0.7973 | | 0.0052 | 20.0 | 5760 | 1.6303 | 0.8030 | | 0.0029 | 21.0 | 6048 | 1.6631 | 0.7947 | | 0.0029 | 22.0 | 6336 | 1.6571 | 0.8006 | | 0.0027 | 23.0 | 6624 | 1.6729 | 0.7949 | | 0.0027 | 24.0 | 6912 | 1.6931 | 0.7934 | | 0.0001 | 25.0 | 7200 | 1.6912 | 0.7977 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
9197ee48e7d320879fa7734c994ad581
AlexanderPeter/bert-finetuned-ner
AlexanderPeter
bert
12
3
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,172
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0593 - eval_precision: 0.9293 - eval_recall: 0.9485 - eval_f1: 0.9388 - eval_accuracy: 0.9858 - eval_runtime: 120.5431 - eval_samples_per_second: 26.97 - eval_steps_per_second: 3.376 - epoch: 2.0 - step: 3512 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cpu - Datasets 2.2.2 - Tokenizers 0.12.1
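A usage sketch (not part of the original card) with the `transformers` token-classification pipeline; the entity labels follow the conll2003 scheme and the example sentence is arbitrary:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="AlexanderPeter/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

print(ner("Hugging Face is based in New York City."))
```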
e83143feb4dada6800e3e03a01fa1a66
north/t5_xxl_NCC_lm
north
t5
33
11
transformers
0
text2text-generation
true
false
false
apache-2.0
['no', 'nn', 'sv', 'dk', 'is', 'en']
['nbailab/NCC', 'mc4', 'wikipedia']
null
0
0
0
0
0
0
0
[]
false
true
true
8,363
false
The North-T5-models are a set of Norwegian and Scandinavian sequence-to-sequence-models. They build upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation. | |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_| |:-----------|:------------:|:------------:|:------------:|:------------:|:------------:| |North-T5&#8209;NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)|| |North-T5&#8209;NCC&#8209;lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|✔|| ## T5X Checkpoint The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/xxl/norwegian_NCC_plus_English_pluss100k_lm_t5x_xxl/). ## Performance A thorough evaluation of the North-T5 models is planned, and I strongly recommend that external researchers make their own evaluations. The main advantage of the T5-models is their flexibility. Traditionally, encoder-only models (like BERT) excel in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617). |**Model:** | **F1** | |:-----------|:------------| |mT5-base|73.2 | |mBERT-base|78.4 | |NorBERT-base|78.2 | |North-T5-small|80.5 | |nb-bert-base|81.8 | |North-T5-base|85.3 | |North-T5-large|86.7 | |North-T5-xl|88.7 | |North-T5-xxl|91.8| These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test-results from the best model after 10 runs with early stopping and a decaying learning rate. The T5-results are the average of five runs on the evaluation set. The small-model was trained for 10.000 steps, while the rest for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Neither was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 model might actually be a bit sub-optimal. ## Sub-versions of North-T5 The following sub-versions are available. More versions will be available shortly. |**Model** | **Description** | |:-----------|:-------| |**North&#8209;T5&#8209;NCC** |This is the main version. It is trained for an additional 500.000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.| |**North&#8209;T5&#8209;NCC&#8209;lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. 
For tasks such as translation and NLI, it is well documented that a step of unsupervised LM-training before starting the finetuning gives a clear benefit.| ## Fine-tuned versions As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base-models in a Google Colab. Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used. * Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base) * DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base) ## Training details All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking-task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is however usually not very compute intensive, and in most cases it can be performed even with free online training resources. All the main model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks. While the huge models will almost always give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting with finetuning a base-model. The base-models can easily be finetuned on a standard graphics card or a free TPU through Google Colab. All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning. ## Formats All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL-model, are also converted to Transformers/HuggingFace. 
In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow formats. ## Future I will continue to train and release additional models in this set. Which models are added depends on the feedback from the users. ## Thanks This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running. Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition he has been a discussion partner in the creation of these models. Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X. ## Warranty Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases. ## Contact/About These models were trained by Per E Kummervold. Please contact me at per@capia.no.
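As an appendix to this card (not part of the original text): a minimal loading sketch for the converted checkpoints through `transformers`. It uses the base checkpoint for illustration, since this XXL repository is primarily distributed as a T5X checkpoint, and the models are pretrained-only, so the generation below is only an interface check, not a meaningful output.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Shown with the base checkpoint for illustration; swap in another North-T5 repo id as needed.
model_id = "north/t5_base_NCC_lm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Pretrained-only model: this only exercises the interface and needs fine-tuning for real tasks.
inputs = tokenizer("Dette er en test.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```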
45479ae118551c95b1cb9c9950481f91
nlp-waseda/gpt2-small-japanese
nlp-waseda
gpt2
8
3,963
transformers
1
text-generation
true
false
false
cc-by-sa-4.0
['ja']
['wikipedia', 'cc100']
null
0
0
0
0
0
0
0
[]
false
true
true
2,087
false
# nlp-waseda/gpt2-small-japanese This model is Japanese GPT-2 pretrained on Japanese Wikipedia and CC-100. ## Intended uses & limitations You can use the raw model for text generation or fine-tune it to a downstream task. Note that the texts should be segmented into words using Juman++ in advance. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='nlp-waseda/gpt2-small-japanese') >>> set_seed(42) >>> generator("早稲田 大学 で 自然 言語 処理 を", max_length=30, do_sample=True, pad_token_id=2, num_return_sequences=5) [{'generated_text': '早稲田 大学 で 自然 言語 処理 を 学び 、 帰国 後 、 早稲田 大学 理工 学部 に 入学 し ます 。 卒業 後 、 早稲田 大学 工学 研究 科 、'}, {'generated_text': '早稲田 大学 で 自然 言語 処理 を 学び 、 アメリカ の 大学 で 学士 号 を 取得 、 修士 の 取得 で 博士 号 を 取得 。 2008 年'}, {'generated_text': '早稲田 大学 で 自然 言語 処理 を 勉強 して い ます 。 学部 は 日本 語 学科 を 専攻 して い ます 。 英語 が 話せる と いう'}, {'generated_text': '早稲田 大学 で 自然 言語 処理 を 専攻 して いた 。 2011 年 に 第 26 回 日本 化学 会 学生 委員 会 奨励 賞 ( 第 2 年次 審査'}, {'generated_text': '早稲田 大学 で 自然 言語 処理 を 中心 と する 言語 学 研究 を 行って いる 。 東京 都 ・ 豊島 区 の お 見合い 相手 。'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import ReformerTokenizer, GPT2Model tokenizer = ReformerTokenizer.from_pretrained('nlp-waseda/gpt2-small-japanese') model = GPT2Model.from_pretrained('nlp-waseda/gpt2-small-japanese') text = "早稲田 大学 で 自然 言語 処理 を" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Training data The GPT-2 model was pretrained on Japanese Wikipedia, dumped on 2022-03-20, and the Japanese portion of CC-100. ## Training procedure ### Preprocessing The texts are normalized using zenhan, segmented into words using Juman++, and tokenized using SentencePiece. Juman++ 2.0.0-rc3 was used for pretraining. The model was trained on 8 NVIDIA A100 GPUs.
1373b6df7fa491e6ba58189f1ebd1de8
surrey-nlp/roberta-base-finetuned-abbr
surrey-nlp
roberta
12
72
transformers
1
token-classification
true
true
false
mit
null
['surrey-nlp/PLOD-filtered']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,973
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-ner This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the [PLOD-filtered](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) dataset. It achieves the following results on the evaluation set: - Loss: 0.1148 - Precision: 0.9645 - Recall: 0.9583 - F1: 0.9614 - Accuracy: 0.9576 ## Model description RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Intended uses & limitations More information needed ## Training and evaluation data The model is fine-tuned using the [PLOD-Filtered](https://huggingface.co/datasets/surrey-nlp/PLOD-filtered) dataset. This dataset is used for training and evaluating the model. The PLOD Dataset is published at LREC 2022. The dataset can help build sequence labeling models for the task of Abbreviation Detection. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1179 | 1.99 | 7000 | 0.1130 | 0.9602 | 0.9517 | 0.9559 | 0.9522 | | 0.0878 | 3.98 | 14000 | 0.1106 | 0.9647 | 0.9564 | 0.9606 | 0.9567 | | 0.0724 | 5.96 | 21000 | 0.1149 | 0.9646 | 0.9582 | 0.9614 | 0.9576 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.1+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
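A usage sketch (not from the original card), treating abbreviation detection as token classification via the `transformers` pipeline; the example sentence is arbitrary:

```python
from transformers import pipeline

abbr_tagger = pipeline(
    "token-classification",
    model="surrey-nlp/roberta-base-finetuned-abbr",
    aggregation_strategy="simple",  # group sub-word tokens into abbreviation/long-form spans
)

text = "Light dissolved inorganic carbon (DIC) resulted from the oxidation of hydrocarbons."
print(abbr_tagger(text))
```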
be439bc66ca363f215f69aeda0d4b44a
google/t5-efficient-large-nl8
google
t5
12
7
transformers
0
text2text-generation
true
true
true
apache-2.0
['en']
['c4']
null
0
0
0
0
0
0
0
['deep-narrow']
false
true
true
6,253
false
# T5-Efficient-LARGE-NL8 (Deep-Narrow version) T5-Efficient-LARGE-NL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block. ## Details model architecture This model checkpoint - **t5-efficient-large-nl8** - is of model type **Large** with the following variations: - **nl** is **8** It has **267.84** million parameters and thus requires *ca.* **1071.37 MB** of memory in full precision (*fp32*) or **535.69 MB** of memory in half precision (*fp16* or *bf16*). 
A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh | #Params| | ----| ---- | ---- | ---- | ---- | ---- | ----| | Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M| | Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M| | Small | 6/6 | 2048 | 512 | 32 | 8 | 60M| | Base | 12/12 | 3072 | 768 | 64 | 12 | 220M| | Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M| | Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B| | XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B| whereas the following abbreviations are used: | Abbreviation | Definition | | ----| ---- | | nl | Number of transformer blocks (depth) | | dm | Dimension of embedding vector (output vector of transformers block) | | kv | Dimension of key/value projection matrix | | nh | Number of attention heads | | ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) | | el | Number of transformer blocks in the encoder (encoder depth) | | dl | Number of transformer blocks in the decoder (decoder depth) | | sh | Signifies that attention heads are shared | | skv | Signifies that key-values projection matrices are tied | If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*. ## Pre-Training The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using the span-based masked language modeling (MLM) objective. ## Fine-Tuning **Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage. The checkpoint was pretrained in English and is therefore only useful for English NLP tasks. You can follow one of the following examples on how to fine-tune the model: *PyTorch*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) - [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *Tensorflow*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *JAX/Flax*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. ## Downstream Performance TODO: Add table if available ## Computational Complexity TODO: Add table if available ## More information We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint. 
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv* model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
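As a small illustration of the span-corruption interface used during pre-training (a sketch, not an official example), the checkpoint can be loaded and prompted with T5's sentinel tokens; real use still requires fine-tuning as described above:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "google/t5-efficient-large-nl8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# <extra_id_N> are T5's sentinel tokens for the span-corruption (MLM) objective.
inputs = tokenizer("The <extra_id_0> walks in <extra_id_1> park.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```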
c0ab1967bd8f7c127afa049c883b3fea
jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-5_female-5_s286
jonatasgrosman
wav2vec2
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['fr']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'fr']
false
true
true
476
false
# exp_w2v2r_fr_xls-r_gender_male-5_female-5_s286 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
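A minimal transcription sketch with the generic `transformers` ASR pipeline (the card itself points to HuggingSound; this alternative is an assumption on my part, and the audio path is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2r_fr_xls-r_gender_male-5_female-5_s286",
)

# Placeholder path; 16 kHz audio matches the training setup described above.
print(asr("speech_sample_fr.wav")["text"])
```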
39c34f6cb5232329e68f8522a1b76308
dafraile/Clini-dialog-sum-BART
dafraile
bart
14
6
transformers
0
text2text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,115
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tst-summarization This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9975 - Rouge1: 56.239 - Rouge2: 28.9873 - Rougel: 38.5242 - Rougelsum: 53.7902 - Gen Len: 105.2973 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.0 - Datasets 1.18.4 - Tokenizers 0.11.6
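A usage sketch (not part of the original card) with the `transformers` summarization pipeline; the dialogue snippet is invented for illustration:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="dafraile/Clini-dialog-sum-BART")

dialogue = (
    "Doctor: How have you been feeling since the last visit? "
    "Patient: The cough is better, but I still get short of breath when climbing stairs."
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```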
49afe88c6bacbd04491a1705f4a50b2b
sd-concepts-library/001glitch-core
sd-concepts-library
null
15
0
null
16
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,742
false
### 001glitch_core on Stable Diffusion This is the `001glitch_core` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![001glitch_core 0](https://huggingface.co/sd-concepts-library/001glitch-core/resolve/main/concept_images/9.jpeg) ![001glitch_core 1](https://huggingface.co/sd-concepts-library/001glitch-core/resolve/main/concept_images/6.jpeg) ![001glitch_core 2](https://huggingface.co/sd-concepts-library/001glitch-core/resolve/main/concept_images/5.jpeg) ![001glitch_core 3](https://huggingface.co/sd-concepts-library/001glitch-core/resolve/main/concept_images/0.jpeg) ![001glitch_core 4](https://huggingface.co/sd-concepts-library/001glitch-core/resolve/main/concept_images/7.jpeg) ![001glitch_core 5](https://huggingface.co/sd-concepts-library/001glitch-core/resolve/main/concept_images/4.jpeg) ![001glitch_core 6](https://huggingface.co/sd-concepts-library/001glitch-core/resolve/main/concept_images/8.jpeg) ![001glitch_core 7](https://huggingface.co/sd-concepts-library/001glitch-core/resolve/main/concept_images/1.jpeg) ![001glitch_core 8](https://huggingface.co/sd-concepts-library/001glitch-core/resolve/main/concept_images/3.jpeg) ![001glitch_core 9](https://huggingface.co/sd-concepts-library/001glitch-core/resolve/main/concept_images/2.jpeg)
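Besides the notebooks above, recent `diffusers` releases can load the embedding directly. The sketch below assumes a diffusers version that provides `load_textual_inversion` and that the placeholder token is `<001glitch-core>` (check the repo's `token_identifier.txt` for the exact string):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Loads the learned concept embedding from this repository.
pipe.load_textual_inversion("sd-concepts-library/001glitch-core")

image = pipe("a city skyline in the style of <001glitch-core>").images[0]
image.save("glitch_core_skyline.png")
```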
e1ae88bf0f4385b6167dcc56a04e1bc5
jonatasgrosman/exp_w2v2t_pl_vp-100k_s169
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['pl']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'pl']
false
true
true
475
false
# exp_w2v2t_pl_vp-100k_s169 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
140e3eb426a612b4a2fc7a08c1aaee53
brad1141/bert-finetuned-comp2
brad1141
bert
12
15
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,977
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-comp2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9570 - Precision: 0.5169 - Recall: 0.6765 - F1: 0.5820 - Accuracy: 0.5820 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.8434 | 1.0 | 934 | 0.7147 | 0.4475 | 0.6252 | 0.5096 | 0.5096 | | 0.6307 | 2.0 | 1868 | 0.5959 | 0.5058 | 0.6536 | 0.5585 | 0.5585 | | 0.4691 | 3.0 | 2802 | 0.6555 | 0.4761 | 0.6865 | 0.5521 | 0.5521 | | 0.334 | 4.0 | 3736 | 0.7211 | 0.5292 | 0.6682 | 0.5863 | 0.5863 | | 0.2326 | 5.0 | 4670 | 0.8046 | 0.4886 | 0.6865 | 0.5682 | 0.5682 | | 0.1625 | 6.0 | 5604 | 0.8650 | 0.4972 | 0.6851 | 0.5728 | 0.5728 | | 0.1195 | 7.0 | 6538 | 0.9570 | 0.5169 | 0.6765 | 0.5820 | 0.5820 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
177d0fe4d65179b551640f5c858907f3
spacy/ru_core_news_lg
spacy
null
28
38
spacy
3
token-classification
false
false
false
mit
['ru']
null
null
0
0
0
0
0
0
0
['spacy', 'token-classification']
false
true
true
34,922
false
### Details: https://spacy.io/models/ru#ru_core_news_lg Russian pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, senter, ner, attribute_ruler, lemmatizer. | Feature | Description | | --- | --- | | **Name** | `ru_core_news_lg` | | **Version** | `3.5.0` | | **spaCy** | `>=3.5.0,<3.6.0` | | **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `tok2vec`, `morphologizer`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | 500002 keys, 500002 unique vectors (300 dimensions) | | **Sources** | [Nerus](https://github.com/natasha/nerus) (Alexander Kukushkin)<br />[Navec](https://github.com/natasha/navec) (Alexander Kukushkin) | | **License** | `MIT` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (900 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`morphologizer`** | `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Acc\|POS=NUM`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=ADP`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Degree=Pos\|POS=ADV`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|POS=PROPN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NUM`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=NUM`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=DET`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=SCONJ`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Acc\|POS=NUM`, `Case=Ins\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=CCONJ`, `Case=Nom\|POS=NUM`, 
`Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Gender=Masc\|Number=Sing\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Degree=Pos\|Number=Plur\|POS=ADJ\|StyleVariant=Short`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Number=Plur\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Cnd\|POS=SCONJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=Third`, `POS=PART\|Polarity=Neg`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf\|Voice=Mid`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=SPACE`, `Case=Nom\|Number=Plur\|POS=DET`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Number=Plur\|POS=PRON`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=INTJ`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Nom\|Number=Plur\|POS=PRON`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, 
`Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|StyleVariant=Short`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Gen\|POS=PRON`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET`, `Case=Nom\|POS=PRON`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=ADJ`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=First`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|POS=AUX`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=First`, `Case=Gen\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Perf\|POS=VERB\|VerbForm=Inf\|Voice=Mid`, `Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET`, `POS=PART`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Third\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Third\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|StyleVariant=Short`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=Third`, `Aspect=Perf\|Gender=Neut\|Number=Sing\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Third\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PROPN`, 
`Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Foreign=Yes\|POS=X`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Conv\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|POS=NUM`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|POS=NUM`, `Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Third\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=PRON\|Person=Third`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Dat\|POS=PRON`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=Third`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|StyleVariant=Short`, `Degree=Cmp\|POS=ADV`, `Aspect=Perf\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, 
`Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Plur\|POS=DET`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=First\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|POS=NUM`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Gender=Fem\|Number=Sing\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Conv\|Voice=Act`, `Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=Second`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET`, `POS=ADV`, `Case=Acc\|POS=PRON`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Ins\|POS=NUM`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Perf\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|POS=PRON`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, 
`Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=Second\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=Second`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Second\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `POS=SYM`, `Degree=Cmp\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|POS=NUM`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Fem\|POS=NUM`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Degree=Pos\|POS=ADJ`, `Case=Ins\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=Third`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=PRON`, `Animacy=Anim\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=PUNCT\|StyleVariant=Short`, `Case=Ins\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=SCONJ`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=First`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Second\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `POS=NOUN`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=Third`, `Degree=Cmp\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NUM`, `Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Number=Plur\|POS=DET`, `Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|POS=NUM`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Animacy=Anim\|Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, 
`Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|POS=NUM`, `Aspect=Perf\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Case=Par\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Gen\|Number=Plur\|POS=DET\|Person=Third`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADV`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NUM`, `Aspect=Perf\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|POS=NUM`, `Aspect=Imp\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=Third`, `Animacy=Anim\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|POS=NUM`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `POS=ADV\|Polarity=Neg`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|POS=NUM`, `Aspect=Imp\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=Second\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=First`, `Case=Nom\|Gender=Neut\|POS=NUM`, `Case=Gen\|POS=VERB\|Polarity=Neg`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Second\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Number=Plur\|POS=PRON`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=Third`, `Case=Gen\|Number=Plur\|POS=PRON`, `Aspect=Perf\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NUM`, `Aspect=Imp\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `POS=CCONJ\|Polarity=Neg`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET`, 
`Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=PRON\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Conv\|Voice=Mid`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=Second`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Second\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET`, `Animacy=Anim\|Case=Acc\|POS=NUM`, `Aspect=Imp\|Number=Plur\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Gender=Masc\|Number=Sing\|POS=VERB\|StyleVariant=Short\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NUM`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Loc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=First`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=First`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=Second`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=Second\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=First`, `Foreign=Yes\|POS=PUNCT`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=PRON\|Person=Third\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=Third`, `Case=Dat\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|POS=NUM`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=First\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Dat\|Number=Plur\|POS=DET`, `Aspect=Imp\|POS=AUX\|Tense=Pres\|VerbForm=Conv\|Voice=Act`, `Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|POS=PRON`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=Third`, `Aspect=Imp\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=Second\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `POS=PROPN`, `Aspect=Perf\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=Second\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=PROPN`, 
`Case=Ins\|Number=Plur\|POS=PRON\|Person=Second`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=Third`, `Animacy=Anim\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=First`, `Aspect=Imp\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=Third`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PUNCT`, `Animacy=Anim\|Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=PRON\|Person=First`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=Second`, `Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Plur\|POS=PRON`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Gen\|Number=Plur\|POS=DET`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Ins\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADV`, `Foreign=Yes\|POS=PART`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON`, 
`Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=First\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=DET`, `Case=Loc\|Gender=Fem\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Conv\|Voice=Mid`, `Aspect=Imp\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PUNCT`, `Animacy=Anim\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=DET`, `Aspect=Perf\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=Third`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=PUNCT`, `Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=ADV`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=First\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Case=Nom\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=Second\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Loc\|Number=Sing\|POS=PRON\|Person=First`, _(truncated: full list in pipeline meta)_ |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `expl`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `iobj`, `list`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `nummod:entity`, `nummod:gov`, `obj`, `obl`, `obl:agent`, `orphan`, `parataxis`, `punct`, `xcomp` |
| **`ner`** | `LOC`, `ORG`, `PER` |

</details>

### Accuracy

| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.68 |
| `TOKEN_P` | 97.28 |
| `TOKEN_R` | 98.31 |
| `TOKEN_F` | 97.79 |
| `POS_ACC` | 98.93 |
| `MORPH_ACC` | 97.49 |
| `MORPH_MICRO_P` | 98.97 |
| `MORPH_MICRO_R` | 98.30 |
| `MORPH_MICRO_F` | 98.64 |
| `SENTS_P` | 99.87 |
| `SENTS_R` | 99.85 |
| `SENTS_F` | 99.86 |
| `DEP_UAS` | 96.22 |
| `DEP_LAS` | 95.12 |
| `TAG_ACC` | 98.93 |
| `LEMMA_ACC` | 0.00 |
| `ENTS_P` | 95.24 |
| `ENTS_R` | 95.35 |
| `ENTS_F` | 95.30 |
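The label scheme and scores above are also stored in the packaged pipeline's `meta.json`, so they can be read programmatically. A minimal sketch with spaCy, assuming the pipeline package named earlier in this card is installed; the load name below is a placeholder, and the sample sentence assumes a Russian-language pipeline:

```python
import spacy

# Placeholder load name: replace with the pipeline package named in this model card.
nlp = spacy.load("xx_pipeline_name")

# Label scheme per component (morphologizer, parser, ner, ...)
for component, labels in nlp.pipe_labels.items():
    print(component, len(labels), "labels")

# Evaluation scores shipped with the package (TOKEN_ACC, DEP_UAS, ENTS_F, ...)
print(nlp.meta.get("performance", {}))

# Named entities use the LOC / ORG / PER scheme listed above
doc = nlp("Москва - столица России.")  # assumes a Russian-language pipeline
print([(ent.text, ent.label_) for ent in doc.ents])
```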
ab4353c0e75b19da665b7afbf6570ad7
facebook/levit-384
facebook
levit
5
74
transformers
0
image-classification
true
false
false
apache-2.0
null
['imagenet-1k']
null
1
0
0
1
0
0
0
['vision', 'image-classification']
false
true
true
1,290
false
# LeViT

LeViT-384 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Graham et al. and first released in [this repository](https://github.com/facebookresearch/LeViT).

Disclaimer: The team releasing LeViT did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Usage

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import LevitFeatureExtractor, LevitForImageClassificationWithTeacher
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-384')
model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-384')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
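For quick experiments, the same classification can also be run through the high-level `image-classification` pipeline. A minimal sketch; note that the pipeline loads the plain classification head rather than the `WithTeacher` variant used above, so scores may differ slightly:

```python
from transformers import pipeline

# Loads facebook/levit-384 with the standard image-classification head
classifier = pipeline("image-classification", model="facebook/levit-384")

# The pipeline accepts URLs, local file paths, or PIL images
preds = classifier("http://images.cocodataset.org/val2017/000000039769.jpg", top_k=3)
for pred in preds:
    print(f"{pred['label']}: {pred['score']:.3f}")
```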
39067912c26d54f244979b569f05b0df
aiautomationlab/german-news-title-gen-mt5
aiautomationlab
mt5
9
102
transformers
2
summarization
true
true
false
mit
['de']
null
null
0
0
0
0
1
0
1
['summarization', 'arxiv:2005.00661', 'arxiv:2111.09525', 'arxiv:2112.08542', 'arxiv:2104.04302', 'arxiv:2109.09209']
false
true
true
13,250
false
# German news title gen

This is a model for the task of news headline generation in German. While this task is very similar to summarization, there remain differences in length, structure, and language style, which cause state-of-the-art summarization models not to be best suited for headline generation and demand further fine-tuning on this task. For this model, [mT5-base](https://huggingface.co/google/mt5-base) by Google is used as a foundation model.

**The model is still work in progress.**

## Dataset & preprocessing

The model was finetuned on a corpus of news articles from [BR24](https://www.br.de/) published between 2015 and 2021. The texts are in German and cover a range of different news topics like politics, sports, and culture, with a focus on topics that are relevant to the people living in Bavaria (Germany).

In a preprocessing step, article-headline pairs matching any of the following criteria were filtered out:

- very short articles (number of words in the text lower than 3x the number of words in the headline)
- articles with headlines containing only words that are not contained in the text (lemmatized and excluding stopwords)
- articles with headlines that are just the name of a known text format (e.g. "Das war der Tag", a format summarizing the most important topics of the day)

Further, the prefix `summarize: ` was added to all articles to make use of the pretrained summarization capabilities of mT5. After filtering, the corpus contained 89098 article-headline pairs, of which 87306 were used for training, 902 for validation, and 890 for testing.

## Training

After multiple test runs of finetuning, the present model was further trained using the following parameters:

- foundation-model: mT5-base
- input_prefix: "summarize: "
- num_train_epochs: 10
- learning_rate: 5e-5
- warmup_ratio: 0.3
- lr_scheduler_type: constant_with_warmup
- per_device_train_batch_size: 3
- gradient_accumulation_steps: 2
- fp16: False

Every 5000 steps a checkpoint is stored and evaluated on the validation set. After the training, the checkpoint with the best cross-entropy loss on the validation set is saved as the final model.

## Usage

Because the model was fine-tuned from mT5, the usage is analogous to the T5 model ([see docs](https://huggingface.co/docs/transformers/model_doc/t5)). Another option for using the model for inference is the Hugging Face [summarization pipeline](https://huggingface.co/docs/transformers/v4.23.1/en/main_classes/pipelines#transformers.SummarizationPipeline). In both cases the prefix `summarize: ` has to be added to the input texts. For obtaining higher-quality headlines it is recommended to increase the beam size for generation. In the evaluations conducted for this model a beam size of 5 was used.

### Example: Direct model evaluation

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "aiautomationlab/german-news-title-gen-mt5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Als Reaktion auf die Brandserie wurde am Mittwoch bei der Kriminalpolizei Würzburg eine Ermittlungskommission eingerichtet. Ich habe den Eindruck, der Brandstifter wird dreister, kommentiert Rosalinde Schraud, die Bürgermeisterin von Estenfeld, die Brandserie. Gerade die letzten beiden Brandstiftungen seien ungewöhnlich gewesen, da sie mitten am Tag und an frequentierten Straßen stattgefunden haben.Kommt der Brandstifter aus Estenfeld?Norbert Walz ist das letzte Opfer des Brandstifters von Estenfeld. Ein Unbekannter hat am Dienstagnachmittag sein Gartenhaus angezündet.Was da in seinem Kopf herumgeht, was da passiert – das ist ja unglaublich! Das kann schon jemand aus dem Ort sein, weil sich derjenige auskennt.Norbert Walz aus Estenfeld.Dass es sich beim Brandstifter wohl um einen Bürger ihrer Gemeinde handele, will die erste Bürgermeisterin von Estenfeld, Rosalinde Schraud, nicht bestätigen: In der Bevölkerung gibt es natürlich Spekulationen, an denen ich mich aber nicht beteiligen will. Laut Schraud reagiert die Bürgerschaft mit vermehrter Aufmerksamkeit auf die Brände: Man guckt mehr in die Nachbarschaft. Aufhören wird die Brandserie wohl nicht, solange der Täter nicht gefasst wird.Es wäre nicht ungewöhnlich, dass der Täter aus der Umgebung von Estenfeld stammt. Wir bitten deshalb Zeugen, die sachdienliche Hinweise sowohl zu den Bränden geben können, sich mit unserer Kriminalpolizei in Verbindung zu setzen.Philipp Hümmer, Sprecher des Polizeipräsidiums UnterfrankenFür Hinweise, die zur Ergreifung des Täters führen, hat das Bayerische Landeskriminalamt eine Belohnung von 2.000 Euro ausgesetzt."

input_text = "summarize: " + text
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, num_beams=5)
generated_headline = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_headline)
```

### Example: Model evaluation using the Hugging Face pipeline

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

model_id = "aiautomationlab/german-news-title-gen-mt5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

headline_generator = pipeline(
    "summarization",
    model=model,
    tokenizer=tokenizer,
    num_beams=5
)

text = "Als Reaktion auf die Brandserie wurde am Mittwoch bei der Kriminalpolizei Würzburg eine Ermittlungskommission eingerichtet. Ich habe den Eindruck, der Brandstifter wird dreister, kommentiert Rosalinde Schraud, die Bürgermeisterin von Estenfeld, die Brandserie. Gerade die letzten beiden Brandstiftungen seien ungewöhnlich gewesen, da sie mitten am Tag und an frequentierten Straßen stattgefunden haben.Kommt der Brandstifter aus Estenfeld?Norbert Walz ist das letzte Opfer des Brandstifters von Estenfeld. Ein Unbekannter hat am Dienstagnachmittag sein Gartenhaus angezündet.Was da in seinem Kopf herumgeht, was da passiert – das ist ja unglaublich! Das kann schon jemand aus dem Ort sein, weil sich derjenige auskennt.Norbert Walz aus Estenfeld.Dass es sich beim Brandstifter wohl um einen Bürger ihrer Gemeinde handele, will die erste Bürgermeisterin von Estenfeld, Rosalinde Schraud, nicht bestätigen: In der Bevölkerung gibt es natürlich Spekulationen, an denen ich mich aber nicht beteiligen will. Laut Schraud reagiert die Bürgerschaft mit vermehrter Aufmerksamkeit auf die Brände: Man guckt mehr in die Nachbarschaft. Aufhören wird die Brandserie wohl nicht, solange der Täter nicht gefasst wird.Es wäre nicht ungewöhnlich, dass der Täter aus der Umgebung von Estenfeld stammt. Wir bitten deshalb Zeugen, die sachdienliche Hinweise sowohl zu den Bränden geben können, sich mit unserer Kriminalpolizei in Verbindung zu setzen.Philipp Hümmer, Sprecher des Polizeipräsidiums UnterfrankenFür Hinweise, die zur Ergreifung des Täters führen, hat das Bayerische Landeskriminalamt eine Belohnung von 2.000 Euro ausgesetzt."

input_text = "summarize: " + text
generated_headline = headline_generator(input_text)[0]["summary_text"]
print(generated_headline)
```

## Limitations

Like most state-of-the-art summarization models, this model has issues with the factuality of the generated texts [^factuality]. **It is therefore strongly advised to have a human fact-check the generated headlines.**

An analysis of possible biases reproduced by the present model, regardless of whether they originate from our finetuning or the underlying mT5 model, is beyond the scope of this work. We assume that biases exist within the model, and an analysis will be a task for future work.

As the model was trained on news articles from the time range 2015-2021, further biases and factual errors could emerge due to topic shifts in news articles and changes in the (e.g. political) situation.

## Evaluation

The model was evaluated on a held-out test set consisting of 890 article-headline pairs. For each model the headlines were generated using beam search with a beam width of 5.

### Quantitative

| model | Rouge1 | Rouge2 | RougeL | RougeLsum |
|-|-|-|-|-|
| [T-Systems-onsite/mt5-small-sum-de-en-v2](https://huggingface.co/T-Systems-onsite/mt5-small-sum-de-en-v2) | 0.107 | 0.0297 | 0.098 | 0.098 |
| aiautomationlab/german-news-title-gen-mt5 | 0.3131 | 0.0873 | 0.1997 | 0.1997 |

For evaluating the factuality of the generated headlines with respect to the input text, we use 3 state-of-the-art metrics for summary evaluation (the parameters were chosen according to the recommendations from the respective papers or GitHub repositories). Because these metrics are only available for the English language, the texts and generated headlines were translated from German to English using the [DeepL API](https://www.deepl.com/en/docs-api/) in an additional preprocessing step for this factuality evaluation.

- **SummaC-CZ** [^summac]
  Yields a score between -1 and 1, representing the difference between entailment probability and contradiction probability (-1: the headline is not entailed in the text and is completely contradicted by it, 1: the headline is fully entailed in the text and not contradicted by it).
  Parameters:
  - `model_name`: [vitc](https://huggingface.co/tals/albert-xlarge-vitaminc-mnli)
- **QAFactEval** [^qafacteval]
  Using the Lerc Quip score, which is reported to perform best in the corresponding paper. The score yields a value between 0 and 5 representing the overlap between answers based on the headline and text to questions generated from the headline (0: no overlap, 5: perfect overlap).
  Parameters:
  - `use_lerc_quip`: True
- **DAE (dependency arc entailment)** [^dae]
  Yields a binary value of either 0 or 1, representing whether all dependency arcs in the headline are entailed in the text (0: at least one dependency arc is not entailed, 1: all dependency arcs are entailed).
  Parameters:
  - model checkpoint: DAE_xsum_human_best_ckpt
  - `model_type`: model_type
  - `max_seq_length`: 512

Each metric is calculated for all article-headline pairs in the test set and the respective mean score over the test set is reported.

| model | SummacCZ | QAFactEval | DAE |
|-|-|-|-|
| [T-Systems-onsite/mt5-small-sum-de-en-v2](https://huggingface.co/T-Systems-onsite/mt5-small-sum-de-en-v2) | 0.6969 | 3.3023 | 0.8292 |
| aiautomationlab/german-news-title-gen-mt5 | 0.4419 | 1.9265 | 0.7438 |

It can be observed that our model scores consistently lower than the T-Systems one. Following human evaluation, it seems that, to match the structure and style specific to headlines, the headline generation model has to be more abstractive than a model for summarization, which leads to a higher frequency of hallucinations in the generated output.

### Qualitative

A qualitative evaluation conducted by members of the BR AI + Automation Lab showed that the model succeeds in producing headlines that match the language and style of news headlines, but also confirms that there are issues with the factual consistency common to state-of-the-art summarization models.

## Future work

Future work on this model will focus on generating headlines with higher factual consistency regarding the text. Ideas to achieve this goal include:

- Use of coreference resolution as an additional preprocessing step for making the relations within the text more explicit to the model.
- Use of contrastive learning [^contrastive_learning].
- Use of different models for different news topics; as different topics seem to be prone to different types of errors, more specialized models may be able to improve performance.
- Use of factuality metric models for reranking beam search candidates in the generation step.
- An analysis of the biases included in the model.

[^factuality]: Maynez, Joshua, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. “On Faithfulness and Factuality in Abstractive Summarization.” In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 1906–19. Online: Association for Computational Linguistics, 2020. https://doi.org/10.18653/v1/2020.acl-main.173.

[^summac]: Laban, Philippe, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. “SummaC: Re-Visiting NLI-Based Models for Inconsistency Detection in Summarization.” Transactions of the Association for Computational Linguistics 10 (February 9, 2022): 163–77. https://doi.org/10.1162/tacl_a_00453. Code: https://github.com/tingofurro/summac

[^qafacteval]: Fabbri, Alexander R., Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. “QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization.” arXiv, April 29, 2022. https://doi.org/10.48550/arXiv.2112.08542. Code: https://github.com/salesforce/QAFactEval

[^dae]: Goyal, Tanya, and Greg Durrett. “Annotating and Modeling Fine-Grained Factuality in Summarization.” arXiv, April 9, 2021. http://arxiv.org/abs/2104.04302. Code: https://github.com/tagoyal/factuality-datasets

[^contrastive_learning]: Cao, Shuyang, and Lu Wang. “CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization.” In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 6633–49. Online and Punta Cana, Dominican Republic: Association for Computational Linguistics, 2021. https://doi.org/10.18653/v1/2021.emnlp-main.532.
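As a supplement to the quantitative evaluation above (not part of the original card), ROUGE scores for generated headlines can in principle be reproduced with the Hugging Face `evaluate` library; the headline strings below are placeholders, not the actual test data or evaluation script:

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder data: in the evaluation above, these would be the generated and
# reference headlines for the 890 article-headline pairs of the held-out test set.
predictions = ["Brandserie in Estenfeld: Kripo richtet Ermittlungskommission ein"]
references = ["Brandserie in Estenfeld: Belohnung von 2.000 Euro ausgesetzt"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1, rouge2, rougeL, rougeLsum
```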
7fbb800f7b83bd1c8e66febf5cb29902
jonatasgrosman/exp_w2v2r_en_xls-r_accent_us-0_england-10_s35
jonatasgrosman
wav2vec2
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['en']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'en']
false
true
true
475
false
# exp_w2v2r_en_xls-r_accent_us-0_england-10_s35

Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
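As a rough usage sketch (not part of the original card), transcription with the HuggingSound tool mentioned above could look like this; the audio path is a placeholder and 16kHz input is assumed:

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_en_xls-r_accent_us-0_england-10_s35")

# Placeholder path: any 16kHz audio file(s) to transcribe
audio_paths = ["/path/to/sample.wav"]

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```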
b56f78cdcaeca55a109df5eb2aaab191
sentence-transformers/allenai-specter
sentence-transformers
bert
13
2,583
sentence-transformers
5
sentence-similarity
true
true
false
apache-2.0
null
null
null
1
1
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
true
true
2,574
false
# allenai-specter

This model is a conversion of the [AllenAI SPECTER](https://github.com/allenai/specter) model to [sentence-transformers](https://www.SBERT.net). It can be used to map the titles & abstracts of scientific publications to a vector space such that similar papers are close.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/allenai-specter')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


def cls_pooling(model_output, attention_mask):
    return model_output[0][:,0]


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/allenai-specter')
model = AutoModel.from_pretrained('sentence-transformers/allenai-specter')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, CLS pooling (the embedding of the first token).
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/allenai-specter)

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

See [AllenAI SPECTER](https://github.com/allenai/specter)
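As a supplementary sketch (not from the original card), paper-to-paper similarity can be computed from these embeddings with cosine similarity. It assumes the SPECTER convention of concatenating title and abstract with a `[SEP]` token; the paper snippets below are made up:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/allenai-specter')

# Made-up "title [SEP] abstract" strings, following the SPECTER input convention
papers = [
    "Paper about transformers [SEP] We study attention-based sequence models ...",
    "Paper about dependency parsing [SEP] We present a neural dependency parser ...",
    "Paper about attention heads [SEP] We analyze attention heads in sequence models ...",
]

embeddings = model.encode(papers, convert_to_tensor=True)

# Cosine similarity of the first paper to the other two
scores = util.cos_sim(embeddings[0], embeddings[1:])
print(scores)
```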
3d5e5b7724896c39023a4c26bfabcb53