| Column | Type | Range / classes |
|:--|:--|:--|
| repo_id | string | length 4 to 110 |
| author | string | length 2 to 27 |
| model_type | string | length 2 to 29 |
| files_per_repo | int64 | 2 to 15.4k |
| downloads_30d | int64 | 0 to 19.9M |
| library | string | length 2 to 37 |
| likes | int64 | 0 to 4.34k |
| pipeline | string | length 5 to 30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | length 2 to 30 |
| languages | string | length 4 to 1.63k |
| datasets | string | length 2 to 2.58k |
| co2 | string | 29 classes |
| prs_count | int64 | 0 to 125 |
| prs_open | int64 | 0 to 120 |
| prs_merged | int64 | 0 to 15 |
| prs_closed | int64 | 0 to 28 |
| discussions_count | int64 | 0 to 218 |
| discussions_open | int64 | 0 to 148 |
| discussions_closed | int64 | 0 to 70 |
| tags | string | length 2 to 513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401 to 598k |
| is_nc | bool | 1 class |
| readme | string | length 0 to 598k |
| hash | string | length 32 to 32 |
mediabiasgroup/DA-RoBERTa-BABE
mediabiasgroup
roberta
9
11
transformers
0
text-classification
true
false
false
afl-3.0
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
692
false
# Please cite as

```
@InProceedings{Spinde2021f,
  title     = "Neural Media Bias Detection Using Distant Supervision With {BABE} - Bias Annotations By Experts",
  author    = "Spinde, Timo and Plank, Manuel and Krieger, Jan-David and Ruas, Terry and Gipp, Bela and Aizawa, Akiko",
  booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
  month     = nov,
  year      = "2021",
  address   = "Punta Cana, Dominican Republic",
  publisher = "Association for Computational Linguistics",
  url       = "https://aclanthology.org/2021.findings-emnlp.101",
  doi       = "10.18653/v1/2021.findings-emnlp.101",
  pages     = "1166--1177",
}
```
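The record above lists `transformers` as the library and `text-classification` as the pipeline; a minimal usage sketch under those assumptions (the example sentence is illustrative, and the label names are whatever the model config defines, not documented by the card):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mediabiasgroup/DA-RoBERTa-BABE")
# label names come from the model's own config; they are not documented in the card above
print(classifier("The senator's reckless plan will wreck the economy."))
```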
83f5ab1ffb9b32b1c6d8fda3609d0fd7
kornwtp/ConGen-BERT-Small
kornwtp
bert
8
2
sentence-transformers
0
sentence-similarity
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
true
true
1,432
false
# kornwtp/ConGen-BERT-Small

This is a [ConGen](https://github.com/KornWtp/ConGen) model: it maps sentences to a 512-dimensional dense vector space and can be used for tasks like semantic search.

## Usage

Using this model becomes easy when you have [ConGen](https://github.com/KornWtp/ConGen) installed:

```
pip install -U git+https://github.com/KornWtp/ConGen.git
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('kornwtp/ConGen-BERT-Small')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [Semantic Textual Similarity](https://github.com/KornWtp/ConGen#main-results---sts)

## Citing & Authors

```bibtex
@inproceedings{limkonchotiwat-etal-2022-congen,
  title     = "{ConGen}: Unsupervised Control and Generalization Distillation For Sentence Representation",
  author    = "Limkonchotiwat, Peerat and Ponwitayarat, Wuttikorn and Lowphansirikul, Lalita and Udomcharoenchaikit, Can and Chuangsuwanich, Ekapol and Nutanong, Sarana",
  booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
  year      = "2022",
  publisher = "Association for Computational Linguistics",
}
```
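The card above mentions semantic search as a target task; a small, hedged sketch of how that could look with the same sentence-transformers API (the corpus and query sentences are made up):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('kornwtp/ConGen-BERT-Small')

corpus = ["A man is eating food.", "A cheetah chases its prey.", "The new movie is awesome."]
query = "A fast animal runs after another animal."

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# rank corpus sentences by cosine similarity to the query
scores = util.cos_sim(query_emb, corpus_emb)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```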
123dcc54441f40da08e09e7484b4df1f
FritzOS/TEdetection_distiBERT_mLM_V2_shuffleplus3
FritzOS
distilbert
4
2
transformers
0
fill-mask
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,366
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# TEdetection_distiBERT_mLM_V2_shuffleplus3

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 208018, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

### Framework versions

- Transformers 4.19.2
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
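The optimizer dictionary above describes AdamWeightDecay with a 1000-step warm-up into a polynomial (linear) decay over 208018 steps. A rough reconstruction with the `create_optimizer` helper from `transformers` is sketched below; the card does not say which helper the authors actually used, so treat this as an assumption:

```python
from transformers import TFAutoModelForMaskedLM, create_optimizer

# AdamWeightDecay + WarmUp(1000 steps) + linear PolynomialDecay over 208018 steps,
# matching the serialized optimizer config in the card as closely as possible
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=208_018,
    num_warmup_steps=1_000,
    weight_decay_rate=0.01,
)

model = TFAutoModelForMaskedLM.from_pretrained("FritzOS/TEdetection_distiBERT_mLM_V2_shuffleplus3")
model.compile(optimizer=optimizer)  # ready for further masked-LM fine-tuning
```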
bdf5e35c403c7efb50da8153691d4a01
coreml/coreml-Analog-Diffusion
coreml
null
6
0
null
3
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
0
1
0
1
1
0
['coreml', 'stable-diffusion', 'text-to-image']
false
true
true
1,865
false
# Core ML Converted Model

This model was converted to Core ML for use on Apple Silicon devices by following Apple's instructions [here](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).<br>
Provide the model to an app such as [Mochi Diffusion](https://github.com/godly-devotion/MochiDiffusion) to generate images.<br>
The `split_einsum` version is compatible with all compute unit options, including the Neural Engine.<br>
The `original` version is only compatible with the CPU & GPU compute unit option.

**Analog Diffusion**

![Header](https://huggingface.co/wavymulder/Analog-Diffusion/resolve/main/images/page1.jpg)

[*CKPT DOWNLOAD LINK*](https://huggingface.co/wavymulder/Analog-Diffusion/resolve/main/analog-diffusion-1.0.ckpt) - This is a dreambooth model trained on a diverse set of analog photographs.

In your prompt, use the activation token: `analog style`

You may need to use the words `blur` `haze` `naked` in your negative prompts. My dataset did not include any NSFW material but the model seems to be pretty horny. Note that using `blur` and `haze` in your negative prompt can give a sharper image but also a less pronounced analog film effect.

Trained from 1.5 with VAE.

Please see [this document where I share the parameters (prompt, sampler, seed, etc.) used for all example images.](https://huggingface.co/wavymulder/Analog-Diffusion/resolve/main/parameters_used_examples.txt)

## Gradio

We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Analog-Diffusion: [Open in Spaces](https://huggingface.co/spaces/akhaliq/Analog-Diffusion)

![Environments Example](https://huggingface.co/wavymulder/Analog-Diffusion/resolve/main/images/page2.jpg)
![Characters Example](https://huggingface.co/wavymulder/Analog-Diffusion/resolve/main/images/page3.jpg)

Here's a [link to non-cherrypicked batches.](https://imgur.com/a/7iOgTFv)
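The Core ML files in this repo are meant for apps such as Mochi Diffusion; for completeness, a hedged sketch of using the original PyTorch checkpoint linked above (`wavymulder/Analog-Diffusion`) with diffusers, applying the `analog style` activation token and the suggested negative prompt (the prompt itself is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "wavymulder/Analog-Diffusion", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "analog style portrait of a woman in a 1970s kitchen",  # illustrative prompt
    negative_prompt="blur, haze",
    num_inference_steps=30,
).images[0]
image.save("analog_portrait.png")
```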
33390b8f0a80a7d072a48c74e21d3fff
OAOA/DifFace
OAOA
null
35
82
diffusers
0
null
true
false
false
other
null
null
null
0
0
0
0
0
0
0
['pytorch', 'diffusers', 'face image enhancement']
false
true
true
3,282
false
# DifFace: Blind Face Restoration with Diffused Error Contraction

**Paper**: [DifFace: Blind Face Restoration with Diffused Error Contraction](https://arxiv.org/abs/2212.06512)

**Authors**: Zongsheng Yue, Chen Change Loy

**Abstract**:

*While deep learning-based methods for blind face restoration have achieved unprecedented success, they still suffer from two major limitations. First, most of them deteriorate when facing complex degradations out of their training data. Second, these methods require multiple constraints, e.g., fidelity, perceptual, and adversarial losses, which require laborious hyper-parameter tuning to stabilize and balance their influences. In this work, we propose a novel method named DifFace that is capable of coping with unseen and complex degradations more gracefully without complicated loss designs. The key of our method is to establish a posterior distribution from the observed low-quality (LQ) image to its high-quality (HQ) counterpart. In particular, we design a transition distribution from the LQ image to the intermediate state of a pre-trained diffusion model and then gradually transmit from this intermediate state to the HQ target by recursively applying a pre-trained diffusion model. The transition distribution only relies on a restoration backbone that is trained with L2 loss on some synthetic data, which favorably avoids the cumbersome training process in existing methods. Moreover, the transition distribution can contract the error of the restoration backbone and thus makes our method more robust to unknown degradations. Comprehensive experiments show that DifFace is superior to current state-of-the-art methods, especially in cases with severe degradations.*

## Inference

```python
# !pip install diffusers
import cv2
from diffusers import DifFacePipeline

model_id = "OAOA/DifFace"

# load model and scheduler
pipe = DifFacePipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")

im_lr = cv2.imread(im_path)  # read the low-quality face image from a path of your choice
im_sr = pipe(im_lr, num_inference_steps=250, started_steps=100, aligned=True)['images'][0]
im_sr.save("restored_difface.png")  # save the restored result
```

<!--For more in-detail information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)-->

## Training

If you want to train your own model, please have a look at the [official training example](https://github.com/zsyOAOA/DifFace).

## Samples

[<img src="assets/Solvay_conference.png" width="805px"/>](https://imgsli.com/MTM5NTgw)
[<img src="assets/Hepburn.png" height="555px" width="400px"/>](https://imgsli.com/MTM5NTc5)
[<img src="assets/oldimg_05.png" height="555px" width="400px"/>](https://imgsli.com/MTM5NTgy)

<img src="cropped_faces/0368.png" height="200px" width="200px"/><img src="assets/0368.png" height="200px" width="200px"/>
<img src="cropped_faces/0885.png" height="200px" width="200px"/><img src="assets/0885.png" height="200px" width="200px"/>
<img src="cropped_faces/0729.png" height="200px" width="200px"/><img src="assets/0729.png" height="200px" width="200px"/>
<img src="cropped_faces/0934.png" height="200px" width="200px"/><img src="assets/0934.png" height="200px" width="200px"/>
cd6cae50085efaf60bb501b97ab5c210
SEUNGWON1/distilroberta-base-finetuned-wikitext2
SEUNGWON1
roberta
9
4
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,267
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilroberta-base-finetuned-wikitext2

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.8340

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0843        | 1.0   | 2406 | 1.9226          |
| 1.9913        | 2.0   | 4812 | 1.8820          |
| 1.9597        | 3.0   | 7218 | 1.8214          |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
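The reported evaluation loss of 1.8340 corresponds to a perplexity of roughly exp(1.834) ≈ 6.3. Since this is a fill-mask checkpoint, a minimal usage sketch (the test sentence is illustrative):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="SEUNGWON1/distilroberta-base-finetuned-wikitext2")
# RoBERTa-style models use the <mask> placeholder token
for pred in fill("The capital of France is <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```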
204d417a3d37fc79af071a9f54f15d96
irfan-noordin/segformer-b0-finetuned-segments-sidewalk-oct-22
irfan-noordin
segformer
9
9
transformers
0
image-segmentation
true
false
false
other
null
null
null
0
0
0
0
0
0
0
['vision', 'image-segmentation', 'generated_from_trainer']
true
true
true
78,218
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-finetuned-segments-sidewalk-oct-22 This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset. It achieves the following results on the evaluation set: - Loss: 0.9249 - Mean Iou: 0.1675 - Mean Accuracy: 0.2109 - Overall Accuracy: 0.7776 - Accuracy Unlabeled: nan - Accuracy Flat-road: 0.8631 - Accuracy Flat-sidewalk: 0.9423 - Accuracy Flat-crosswalk: 0.0 - Accuracy Flat-cyclinglane: 0.4704 - Accuracy Flat-parkingdriveway: 0.1421 - Accuracy Flat-railtrack: 0.0 - Accuracy Flat-curb: 0.0061 - Accuracy Human-person: 0.0 - Accuracy Human-rider: 0.0 - Accuracy Vehicle-car: 0.8937 - Accuracy Vehicle-truck: 0.0 - Accuracy Vehicle-bus: 0.0 - Accuracy Vehicle-tramtrain: 0.0 - Accuracy Vehicle-motorcycle: 0.0 - Accuracy Vehicle-bicycle: 0.0 - Accuracy Vehicle-caravan: 0.0 - Accuracy Vehicle-cartrailer: 0.0 - Accuracy Construction-building: 0.9143 - Accuracy Construction-door: 0.0 - Accuracy Construction-wall: 0.0055 - Accuracy Construction-fenceguardrail: 0.0 - Accuracy Construction-bridge: 0.0 - Accuracy Construction-tunnel: nan - Accuracy Construction-stairs: 0.0 - Accuracy Object-pole: 0.0 - Accuracy Object-trafficsign: 0.0 - Accuracy Object-trafficlight: 0.0 - Accuracy Nature-vegetation: 0.9291 - Accuracy Nature-terrain: 0.8710 - Accuracy Sky: 0.9207 - Accuracy Void-ground: 0.0 - Accuracy Void-dynamic: 0.0 - Accuracy Void-static: 0.0 - Accuracy Void-unclear: 0.0 - Iou Unlabeled: nan - Iou Flat-road: 0.6127 - Iou Flat-sidewalk: 0.8192 - Iou Flat-crosswalk: 0.0 - Iou Flat-cyclinglane: 0.4256 - Iou Flat-parkingdriveway: 0.1262 - Iou Flat-railtrack: 0.0 - Iou Flat-curb: 0.0061 - Iou Human-person: 0.0 - Iou Human-rider: 0.0 - Iou Vehicle-car: 0.6655 - Iou Vehicle-truck: 0.0 - Iou Vehicle-bus: 0.0 - Iou Vehicle-tramtrain: 0.0 - Iou Vehicle-motorcycle: 0.0 - Iou Vehicle-bicycle: 0.0 - Iou Vehicle-caravan: 0.0 - Iou Vehicle-cartrailer: 0.0 - Iou Construction-building: 0.5666 - Iou Construction-door: 0.0 - Iou Construction-wall: 0.0054 - Iou Construction-fenceguardrail: 0.0 - Iou Construction-bridge: 0.0 - Iou Construction-tunnel: nan - Iou Construction-stairs: 0.0 - Iou Object-pole: 0.0 - Iou Object-trafficsign: 0.0 - Iou Object-trafficlight: 0.0 - Iou Nature-vegetation: 0.7875 - Iou Nature-terrain: 0.6912 - Iou Sky: 0.8218 - Iou Void-ground: 0.0 - Iou Void-dynamic: 0.0 - Iou Void-static: 0.0 - Iou Void-unclear: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy 
Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:| | 2.832 | 0.05 | 20 | 3.1768 | 0.0700 | 0.1095 | 0.5718 | nan | 0.1365 | 0.9472 | 0.0019 | 0.0006 | 0.0004 | 0.0 | 0.0205 | 0.0 | 0.0 | 0.2074 | 0.0 | 0.0 | 0.0 | 0.0017 | 0.0001 | 0.0 | 0.0 | 0.7360 | 0.0 | 0.0235 | 0.0050 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9559 | 0.0429 | 0.5329 | 0.0010 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1260 | 0.5906 | 0.0016 | 0.0006 | 0.0004 | 0.0 | 0.0175 | 0.0 | 0.0 | 0.2006 | 0.0 | 0.0 | 0.0 | 0.0003 | 
0.0001 | 0.0 | 0.0 | 0.3729 | 0.0 | 0.0209 | 0.0044 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5778 | 0.0408 | 0.4932 | 0.0009 | 0.0 | 0.0 | 0.0 | | 2.3224 | 0.1 | 40 | 2.4686 | 0.0885 | 0.1321 | 0.6347 | nan | 0.5225 | 0.9260 | 0.0005 | 0.0001 | 0.0006 | 0.0 | 0.0113 | 0.0 | 0.0 | 0.3738 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8191 | 0.0 | 0.0263 | 0.0012 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9649 | 0.0701 | 0.6434 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4240 | 0.6602 | 0.0005 | 0.0001 | 0.0006 | 0.0 | 0.0109 | 0.0 | 0.0 | 0.3292 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3962 | 0.0 | 0.0260 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6019 | 0.0617 | 0.5862 | 0.0001 | 0.0 | 0.0 | 0.0 | | 2.1961 | 0.15 | 60 | 1.9886 | 0.0988 | 0.1431 | 0.6500 | nan | 0.5168 | 0.9319 | 0.0 | 0.0001 | 0.0000 | 0.0 | 0.0032 | 0.0 | 0.0 | 0.5761 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8325 | 0.0 | 0.0132 | 0.0003 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9612 | 0.1260 | 0.7625 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.3929 | 0.6721 | 0.0 | 0.0001 | 0.0000 | 0.0 | 0.0032 | 0.0 | 0.0 | 0.4609 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4375 | 0.0 | 0.0131 | 0.0002 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6342 | 0.1108 | 0.6353 | 0.0 | 0.0 | 0.0 | 0.0 | | 2.2964 | 0.2 | 80 | 2.0597 | 0.1066 | 0.1503 | 0.6682 | nan | 0.6577 | 0.9207 | 0.0 | 0.0000 | 0.0002 | 0.0 | 0.0044 | 0.0 | 0.0 | 0.5257 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8466 | 0.0 | 0.0094 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9526 | 0.2022 | 0.8392 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4276 | 0.7093 | 0.0 | 0.0000 | 0.0002 | 0.0 | 0.0044 | 0.0 | 0.0 | 0.4438 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4488 | 0.0 | 0.0093 | 0.0001 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6560 | 0.1833 | 0.7408 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.9751 | 0.25 | 100 | 1.7493 | 0.1186 | 0.1645 | 0.6944 | nan | 0.7604 | 0.9146 | 0.0 | 0.0004 | 0.0012 | 0.0 | 0.0016 | 0.0 | 0.0 | 0.7381 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8273 | 0.0 | 0.0026 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9636 | 0.3289 | 0.8909 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4904 | 0.7490 | 0.0 | 0.0004 | 0.0012 | 0.0 | 0.0016 | 0.0 | 0.0 | 0.5465 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.4913 | 0.0 | 0.0026 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.6542 | 0.2761 | 0.7004 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.7626 | 0.3 | 120 | 1.5608 | 0.1295 | 0.1752 | 0.7118 | nan | 0.8168 | 0.9102 | 0.0 | 0.0002 | 0.0025 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8094 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8362 | 0.0 | 0.0030 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9492 | 0.5677 | 0.8861 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.4958 | 0.7592 | 0.0 | 0.0002 | 0.0025 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.5680 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5095 | 0.0 | 0.0030 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7082 | 0.4878 | 0.7392 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.32 | 0.35 | 140 | 1.5048 | 0.1323 | 0.1797 | 0.7181 | nan | 0.7883 | 0.9260 | 0.0 | 0.0000 | 0.0037 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.8711 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8590 | 0.0 | 0.0022 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9128 | 0.7088 | 0.8576 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5141 | 0.7598 | 0.0 | 0.0000 | 0.0037 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.5287 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5016 | 0.0 | 0.0022 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7458 | 0.5602 | 0.7499 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.6464 | 0.4 | 160 | 1.3886 | 0.1342 | 0.1783 | 0.7217 | nan | 0.7859 | 0.9390 | 0.0 | 0.0 | 0.0059 | 0.0 | 
0.0 | 0.0 | 0.0 | 0.7401 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8508 | 0.0 | 0.0010 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9368 | 0.7223 | 0.9025 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5173 | 0.7561 | 0.0 | 0.0 | 0.0058 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5846 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5059 | 0.0 | 0.0010 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7366 | 0.5802 | 0.7401 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.4757 | 0.45 | 180 | 1.3649 | 0.1367 | 0.1840 | 0.7255 | nan | 0.8587 | 0.9185 | 0.0 | 0.0001 | 0.0039 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8588 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8337 | 0.0 | 0.0014 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9036 | 0.7809 | 0.9138 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5077 | 0.7693 | 0.0 | 0.0001 | 0.0039 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5980 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5264 | 0.0 | 0.0014 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7521 | 0.6078 | 0.7438 | 0.0 | 0.0 | 0.0 | 0.0 | | 2.0018 | 0.5 | 200 | 1.3118 | 0.1353 | 0.1839 | 0.7242 | nan | 0.7797 | 0.9457 | 0.0 | 0.0029 | 0.0057 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8345 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8509 | 0.0 | 0.0018 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8704 | 0.8688 | 0.9069 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5321 | 0.7602 | 0.0 | 0.0029 | 0.0057 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6060 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5276 | 0.0 | 0.0018 | 0.0001 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7133 | 0.5551 | 0.7593 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.4636 | 0.55 | 220 | 1.2729 | 0.1330 | 0.1797 | 0.7249 | nan | 0.8619 | 0.9203 | 0.0 | 0.0015 | 0.0067 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8903 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8514 | 0.0 | 0.0031 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9447 | 0.5448 | 0.9040 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5249 | 0.7844 | 0.0 | 0.0015 | 0.0066 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5735 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5336 | 0.0 | 0.0031 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7136 | 0.4869 | 0.7613 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.1856 | 0.6 | 240 | 1.2551 | 0.1382 | 0.1828 | 0.7274 | nan | 0.7497 | 0.9518 | 0.0 | 0.0005 | 0.0048 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8893 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8153 | 0.0 | 0.0048 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9475 | 0.7597 | 0.9107 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5097 | 0.7477 | 0.0 | 0.0005 | 0.0047 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6172 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5527 | 0.0 | 0.0048 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7293 | 0.6250 | 0.7703 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.4577 | 0.65 | 260 | 1.1862 | 0.1387 | 0.1848 | 0.7304 | nan | 0.8842 | 0.9065 | 0.0 | 0.0001 | 0.0024 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8566 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8632 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9442 | 0.7313 | 0.9080 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5121 | 0.7833 | 0.0 | 0.0001 | 0.0024 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6297 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5381 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7437 | 0.6199 | 0.7486 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.0748 | 0.7 | 280 | 1.2000 | 0.1391 | 0.1846 | 0.7301 | nan | 0.7249 | 0.9690 | 0.0 | 0.0005 | 0.0064 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8909 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8656 | 0.0 | 0.0014 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.8917 | 0.8362 | 0.9065 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5306 | 0.7403 | 0.0 | 0.0005 | 0.0063 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6223 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 
0.5491 | 0.0 | 0.0014 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7566 | 0.6061 | 0.7761 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.642 | 0.75 | 300 | 1.1452 | 0.1432 | 0.1880 | 0.7409 | nan | 0.8682 | 0.9389 | 0.0 | 0.0030 | 0.0062 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8605 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8759 | 0.0 | 0.0020 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9092 | 0.8515 | 0.8892 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5333 | 0.7905 | 0.0 | 0.0030 | 0.0062 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6393 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5418 | 0.0 | 0.0020 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7655 | 0.6551 | 0.7893 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.2166 | 0.8 | 320 | 1.1450 | 0.1388 | 0.1849 | 0.7391 | nan | 0.8516 | 0.9460 | 0.0 | 0.0043 | 0.0060 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.8944 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8803 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9283 | 0.6849 | 0.9071 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5584 | 0.7932 | 0.0 | 0.0043 | 0.0060 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.5844 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5259 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7548 | 0.5985 | 0.7549 | 0.0 | 0.0 | 0.0 | 0.0 | | 2.1346 | 0.85 | 340 | 1.1215 | 0.1428 | 0.1887 | 0.7411 | nan | 0.7956 | 0.9551 | 0.0 | 0.0145 | 0.0098 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.8646 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8884 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9131 | 0.8828 | 0.9024 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5611 | 0.7721 | 0.0 | 0.0145 | 0.0097 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.6313 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5405 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7563 | 0.6337 | 0.7917 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.8351 | 0.9 | 360 | 1.1012 | 0.1433 | 0.1896 | 0.7449 | nan | 0.8723 | 0.9432 | 0.0 | 0.0025 | 0.0114 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8822 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8662 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9213 | 0.8361 | 0.9201 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5472 | 0.7989 | 0.0 | 0.0025 | 0.0113 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6277 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5416 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7666 | 0.6674 | 0.7664 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.152 | 0.95 | 380 | 1.1045 | 0.1452 | 0.1891 | 0.7453 | nan | 0.8827 | 0.9332 | 0.0 | 0.0457 | 0.0124 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8396 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8848 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9399 | 0.7910 | 0.9107 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5462 | 0.7966 | 0.0 | 0.0457 | 0.0123 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6494 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5395 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7636 | 0.6627 | 0.7763 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.2062 | 1.0 | 400 | 1.0607 | 0.1469 | 0.1897 | 0.7482 | nan | 0.8192 | 0.9644 | 0.0 | 0.0944 | 0.0198 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8406 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8821 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9193 | 0.8054 | 0.9137 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5772 | 0.7742 | 0.0 | 0.0941 | 0.0195 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6414 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5360 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7740 | 0.6591 | 0.7710 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.0116 | 1.05 | 420 | 1.0503 | 0.1493 | 0.1950 | 0.7554 | nan | 0.8686 | 0.9478 | 0.0 | 0.2033 | 0.0295 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9166 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8409 
| 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9414 | 0.7667 | 0.9196 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5809 | 0.8022 | 0.0 | 0.1995 | 0.0287 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5916 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5517 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7628 | 0.6441 | 0.7652 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.009 | 1.1 | 440 | 1.0723 | 0.1529 | 0.1958 | 0.7553 | nan | 0.7797 | 0.9670 | 0.0 | 0.2214 | 0.0547 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8978 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8927 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9274 | 0.8016 | 0.9176 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5898 | 0.7717 | 0.0 | 0.2157 | 0.0526 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6389 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5499 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7760 | 0.6697 | 0.7818 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.1496 | 1.15 | 460 | 1.0417 | 0.1571 | 0.2017 | 0.7607 | nan | 0.7736 | 0.9645 | 0.0 | 0.3606 | 0.0669 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8775 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8801 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9098 | 0.8906 | 0.9326 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6102 | 0.7737 | 0.0 | 0.3374 | 0.0634 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6549 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5538 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7682 | 0.6437 | 0.7772 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.4669 | 1.2 | 480 | 1.0161 | 0.1566 | 0.2024 | 0.7637 | nan | 0.8236 | 0.9531 | 0.0 | 0.3507 | 0.0584 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.9165 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8675 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9263 | 0.8597 | 0.9222 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6005 | 0.7983 | 0.0 | 0.3296 | 0.0556 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6153 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5498 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7752 | 0.6654 | 0.7770 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.075 | 1.25 | 500 | 1.0124 | 0.1556 | 0.2000 | 0.7634 | nan | 0.8521 | 0.9499 | 0.0 | 0.3154 | 0.0410 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8944 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8618 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9442 | 0.8133 | 0.9290 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5910 | 0.8068 | 0.0 | 0.2992 | 0.0394 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6338 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5507 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7689 | 0.6697 | 0.7737 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.888 | 1.3 | 520 | 0.9797 | 0.1597 | 0.2028 | 0.7677 | nan | 0.8590 | 0.9472 | 0.0 | 0.3534 | 0.0469 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.8900 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8807 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9379 | 0.8578 | 0.9187 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5908 | 0.8056 | 0.0 | 0.3311 | 0.0448 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.6598 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5676 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7712 | 0.6912 | 0.8088 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.8099 | 1.35 | 540 | 0.9760 | 0.1589 | 0.2026 | 0.7678 | nan | 0.8526 | 0.9534 | 0.0 | 0.3370 | 0.0313 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.9235 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8862 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9252 | 0.8551 | 0.9206 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5954 | 0.8014 | 0.0 | 0.3188 | 0.0303 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.6382 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5706 | 0.0 | 0.0005 | 0.0 | 0.0 | nan | 0.0 | 0.0 
| 0.0 | 0.0 | 0.7830 | 0.6934 | 0.8122 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.1998 | 1.4 | 560 | 0.9815 | 0.1578 | 0.2030 | 0.7631 | nan | 0.8956 | 0.9250 | 0.0 | 0.3267 | 0.0461 | 0.0 | 0.0004 | 0.0 | 0.0 | 0.8929 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8956 | 0.0 | 0.0002 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9206 | 0.8669 | 0.9275 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5656 | 0.8136 | 0.0 | 0.3102 | 0.0440 | 0.0 | 0.0004 | 0.0 | 0.0 | 0.6574 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5524 | 0.0 | 0.0002 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7894 | 0.6940 | 0.7818 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.5591 | 1.45 | 580 | 0.9654 | 0.1618 | 0.2043 | 0.7698 | nan | 0.8198 | 0.9655 | 0.0 | 0.3715 | 0.0848 | 0.0 | 0.0003 | 0.0 | 0.0 | 0.8935 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8965 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9146 | 0.8730 | 0.9198 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6182 | 0.7898 | 0.0 | 0.3467 | 0.0792 | 0.0 | 0.0003 | 0.0 | 0.0 | 0.6590 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5647 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7871 | 0.6835 | 0.8101 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.861 | 1.5 | 600 | 0.9622 | 0.1607 | 0.2045 | 0.7689 | nan | 0.8163 | 0.9648 | 0.0 | 0.3780 | 0.0907 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.9187 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8714 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9229 | 0.8485 | 0.9361 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6180 | 0.7903 | 0.0 | 0.3541 | 0.0844 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.6307 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5609 | 0.0 | 0.0006 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7854 | 0.6904 | 0.7884 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.8335 | 1.55 | 620 | 0.9569 | 0.1598 | 0.2050 | 0.7686 | nan | 0.8421 | 0.9561 | 0.0 | 0.3493 | 0.0928 | 0.0 | 0.0012 | 0.0 | 0.0 | 0.9261 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8753 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9172 | 0.8688 | 0.9335 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6069 | 0.8031 | 0.0 | 0.3306 | 0.0860 | 0.0 | 0.0012 | 0.0 | 0.0 | 0.6123 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5618 | 0.0 | 0.0013 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7851 | 0.6911 | 0.7950 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.9988 | 1.6 | 640 | 0.9337 | 0.1611 | 0.2050 | 0.7711 | nan | 0.8595 | 0.9538 | 0.0 | 0.3512 | 0.0928 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.8962 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8854 | 0.0 | 0.0004 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9281 | 0.8594 | 0.9367 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6062 | 0.8105 | 0.0 | 0.3310 | 0.0868 | 0.0 | 0.0006 | 0.0 | 0.0 | 0.6565 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5596 | 0.0 | 0.0004 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7819 | 0.6958 | 0.7880 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.966 | 1.65 | 660 | 0.9322 | 0.1612 | 0.2051 | 0.7707 | nan | 0.8706 | 0.9494 | 0.0 | 0.3470 | 0.0997 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.8905 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8722 | 0.0 | 0.0016 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9347 | 0.8652 | 0.9364 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.5953 | 0.8136 | 0.0 | 0.3281 | 0.0922 | 0.0 | 0.0005 | 0.0 | 0.0 | 0.6654 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5696 | 0.0 | 0.0016 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7756 | 0.6890 | 0.7885 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.2154 | 1.7 | 680 | 0.9373 | 0.1611 | 0.2048 | 0.7710 | nan | 0.8448 | 0.9577 | 0.0 | 0.3717 | 0.1010 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.9173 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8613 | 0.0 | 0.0026 | 0.0 | 0.0 
| nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9411 | 0.8371 | 0.9246 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6096 | 0.8056 | 0.0 | 0.3487 | 0.0930 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.6272 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5696 | 0.0 | 0.0026 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7762 | 0.6911 | 0.7931 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.7979 | 1.75 | 700 | 0.9429 | 0.1622 | 0.2067 | 0.7717 | nan | 0.8496 | 0.9548 | 0.0 | 0.3821 | 0.1182 | 0.0 | 0.0013 | 0.0 | 0.0 | 0.9071 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8803 | 0.0 | 0.0043 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9202 | 0.8812 | 0.9204 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6104 | 0.8088 | 0.0 | 0.3583 | 0.1074 | 0.0 | 0.0013 | 0.0 | 0.0 | 0.6410 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5675 | 0.0 | 0.0043 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7784 | 0.6767 | 0.7994 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.8366 | 1.8 | 720 | 0.9379 | 0.1645 | 0.2075 | 0.7745 | nan | 0.8359 | 0.9580 | 0.0 | 0.4130 | 0.1275 | 0.0 | 0.0021 | 0.0 | 0.0 | 0.8998 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8704 | 0.0 | 0.0088 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.9450 | 0.8617 | 0.9251 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6227 | 0.8035 | 0.0 | 0.3850 | 0.1147 | 0.0 | 0.0021 | 0.0 | 0.0 | 0.6544 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5777 | 0.0 | 0.0088 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.7682 | 0.6867 | 0.8055 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.0448 | 1.85 | 740 | 0.9419 | 0.1659 | 0.2087 | 0.7769 | nan | 0.8483 | 0.9532 | 0.0 | 0.4442 | 0.1387 | 0.0 | 0.0028 | 0.0 | 0.0 | 0.8986 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8865 | 0.0 | 0.0042 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9458 | 0.8442 | 0.9215 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6240 | 0.8122 | 0.0 | 0.4077 | 0.1237 | 0.0 | 0.0028 | 0.0 | 0.0 | 0.6529 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5700 | 0.0 | 0.0041 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7767 | 0.6938 | 0.8070 | 0.0 | 0.0 | 0.0 | 0.0 | | 0.9737 | 1.9 | 760 | 0.9193 | 0.1664 | 0.2082 | 0.7772 | nan | 0.8420 | 0.9586 | 0.0 | 0.4353 | 0.1193 | 0.0 | 0.0010 | 0.0 | 0.0 | 0.9082 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8955 | 0.0 | 0.0079 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9385 | 0.8464 | 0.9190 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6232 | 0.8053 | 0.0 | 0.4022 | 0.1088 | 0.0 | 0.0010 | 0.0 | 0.0 | 0.6549 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5766 | 0.0 | 0.0079 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7843 | 0.7077 | 0.8180 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.0716 | 1.95 | 780 | 0.9170 | 0.1672 | 0.2098 | 0.7785 | nan | 0.8434 | 0.9539 | 0.0 | 0.4671 | 0.1283 | 0.0 | 0.0037 | 0.0 | 0.0 | 0.9012 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.8984 | 0.0 | 0.0058 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9398 | 0.8661 | 0.9157 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6242 | 0.8106 | 0.0 | 0.4232 | 0.1156 | 0.0 | 0.0037 | 0.0 | 0.0 | 0.6631 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5777 | 0.0 | 0.0057 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7811 | 0.6920 | 0.8223 | 0.0 | 0.0 | 0.0 | 0.0 | | 1.4144 | 2.0 | 800 | 0.9249 | 0.1675 | 0.2109 | 0.7776 | nan | 0.8631 | 0.9423 | 0.0 | 0.4704 | 0.1421 | 0.0 | 0.0061 | 0.0 | 0.0 | 0.8937 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.9143 | 0.0 | 0.0055 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.9291 | 0.8710 | 0.9207 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.6127 | 0.8192 | 0.0 | 0.4256 | 0.1262 | 0.0 | 0.0061 | 0.0 | 0.0 | 0.6655 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5666 | 0.0 | 0.0054 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 
0.7875 | 0.6912 | 0.8218 | 0.0 | 0.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.6.1 - Tokenizers 0.12.1
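A hedged inference sketch for the image-segmentation checkpoint described above; recent transformers versions expose `SegformerImageProcessor` (older ones use `SegformerFeatureExtractor` instead), and the input photo path is a placeholder:

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

repo = "irfan-noordin/segformer-b0-finetuned-segments-sidewalk-oct-22"
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("street.jpg")  # any RGB street-scene photo
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits   # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]        # per-pixel class ids from the sidewalk label set
print(pred.shape, pred.unique())
```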
a5c06b28f2ae5e684024c5083e3c215f
aajrami/bert-mlm-small
aajrami
roberta
9
6
transformers
0
feature-extraction
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
['bert']
false
true
true
809
false
## bert-mlm-small

A small-size BERT Language Model with an **MLM** pre-training objective.

For more details about the pre-training objective and the pre-training hyperparameters, please refer to [How does the pre-training objective affect what large language models learn about linguistic properties?](https://aclanthology.org/2022.acl-short.16/)

## License

CC BY 4.0

## Citation

If you use this model, please cite the following paper:

```
@inproceedings{alajrami2022does,
  title={How does the pre-training objective affect what large language models learn about linguistic properties?},
  author={Alajrami, Ahmed and Aletras, Nikolaos},
  booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
  pages={131--147},
  year={2022}
}
```
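The card gives no usage snippet; a minimal feature-extraction sketch with the standard transformers auto classes, assuming the checkpoint loads through them as the record's `transformers` / `feature-extraction` metadata suggests (the input sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("aajrami/bert-mlm-small")
model = AutoModel.from_pretrained("aajrami/bert-mlm-small")

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
print(hidden.shape)
```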
a02af933518036cfa3cb079203dc6bfd
gokuls/mobilebert_sa_GLUE_Experiment_qnli
gokuls
mobilebert
23
6
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,680
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# mobilebert_sa_GLUE_Experiment_qnli

This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE QNLI dataset. It achieves the following results on the evaluation set:
- Loss: 0.6487
- Accuracy: 0.6094

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6754        | 1.0   | 819  | 0.6491          | 0.6178   |
| 0.6369        | 2.0   | 1638 | 0.6487          | 0.6094   |
| 0.6125        | 3.0   | 2457 | 0.6555          | 0.6088   |
| 0.5942        | 4.0   | 3276 | 0.6647          | 0.6028   |
| 0.5805        | 5.0   | 4095 | 0.6735          | 0.5934   |
| 0.5689        | 6.0   | 4914 | 0.6893          | 0.5978   |
| 0.5587        | 7.0   | 5733 | 0.7055          | 0.5896   |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
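QNLI is a question/sentence entailment task, so the model expects a sentence pair; a hedged usage sketch (the example pair is illustrative, and the label names come from the model config):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "gokuls/mobilebert_sa_GLUE_Experiment_qnli"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer(
    "Where is the Eiffel Tower?",
    "The Eiffel Tower is located in Paris.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
pred = int(logits.argmax(dim=-1))
print(model.config.id2label[pred])
```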
3abb13f57a674d451274f36da239fa58
alirezafarashah/wav2vec2-base-ks-2sec
alirezafarashah
wav2vec2
10
3
transformers
0
audio-classification
true
false
false
apache-2.0
null
['superb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,555
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-ks-2sec

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set:
- Loss: 0.0880
- Accuracy: 0.9822

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.5003        | 1.0   | 399  | 0.9643   | 0.4284          |
| 0.1868        | 2.0   | 798  | 0.9748   | 0.1628          |
| 0.1413        | 3.0   | 1197 | 0.9796   | 0.1128          |
| 0.1021        | 4.0   | 1596 | 0.9813   | 0.0940          |
| 0.1089        | 5.0   | 1995 | 0.9822   | 0.0880          |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
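A minimal keyword-spotting sketch with the audio-classification pipeline; the clip path is a placeholder for any short 16 kHz recording of the kind the superb ks task uses:

```python
from transformers import pipeline

clf = pipeline("audio-classification", model="alirezafarashah/wav2vec2-base-ks-2sec")
print(clf("keyword_clip.wav", top_k=3))  # top keyword labels with scores
```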
7f8f3ac3c4ef317d84436282eb816e0e
kuttersn/gpt2-finetuned-redditComments
kuttersn
gpt2
9
4
transformers
0
text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,252
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# gpt2-finetuned-redditComments

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 3.8418

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.9535        | 1.0   | 4320  | 3.8888          |
| 3.8832        | 2.0   | 8640  | 3.8523          |
| 3.8708        | 3.0   | 12960 | 3.8418          |

### Framework versions

- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
f1beaeec6b92a0a529b4faae79a82e56
PiyarSquare/stable_diffusion_silz
PiyarSquare
null
8
0
null
17
null
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,250
false
# 📜🗡️ Silhouette/Cricut style

This is a fine-tuned Stable Diffusion model designed for cutting machines.

Use **silz style** in your prompts.

### Sample images:

![silz.jpg](https://huggingface.co/PiyarSquare/stable_diffusion_silz/resolve/main/silz_characters.png)
![silz.jpg](https://huggingface.co/PiyarSquare/stable_diffusion_silz/resolve/main/silz_famous_people.png)
![silz.jpg](https://huggingface.co/PiyarSquare/stable_diffusion_silz/resolve/main/silz_animals.png)
![silz.jpg](https://huggingface.co/PiyarSquare/stable_diffusion_silz/resolve/main/silz_places.png)
![silz.jpg](https://huggingface.co/PiyarSquare/stable_diffusion_silz/resolve/main/silz_prompted.png)

Based on StableDiffusion 1.5 model

### Training

Made with [automatic1111 webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) + [d8ahazard dreambooth extension](https://github.com/d8ahazard/sd_dreambooth_extension) + [nitrosocke guide](https://github.com/nitrosocke/dreambooth-training-guide).

82 training images at 1e-6 learning rate for 8200 steps. Without prior preservation.

Inspired by [Fictiverse's PaperCut model](https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model) and [txt2vector script](https://github.com/GeorgLegato/Txt2Vectorgraphics).
9a0e51931ab33e6af275a3274cf9ef6a
KoichiYasuoka/roberta-large-english-upos
KoichiYasuoka
roberta
10
1,915
transformers
1
token-classification
true
false
false
cc-by-sa-4.0
['en']
['universal_dependencies']
null
0
0
0
0
0
0
0
['english', 'token-classification', 'pos', 'dependency-parsing']
false
true
true
865
false
# roberta-large-english-upos

## Model Description

This is a RoBERTa model pre-trained with [UD_English](https://universaldependencies.org/en/) for POS-tagging and dependency-parsing, derived from [roberta-large](https://huggingface.co/roberta-large). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).

## How to Use

```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-english-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-english-upos")
```

or

```py
import esupar
nlp = esupar.load("KoichiYasuoka/roberta-large-english-upos")
```

## See Also

[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer, POS-tagger and dependency-parser with BERT/RoBERTa/DeBERTa models
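Building on the transformers snippet in the card, a short tagging sketch that maps each sub-token to its predicted UPOS label (special tokens included; the esupar path above gives a friendlier word-level interface, and the example sentence is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-english-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-english-upos")

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, i in zip(tokens, pred_ids):
    print(tok, model.config.id2label[i])  # predicted tag per sub-token
```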
74a7b4eb153a7e3825cd71b91d60dddd
emfa/danish-bert-botxo-danish-finetuned-hatespeech
emfa
bert
17
3
transformers
0
text-classification
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,562
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# danish-bert-botxo-danish-finetuned-hatespeech

This model is for a university project and is uploaded for sharing between students. It is trained on a Danish hate-speech-labeled training set. Feel free to use it, but as of now, we don't promise any good results ;-)

This model is a fine-tuned version of [Maltehb/danish-bert-botxo](https://huggingface.co/Maltehb/danish-bert-botxo) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.3584

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 315  | 0.3285          |
| 0.2879        | 2.0   | 630  | 0.3288          |
| 0.2879        | 3.0   | 945  | 0.3178          |
| 0.1371        | 4.0   | 1260 | 0.3584          |

### Framework versions

- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
02ba40c2482b516cd100785d164ef345
ytsai25/bert-finetuned-ner
ytsai25
bert
8
6
transformers
0
token-classification
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,423
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# ytsai25/bert-finetuned-ner

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0240
- Validation Loss: 0.0613
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1017, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1218     | 0.0592          | 0     |
| 0.0398     | 0.0602          | 1     |
| 0.0240     | 0.0613          | 2     |

### Framework versions

- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
7c42eac51670c00284bc94a39c06f3df
azizbarank/distilbert-base-turkish-cased-sentiment
azizbarank
distilbert
13
40
transformers
0
text-classification
true
false
false
mit
null
['sepidmnorozy/Turkish_sentiment']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,163
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-turkish-cased-sentiment

This model is a fine-tuned version of [dbmdz/distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) on the [sepidmnorozy/Turkish_sentiment](https://huggingface.co/datasets/sepidmnorozy/Turkish_sentiment) dataset. It achieves the following results on the evaluation set:
- Loss: 0.4141
- Accuracy: 0.855
- F1: 0.8797

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- Transformers 4.22.2
- Pytorch 1.9.1
- Datasets 2.5.1
- Tokenizers 0.12.1
8274b9782c49955efaea4329ff7e3986
joaoalvarenga/model-sid-voxforge-cv-cetuc-0
joaoalvarenga
wav2vec2
10
7
transformers
0
automatic-speech-recognition
true
false
true
apache-2.0
['pt']
['common_voice']
null
0
0
0
0
0
0
0
['audio', 'speech', 'wav2vec2', 'pt', 'apache-2.0', 'portuguese-speech-corpus', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week', 'PyTorch']
true
true
true
3,410
false
# Wav2Vec2-Large-XLSR-53-Portuguese

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "pt", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Portuguese test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "pt", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-portuguese-a")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays and normalize the reference sentences
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Batched inference on the preprocessed audio arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result (wer)**: 15.037146%

## Training

The Common Voice `train`, `validation` datasets were used for training.

The script used for training can be found at: https://github.com/joaoalvarenga/wav2vec2-large-xlsr-53-portuguese/blob/main/fine-tuning.py
af40d28e8224ff34004643c2ee3980f1
Raccourci/xlm-sustainability-sentiment
Raccourci
xlm-roberta
19
12
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,949
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-sustainability-sentiment This model is a fine-tuned version of [Raccourci/fairguest-bert](https://huggingface.co/Raccourci/fairguest-bert) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2997 - F1: 0.9335 - Roc Auc: 0.9335 - Accuracy: 0.9335 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | No log | 0.98 | 15 | 0.5221 | 0.7173 | 0.7173 | 0.7173 | | No log | 1.98 | 30 | 0.3833 | 0.7365 | 0.7308 | 0.7089 | | No log | 2.98 | 45 | 0.3204 | 0.9030 | 0.9012 | 0.8836 | | No log | 3.98 | 60 | 0.2861 | 0.8960 | 0.8960 | 0.8960 | | No log | 4.98 | 75 | 0.2223 | 0.9125 | 0.9127 | 0.9106 | | No log | 5.98 | 90 | 0.2499 | 0.9210 | 0.9210 | 0.9210 | | No log | 6.98 | 105 | 0.2168 | 0.9293 | 0.9293 | 0.9293 | | No log | 7.98 | 120 | 0.2122 | 0.9376 | 0.9376 | 0.9376 | | No log | 8.98 | 135 | 0.2303 | 0.9335 | 0.9335 | 0.9335 | | No log | 9.98 | 150 | 0.2455 | 0.9314 | 0.9314 | 0.9314 | | No log | 10.98 | 165 | 0.2278 | 0.9335 | 0.9335 | 0.9335 | | No log | 11.98 | 180 | 0.2593 | 0.9304 | 0.9304 | 0.9293 | | No log | 12.98 | 195 | 0.2494 | 0.9397 | 0.9397 | 0.9397 | | No log | 13.98 | 210 | 0.2579 | 0.9314 | 0.9314 | 0.9314 | | No log | 14.98 | 225 | 0.2633 | 0.9356 | 0.9356 | 0.9356 | | No log | 15.98 | 240 | 0.2918 | 0.9283 | 0.9283 | 0.9272 | | No log | 16.98 | 255 | 0.2714 | 0.9356 | 0.9356 | 0.9356 | | No log | 17.98 | 270 | 0.3034 | 0.9356 | 0.9356 | 0.9356 | | No log | 18.98 | 285 | 0.3050 | 0.9325 | 0.9324 | 0.9314 | | No log | 19.98 | 300 | 0.2997 | 0.9335 | 0.9335 | 0.9335 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
1ddfb22cf5559f38cf30bc23e80523b1
izumi-lab/electra-small-japanese-fin-discriminator
izumi-lab
electra
7
989
transformers
0
null
true
false
false
cc-by-sa-4.0
['ja']
null
null
0
0
0
0
0
0
0
['finance']
false
true
true
2,193
false
# ELECTRA small Japanese finance discriminator

This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on texts in the Japanese language.

The codes for the pretraining are available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0).

## Model architecture

The model architecture is the same as ELECTRA small in the [original ELECTRA implementation](https://github.com/google-research/electra); 12 layers, 256 dimensions of hidden states, and 4 attention heads.

## Training Data

The models are trained on the Japanese version of Wikipedia and a Japanese financial corpus. The Wikipedia corpus is generated from the Wikipedia dump file as of June 1, 2021.

The Wikipedia corpus file is 2.9GB, consisting of approximately 20M sentences.

The financial corpus consists of 2 corpora:

- Summaries of financial results from October 9, 2012, to December 31, 2020
- Securities reports from February 8, 2018, to December 31, 2020

The financial corpus file is 5.2GB, consisting of approximately 27M sentences.

## Tokenization

The texts are first tokenized by MeCab with the IPA dictionary and then split into subwords by the WordPiece algorithm. The vocabulary size is 32768.

## Training

The models are trained with the same configuration as ELECTRA small in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555) except for size; 128 tokens per instance, 128 instances per batch, and 1M training steps. The size of the generator is the same as that of the discriminator.

## Citation

```
@article{Suzuki-etal-2023-ipm,
  title = {Constructing and analyzing domain-specific language model for financial text mining},
  author = {Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
  journal = {Information Processing & Management},
  volume = {60},
  number = {2},
  pages = {103194},
  year = {2023},
  doi = {10.1016/j.ipm.2022.103194}
}
```

## Licenses

The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).

## Acknowledgments

This work was supported by JSPS KAKENHI Grant Number JP21K12010.
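The card documents the pretraining setup but gives no usage snippet. As an illustration only (not from the original card), here is a minimal sketch of running the discriminator over a sentence; the example sentence is made up, and the MeCab-based tokenizer additionally requires `fugashi` and `ipadic` to be installed.

```python
# Hedged sketch: score each token for "replaced vs. original" with the ELECTRA discriminator.
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

model_name = "izumi-lab/electra-small-japanese-fin-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)  # needs fugashi + ipadic installed
model = ElectraForPreTraining.from_pretrained(model_name)

inputs = tokenizer("当期の営業利益は前年同期比で増加した。", return_tensors="pt")  # illustrative sentence
with torch.no_grad():
    logits = model(**inputs).logits  # one score per token; higher means "looks replaced"

for token, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), logits[0]):
    print(token, float(score))
```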
f49e68e43be4fa5672c9f4984a909f63
Davincilee/closure_system_door_inne-bert-base-uncased
Davincilee
bert
16
4
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,135
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # closure_system_door_inne-bert-base-uncased This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7907 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 7 - eval_batch_size: 7 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7321 | 1.0 | 2 | 2.5801 | | 2.6039 | 2.0 | 4 | 2.0081 | | 2.4556 | 3.0 | 6 | 2.3329 | | 2.3587 | 4.0 | 8 | 2.4156 | | 2.2565 | 5.0 | 10 | 2.0009 | | 2.3489 | 6.0 | 12 | 1.7774 | | 2.2622 | 7.0 | 14 | 2.2064 | | 2.415 | 8.0 | 16 | 1.9671 | | 2.1873 | 9.0 | 18 | 2.0729 | | 2.2377 | 10.0 | 20 | 2.0052 | | 2.352 | 11.0 | 22 | 1.9614 | | 2.2347 | 12.0 | 24 | 2.2437 | | 2.1113 | 13.0 | 26 | 1.7145 | | 2.1939 | 14.0 | 28 | 1.5418 | | 2.0645 | 15.0 | 30 | 2.1882 | | 2.1499 | 16.0 | 32 | 2.0266 | | 2.1432 | 17.0 | 34 | 2.3583 | | 2.0656 | 18.0 | 36 | 2.3147 | | 2.0348 | 19.0 | 38 | 2.2807 | | 2.0502 | 20.0 | 40 | 1.7122 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
65323f1e82ea6fe32ccf168d0cb0be37
Shobhank-iiitdwd/Distiled-roberta-squad2-QA
Shobhank-iiitdwd
roberta
10
5
transformers
0
question-answering
true
false
false
cc-by-4.0
['en']
['squad_v2']
null
0
0
0
0
0
0
0
[]
true
true
true
2,491
false
# Distiled-roberta-squad2 This is the *distilled* version of the [roberta-base-squad2-QA](https://huggingface.co/Shobhank-iiitdwd/Distiled-roberta-squad2-QA) model. This model has a comparable prediction quality and runs at twice the speed of the base model. ## Overview **Language model:** Distiled-roberta-squad2-QA **Language:** English **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 ## Hyperparameters ``` batch_size = 96 n_epochs = 4 base_LM_model = "Shobhank-iiitdwd/Distiled-roberta-squad2-QA" max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup warmup_proportion = 0.2 doc_stride = 128 max_query_length = 64 distillation_loss_weight = 0.75 temperature = 1.5 teacher = "Shobhank-iiitdwd/Distiled-roberta-squad2-QA" ``` ## Distillation This model was distilled using the TinyBERT approach.Firstly, we have performed intermediate layer distillation with roberta-base as the teacher which resulted in Distiles-roberta. Secondly, we have performed task-specific distillation with [roberta-base-squad2](https://huggingface.co/Shobhank-iiitdwd/roberta-squad2-QA) as the teacher for further intermediate layer distillation on an augmented version of SQuADv2 and then with [roberta-large-squad2](https://huggingface.co/Shobhank-iiitdwd/Distiled-roberta-squad2-QA) as the teacher for prediction layer distillation. ## Usage ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "Shobhank-iiitdwd/Distiled-roberta-squad2-QA" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Performance Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/). ``` "exact": 78.69114798281817, "f1": 81.9198998536977, "total": 11873, "HasAns_exact": 76.19770580296895, "HasAns_f1": 82.66446878592329, "HasAns_total": 5928, "NoAns_exact": 81.17746005046257, "NoAns_f1": 81.17746005046257, "NoAns_total": 5945 ```
dea682c9d8496c8654344ceeffaff397
dranzerstar/SD-textual-inversion-embeddings-repo
dranzerstar
null
303
0
diffusers
47
null
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
1
1
0
['LoRa', 'embeddings']
false
true
true
1,974
false
### SD-textual-inversion-embeddings/Lora repo

### Lora Networks

Still exploring this training process.

prompt: masterpiece, best_quality, clear details,1girl, cowboy_shot, simple_background with respective LoRa net

![](lora_samples.png)
![](lora_samples2.png)
![](lora_samples3.png)
![](lora_samples4.png)

### Lora characters and outfits using char-* and outfit-* together

masterpiece, best_quality, clear details,1girl, reverse_outfit (pasties) (maebari) high_heels \<lora:outfit-reverseoutfit:1\>, (fullbody), looking_at_viewer, floor , \<lora:char-seia:0.9\>,

![](seia_outfit_sample.png)

----

### Textual inversion embeddings
### stable diffusion embeddings of characters and outfits

Check each image's PNG info in the preview folder for exact gen params

# Sample of shinymas/character embeddings generated with the same prompt with interchanging character phrase (char-X)

prompt: masterpiece, best_quality, clear details, char-kogane ,shirt,1girl,upper body
Negative prompt: fake_animal_ears, bad_prompt:0.8, (Cropped head), (Extra hands), (extra legs), (cropped), (missing legs), (duplicate), (morbid), cropped, (error), (bad anatomy), text, jpeg artifacts, (ugly), (morbid), (blurry), (low quality), (long leg), (poorly drawn), (bad proportions),

![](shinymas.png)

# Sample of char-toru wearing various outfit embeddings generated with the same prompt with interchanging outfit phrase (outfit-X)

prompt: masterpiece, best_quality, clear details, illustration of char-toru standing wearing outfit-null:1.1, ((full body)) , (smile),(solo), boots, floor,
Negative prompt: fake_animal_ears, bad_prompt:0.8, (Cropped head), (Extra hands), (extra legs), (cropped), (missing legs), (duplicate), (morbid), cropped, (error), (bad anatomy), text, jpeg artifacts, (ugly), (morbid), (blurry), (low quality), (long leg), (poorly drawn), (bad proportions),

![](outfit.png)
![](RO-hina.png)
![](SFsample.png)
![](out1.png)

---
license: osl-3.0
---
1688bfd73b0a5f777c4e455de0bdb39a
jonatasgrosman/exp_w2v2t_et_unispeech-sat_s108
jonatasgrosman
unispeech-sat
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['et']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'et']
false
true
true
463
false
# exp_w2v2t_et_unispeech-sat_s108 Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
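As a usage illustration (not part of the original card), here is a minimal sketch with the HuggingSound tool mentioned above; the package must be installed separately (`pip install huggingsound`) and the audio paths are placeholders.

```python
# Hedged sketch: transcribing Estonian audio files with HuggingSound.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_et_unispeech-sat_s108")
audio_paths = ["/path/to/sample1.mp3", "/path/to/sample2.wav"]  # placeholder paths, 16kHz input expected

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```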
4cd8c12037349cbb916ee69838e71d1a
teddy322/wav2vec2-large-xls-r-300m-kor-11385-3
teddy322
wav2vec2
15
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['zeroth_korean_asr']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,332
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-kor-11385-3 This model is a fine-tuned version of [teddy322/wav2vec2-large-xls-r-300m-kor-11385-2](https://huggingface.co/teddy322/wav2vec2-large-xls-r-300m-kor-11385-2) on the zeroth_korean_asr dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2425 - eval_wer: 0.1495 - eval_runtime: 137.8001 - eval_samples_per_second: 3.316 - eval_steps_per_second: 0.421 - epoch: 10.59 - step: 3600 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
10cc0e8c51d6e82de4d3b60fe7492322
valentinaw1sa4ajh/fusion-final
valentinaw1sa4ajh
null
18
2
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
1
0
0
0
0
0
['text-to-image', 'stable-diffusion']
false
true
true
431
false
### fusion-final Dreambooth model trained by valentinaw1sa4ajh with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
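The card above only links the training and testing notebooks. As an illustration only (not from the original card), here is a minimal sketch for sampling with Diffusers; it assumes the repository ships diffusers-format weights and that a GPU is available, and the prompt is a placeholder since the trained concept token is not stated in the card.

```python
# Hedged sketch: text-to-image sampling from the Dreambooth checkpoint with Diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "valentinaw1sa4ajh/fusion-final", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of the trained concept in a park", num_inference_steps=30).images[0]  # placeholder prompt
image.save("fusion-final-sample.png")
```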
93c87c22aa92d8affcc3e8a6e0488d85
rhitabrat/bert-finetuned-squad
rhitabrat
bert
8
3
transformers
0
question-answering
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,433
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # rhitabrat/bert-finetuned-squad This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7887 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7790, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 1.2182 | 0 | | 0.7887 | 1 | ### Framework versions - Transformers 4.21.2 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
43710b6769d9ee4e83aed522b21a76db
MhF/xlm-roberta-base-finetuned-panx-de
MhF
xlm-roberta
15
21
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,320
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1354 - F1: 0.8621 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.254 | 1.0 | 525 | 0.1652 | 0.8254 | | 0.1293 | 2.0 | 1050 | 0.1431 | 0.8489 | | 0.0797 | 3.0 | 1575 | 0.1354 | 0.8621 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
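As a usage illustration (not part of the auto-generated card), here is a minimal sketch of running the checkpoint as a German NER pipeline; the example sentence is made up.

```python
# Hedged sketch: named-entity recognition with the PAN-X (German) fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="MhF/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```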
96b56a574e1d0dff3beca77cfe200761
cl-tohoku/bert-base-japanese
cl-tohoku
bert
8
140,784
transformers
8
fill-mask
true
true
true
cc-by-sa-4.0
['ja']
['wikipedia']
null
0
0
0
0
0
0
0
[]
false
true
true
1,640
false
# BERT base Japanese (IPA dictionary) This is a [BERT](https://github.com/google-research/bert) model pretrained on texts in the Japanese language. This version of the model processes input texts with word-level tokenization based on the IPA dictionary, followed by the WordPiece subword tokenization. The codes for the pretraining are available at [cl-tohoku/bert-japanese](https://github.com/cl-tohoku/bert-japanese/tree/v1.0). ## Model architecture The model architecture is the same as the original BERT base model; 12 layers, 768 dimensions of hidden states, and 12 attention heads. ## Training Data The model is trained on Japanese Wikipedia as of September 1, 2019. To generate the training corpus, [WikiExtractor](https://github.com/attardi/wikiextractor) is used to extract plain texts from a dump file of Wikipedia articles. The text files used for the training are 2.6GB in size, consisting of approximately 17M sentences. ## Tokenization The texts are first tokenized by [MeCab](https://taku910.github.io/mecab/) morphological parser with the IPA dictionary and then split into subwords by the WordPiece algorithm. The vocabulary size is 32000. ## Training The model is trained with the same configuration as the original BERT; 512 tokens per instance, 256 instances per batch, and 1M training steps. ## Licenses The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 3.0](https://creativecommons.org/licenses/by-sa/3.0/). ## Acknowledgments For training models, we used Cloud TPUs provided by [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc/) program.
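The card describes the tokenization pipeline but includes no usage snippet. As an illustration only, here is a minimal masked-language-modelling sketch; the MeCab-based tokenizer additionally requires `fugashi` and `ipadic`, and the sentence is made up.

```python
# Hedged sketch: masked-token prediction with the Japanese BERT model described above.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="cl-tohoku/bert-base-japanese")  # needs fugashi + ipadic
print(fill_mask("東京大学で自然言語処理を[MASK]する。"))  # illustrative sentence
```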
5c1fe3f53179cbd2cc42d50407441b58
LowGI/my_new_asr_model
LowGI
wav2vec2
10
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,514
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_new_asr_model This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.9912 - Wer: 0.9915 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | No log | 200.0 | 200 | 3.2498 | 0.9972 | | No log | 400.0 | 400 | 4.1645 | 1.1339 | | 1.1325 | 600.0 | 600 | 4.7252 | 1.1197 | | 1.1325 | 800.0 | 800 | 4.9678 | 1.0370 | | 0.0747 | 1000.0 | 1000 | 4.9912 | 0.9915 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
75c4c28f3a799bc0ef8d796c72aac4dc
huggingnft/dooggies
huggingnft
null
5
25
transformers
2
unconditional-image-generation
false
false
false
mit
null
['huggingnft/dooggies']
null
0
0
0
0
0
0
0
['huggingnft', 'nft', 'huggan', 'gan', 'image', 'images', 'unconditional-image-generation']
false
true
true
2,174
false
# Hugging NFT: dooggies ## Disclaimer All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright holder. ## Model description LightWeight GAN model for unconditional generation. NFT collection available [here](https://opensea.io/collection/dooggies). Dataset is available [here](https://huggingface.co/datasets/huggingnft/dooggies). Check Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft). Project repository: [link](https://github.com/AlekseyKorshuk/huggingnft). [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingnft?style=social)](https://github.com/AlekseyKorshuk/huggingnft) ## Intended uses & limitations #### How to use Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft). #### Limitations and bias Check project repository: [link](https://github.com/AlekseyKorshuk/huggingnft). ## Training data Dataset is available [here](https://huggingface.co/datasets/huggingnft/dooggies). ## Training procedure Training script is available [here](https://github.com/AlekseyKorshuk/huggingnft). ## Generated Images Check results with Space: [link](https://huggingface.co/spaces/AlekseyKorshuk/huggingnft). ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingnft?style=social)](https://github.com/AlekseyKorshuk/huggingnft) ### BibTeX entry and citation info ```bibtex @InProceedings{huggingnft, author={Aleksey Korshuk} year=2022 } ```
8758306c3733bbcb1de82a68d16d6a06
Helsinki-NLP/opus-mt-fr-sl
Helsinki-NLP
marian
10
16
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
false
### opus-mt-fr-sl * source languages: fr * target languages: sl * OPUS readme: [fr-sl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sl/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-sl/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sl/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-sl/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.fr.sl | 20.1 | 0.433 |
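As a usage illustration (not in the original OPUS-MT card), here is a minimal French-to-Slovenian translation sketch; the input sentence is made up.

```python
# Hedged sketch: translating French to Slovenian with the MarianMT checkpoint.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-sl"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Bonjour, comment allez-vous ?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```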
daf55cdc110cb7b76d994ab8d17e4756
facebook/regnet-y-10b-seer-in1k
facebook
regnet
14
8
transformers
1
image-classification
true
true
false
apache-2.0
null
['imagenet1k']
null
1
0
1
0
0
0
0
['vision', 'image-classification']
false
true
true
1,411
false
## RegNetY 10B

This gigantic model is a scaled-up [RegNetY](https://arxiv.org/abs/2003.13678) model trained on one billion random images and later fine-tuned on ImageNet.

Disclaimer: The team releasing RegNet did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model:

```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-040")

>>> inputs = feature_extractor(image, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
7524d11e731a1a0b9454982647cff4a6
thatdramebaazguy/roberta-base-MITmovie-squad
thatdramebaazguy
roberta
10
8
transformers
1
question-answering
true
true
true
cc-by-4.0
['English']
['MIT Movie', 'SQuAD']
null
1
1
0
0
0
0
0
['roberta', 'roberta-base', 'question-answering', 'qa', 'movies']
false
true
true
1,538
false
# roberta-base + Task Transfer (NER) --> Domain-Specific QA Objective: This is Roberta Base without any Domain Adaptive Pretraining --> Then trained for the NER task using MIT Movie Dataset --> Then a changed head to do the SQuAD Task. This makes a QA model capable of answering questions in the movie domain, with additional information coming from a different task (NER - Task Transfer). https://huggingface.co/thatdramebaazguy/roberta-base-MITmovie was used as the Roberta Base + NER model. ``` model_name = "thatdramebaazguy/roberta-base-MITmovie-squad" pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="question-answering") ``` ## Overview **Language model:** roberta-base **Language:** English **Downstream-task:** NER --> QA **Training data:** MIT Movie, SQuADv1 **Eval data:** MoviesQA (From https://github.com/ibm-aur-nlp/domain-specific-QA) **Infrastructure**: 4x Tesla v100 **Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/scripts/shell_scripts/movieR_NER_squad.sh) ## Hyperparameters ``` Num examples = 88567 Num Epochs = 3 Instantaneous batch size per device = 32 Total train batch size (w. parallel, distributed & accumulation) = 128 ``` ## Performance ### Eval on MoviesQA - eval_samples = 5032 - exact_match = 55.80286 - f1 = 70.31451 ### Eval on SQuADv1 - exact_match = 85.6859 - f1 = 91.96064 Github Repo: - [Domain-Adaptation Project](https://github.com/adityaarunsinghal/Domain-Adaptation/) ---
b2557ee2dec2b7b1d840ca77f3bd0142
vuiseng9/wav2vec2-base-100h
vuiseng9
wav2vec2
8
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['en']
['librispeech_asr']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition']
false
true
true
1,881
false
# Wav2Vec2-Base-100h This is a fork of [```facebook/wav2vec2-base-100h```](https://huggingface.co/facebook/wav2vec2-base-100h) ### Changes & Notes 1. Document reproducible evaluation (below) to new transformer and datasets version. 2. Use batch size of 1 to reproduce results. 3. Validated with ```transformers v4.15.0```, ```datasets 1.18.0``` 4. You may need to manually install pypkg ```librosa```, ```jiwer``` ## Evaluation This code snippet shows how to evaluate **facebook/wav2vec2-base-100h** on LibriSpeech's "clean" and "other" test data. ```python from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import soundfile as sf import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # librispeech_eval = load_dataset("librispeech_asr", "other", split="test") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h").to("cuda") processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h") def map_to_array(batch): # speech, _ = sf.read(batch["file"]) # batch["speech"] = speech batch["speech"] = batch['audio']['array'] return batch librispeech_eval = librispeech_eval.map(map_to_array) def map_to_pred(batch): input_values = processor(batch["speech"], return_tensors="pt", padding="longest").input_values with torch.no_grad(): logits = model(input_values.to("cuda")).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["speech"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean/test" | "other/test" | |--------------| ------------| | 6.1 | 13.5 |
1626dd240f773f3228cadac02f426e31
KoboldAI/GPT-J-6B-Janeway
KoboldAI
gptj
10
1,700
transformers
1
text-generation
true
false
false
mit
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,786
false
# GPT-J 6B - Janeway ## Model Description GPT-J 6B-Janeway is a finetune created using EleutherAI's GPT-J 6B model. ## Training data The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres. Some parts of the dataset have been prepended using the following text: `[Genre: <genre1>,<genre2>]` ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Janeway') >>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50) [{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt's all right," Janeway said. "I'm certain that you're doing your best to keep me informed of what\'s going on."'}] ``` ### Limitations and Biases The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output. GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model uses the following model as base: ```bibtex @misc{gpt-j, author = {Wang, Ben and Komatsuzaki, Aran}, title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
9243cd47470bdc8c9a3e7ad684b891c6
jkhan447/HateXplain-third-annotator
jkhan447
bert
13
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,018
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HateXplain-third-annotator This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8016 - Accuracy: 0.5913 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
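As a usage illustration (not part of the auto-generated card), here is a minimal sketch of running the checkpoint as a text classifier; the label names depend on how the model was exported and the input sentence is made up.

```python
# Hedged sketch: sequence classification with the HateXplain-trained checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="jkhan447/HateXplain-third-annotator")
print(classifier("I can't believe they said that to her."))  # illustrative input
```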
dceb09aec65479f53655ec36ada6fa76
ydshieh/clip-vit-base-patch32
ydshieh
clip
4
5
transformers
1
summarization
false
true
false
apache-2.0
['en']
['scientific_papers']
null
1
1
0
0
0
0
0
['summarization']
true
true
true
2,756
false
# BigBirdPegasus model (large) BigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. Moreover, BigBird comes along with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. BigBird was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird). Disclaimer: The team releasing BigBird did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts. ## How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-pubmed") # by default encoder-attention is `block_sparse` with num_random_blocks=3, block_size=64 model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-pubmed") # decoder attention type can't be changed & will be "original_full" # you can change `attention_type` (encoder only) to full attention like this: model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-pubmed", attention_type="original_full") # you can change `block_size` & `num_random_blocks` like this: model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-pubmed", block_size=16, num_random_blocks=2) text = "Replace me by any text you'd like." inputs = tokenizer(text, return_tensors='pt') prediction = model.generate(**inputs) prediction = tokenizer.batch_decode(prediction) ``` ## Training Procedure This checkpoint is obtained after fine-tuning `BigBirdPegasusForConditionalGeneration` for **summarization** on **pubmed dataset** from [scientific_papers](https://huggingface.co/datasets/scientific_papers). ## BibTeX entry and citation info ```tex @misc{zaheer2021big, title={Big Bird: Transformers for Longer Sequences}, author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed}, year={2021}, eprint={2007.14062}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
bb415ce90ec6fe2d0cc5d2c91457de55
flax-community/wav2vec2-german
flax-community
wav2vec2
9
4
transformers
0
null
false
false
false
apache-2.0
['de']
['librispeech_asr']
null
0
0
0
0
0
0
0
['speech']
false
true
true
2,537
false
# Wav2Vec2-german model

[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)

The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information.

[Paper](https://arxiv.org/abs/2006.11477)

Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli

**Abstract** We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.

The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.

## Necessary installations:

- sndfile library: `sudo apt-get install libsndfile1-dev`
- ffmpeg: `sudo apt install ffmpeg` & `pip install ffmpeg`

## Model description

`TODO: Update`

## How to use

`TODO: Update`

```python
from transformers import Wav2Vec2Processor, FlaxWav2Vec2Model
from datasets import load_dataset
import soundfile as sf

model_id = "flax-community/wav2vec2-german"

processor = Wav2Vec2Processor.from_pretrained(model_id)
model = FlaxWav2Vec2Model.from_pretrained(model_id)

# Read each audio file into a raw waveform array
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)

input_values = processor(ds["speech"][0], return_tensors="np").input_values  # Batch size 1
hidden_states = model(input_values).last_hidden_state
```

## Training Data

`TODO: Update`

## Training Procedure

`TODO: Update`
57425f440c6e8bbb30fe9c13a5d7a819
ericRosello/distilbert-base-uncased-finetuned-squad-frozen-v2
ericRosello
distilbert
12
24
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,355
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2104

## Model description

Most base model weights were frozen, leaving only the last layer (qa outputs) and the last 3 layers of the encoder to be fine-tuned.

## Training and evaluation data

Achieved EM: 73.519394512772, F1: 82.71779517079237

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3937        | 1.0   | 5533  | 1.2915          |
| 1.1522        | 2.0   | 11066 | 1.2227          |
| 1.0055        | 3.0   | 16599 | 1.2104          |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
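As a usage illustration (not part of the auto-generated card), here is a minimal extractive-QA sketch with the partially frozen checkpoint described above; question and context are made up.

```python
# Hedged sketch: extractive question answering with the fine-tuned DistilBERT checkpoint.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="ericRosello/distilbert-base-uncased-finetuned-squad-frozen-v2",
)
answer = qa(
    question="Which layers were fine-tuned?",
    context="Only the QA output layer and the last three encoder layers were fine-tuned.",
)
print(answer["answer"], answer["score"])
```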
01fb0cf89a5b20179ffb166224033954
Geotrend/bert-base-ur-cased
Geotrend
bert
8
22
transformers
0
fill-mask
true
true
true
apache-2.0
['ur']
['wikipedia']
null
0
0
0
0
0
0
0
[]
false
true
true
1,283
false
# bert-base-ur-cased We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages. Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-ur-cased") model = AutoModel.from_pretrained("Geotrend/bert-base-ur-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermbert, title={Load What You Need: Smaller Versions of Mutlilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact amine@geotrend.fr for any question, feedback or request.
8e5f1d00e628b6c824fae47dde6e8e13
Lancelot53/try1
Lancelot53
vit
7
0
transformers
0
image-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,590
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # try1 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6966 - Precision: 0.4569 - Recall: 0.4569 - F1: 0.4569 - Pf1: 0.0597 - Accuracy: 0.4569 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Pf1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:------:|:--------:| | 0.704 | 1.72 | 100 | 0.6920 | 0.5409 | 0.5409 | 0.5409 | 0.6979 | 0.5409 | | 0.6924 | 3.45 | 200 | 0.6966 | 0.4569 | 0.4569 | 0.4569 | 0.0597 | 0.4569 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
91d95addedd453d886c8244651af472b
mlagrand/xlm-roberta-base-finetuned-panx-de
mlagrand
xlm-roberta
18
11
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,109
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.01 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:---:| | No log | 0.01 | 6 | 1.0252 | 0.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
0dd3992273a25c00ec9072c1818e5ba0
kosec39/marian-finetuned-kde4-en-to-fr
kosec39
marian
14
3
transformers
0
translation
true
false
false
apache-2.0
null
['kde4']
null
0
0
0
0
0
0
0
['translation', 'generated_from_trainer']
true
true
true
1,075
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.8560 - Bleu: 52.8311 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
94ac41d2c40d8e1545cb76628374fe1d
jonatasgrosman/exp_w2v2t_pt_no-pretraining_s84
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['pt']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'pt']
false
true
true
413
false
# exp_w2v2t_pt_no-pretraining_s84 Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
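As a usage illustration (not in the original card), here is a minimal sketch with the plain transformers ASR pipeline, as an alternative to the HuggingSound tool mentioned above; the audio path is a placeholder and, per the note above, the recording should be sampled at 16kHz.

```python
# Hedged sketch: Portuguese speech recognition via the transformers pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2t_pt_no-pretraining_s84",
)
print(asr("/path/to/portuguese_sample.wav")["text"])  # placeholder path
```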
a50d9dc8091540455a3d6102f421e609
okite97/xlm-roberta-base-finetune-panx-de
okite97
xlm-roberta
11
26
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,318
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetune-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1405 - F1: 0.8611 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2542 | 1.0 | 787 | 0.1788 | 0.8083 | | 0.1307 | 2.0 | 1574 | 0.1371 | 0.8488 | | 0.0784 | 3.0 | 2361 | 0.1405 | 0.8611 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
d64e438583ff5b49b47766ff0fd68ccb
helgespieker/ddpm-butterflies-128
helgespieker
null
11
0
diffusers
0
null
false
false
false
apache-2.0
['en']
['huggan/smithsonian_butterflies_subset']
null
0
0
0
0
0
0
0
[]
false
true
true
1,234
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/helgespieker/ddpm-butterflies-128/tensorboard?#scalars)
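A possible snippet for the TODO left in the card above, offered as an illustration only; it assumes the repository stores standard Diffusers `DDPMPipeline` weights.

```python
# Hedged sketch: unconditional butterfly-image sampling with the DDPM pipeline.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("helgespieker/ddpm-butterflies-128")
image = pipeline().images[0]  # one 128x128 sample
image.save("butterfly.png")
```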
ca6d9d688e7c5802518319ed731bd643
WillHeld/t5-small-pointer-mtop
WillHeld
mt5
37
3
transformers
0
text2text-generation
true
false
false
apache-2.0
['en']
['mtop']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,189
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-pointer-mtop This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the mtop dataset. It achieves the following results on the evaluation set: - Loss: 0.1202 - Exact Match: 0.7445 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Exact Match | |:-------------:|:-----:|:----:|:---------------:|:-----------:| | 2.1451 | 6.65 | 200 | 0.5966 | 0.0134 | | 0.4695 | 13.33 | 400 | 0.2264 | 0.2998 | | 0.2229 | 19.98 | 600 | 0.1446 | 0.4649 | | 0.1389 | 26.65 | 800 | 0.1227 | 0.5154 | | 0.097 | 33.33 | 1000 | 0.1213 | 0.5221 | | 0.0724 | 39.98 | 1200 | 0.1202 | 0.5365 | | 0.0562 | 46.65 | 1400 | 0.1207 | 0.5436 | | 0.0457 | 53.33 | 1600 | 0.1240 | 0.5441 | | 0.0399 | 59.98 | 1800 | 0.1349 | 0.5441 | | 0.0317 | 66.65 | 2000 | 0.1369 | 0.5477 | | 0.0271 | 73.33 | 2200 | 0.1409 | 0.5490 | | 0.0237 | 79.98 | 2400 | 0.1462 | 0.5539 | | 0.0207 | 86.65 | 2600 | 0.1470 | 0.5517 | | 0.0188 | 93.33 | 2800 | 0.1505 | 0.5508 | | 0.0174 | 99.98 | 3000 | 0.1505 | 0.5512 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu117 - Datasets 2.7.1 - Tokenizers 0.13.2
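As a usage illustration (not part of the auto-generated card), here is a minimal sketch of generating an MTOP-style parse with the seq2seq checkpoint; the utterance is made up and the exact output format is not documented in the card.

```python
# Hedged sketch: seq2seq generation with the pointer-style MTOP checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "WillHeld/t5-small-pointer-mtop"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("set an alarm for 7 am tomorrow", return_tensors="pt")  # illustrative utterance
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```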
3614dd54be80da83874a46194deb7e3d
PiyarSquare/sd_asim_simpsons
PiyarSquare
null
13
0
null
24
null
false
false
false
creativeml-openrail-m
null
null
null
2
2
0
0
1
1
0
[]
false
true
true
4,186
false
### 💥🎨 The Simpsons dreambooth model. This is a fine-tuned Stable Diffusion model based on The Simpsons. Use **asim style** in your prompts. The model has some trouble with double pupils and no pupils. Using "cross-eyed" in the negative prompt appears to help? ### Sample images: Samples are made with [dynamic prompts](https://github.com/adieyal/sd-dynamic-prompts), Euler 80 steps @ CFG 12. Negative prompts: watermark, text, signature, cross-eyed ![asim.jpg](https://huggingface.co/PiyarSquare/sd_asim_simpsons/resolve/main/grid_famous_people.png) ![asim.jpg](https://huggingface.co/PiyarSquare/sd_asim_simpsons/resolve/main/grid_famous_people2.png) ![asim.jpg](https://huggingface.co/PiyarSquare/sd_asim_simpsons/resolve/main/grid_characters.png) For people / characters: asim style. dramatic beautiful { headshot | portrait } of \_\_person\_\_ {outside { in a garden | in a desert | on a mountain top | at a roman ruin} {at sunrise | at sunset | on an overcast afternoon | in the rain | in the snow | at night} | inside {a fancy living room | on a movie set | a vast empty dark space | a kaleidoscope | an ancient library} with {spotlights | neon lights | soft mood lighting | firefly lights } }. detailed background. ![asim.jpg](https://huggingface.co/PiyarSquare/sd_asim_simpsons/resolve/main/grid_animals.png) For animals: asim style. dramatic closeup national geographic image of a \_\_animal\_\_ in its natural habitat. at {sunrise|sunset|night}. detailed background. ![asim.jpg](https://huggingface.co/PiyarSquare/sd_asim_simpsons/resolve/main/grid_buildings.png) asim style. + random prompt from the internet of cool looking structures: steampunk library, tower of babel, tree house, haunted victorian. ![asim.jpg](https://huggingface.co/PiyarSquare/sd_asim_simpsons/resolve/main/grid_landscapes.png) ![asim.jpg](https://huggingface.co/PiyarSquare/sd_asim_simpsons/resolve/main/grid_famous_places.png) biomes: asim style. a beautiful {summer | autumn | winter | spring } landscape panorama painting of \_\_biome\_\_ {at sunrise | at sunset | on an overcast afternoon | in the rain | in the snow | at night} famous places: asim style. a beautiful panorama view of \_\_places\_\_ {at sunrise | at sunset | on a cloudy afternoon | in the rain | covered in snow}. ![asim.jpg](https://huggingface.co/PiyarSquare/sd_asim_simpsons/resolve/main/grid_flowers.png) flowers: asim style. a beautiful vase of \_\_flower\_\_ flowers. on a balcony table at { sunrise | sunset | night} . nearby a {bottle of {beer | wine} and a half-empty glass | bowl of fruit}. ![asim.jpg](https://huggingface.co/PiyarSquare/sd_asim_simpsons/resolve/main/grid_internet_examples.png) ![asim.jpg](https://huggingface.co/PiyarSquare/sd_asim_simpsons/resolve/main/grid_internet_examples2.png) asim style. + random prompt from the internet. The model mixes well with existing prompts with artists and styles, though not so well with keywords like "photo-realistic." Based on StableDiffusion 1.5 model (full weights). ### Training Made with [automatic1111 webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) + [d8ahazard dreambooth extension](https://github.com/d8ahazard/sd_dreambooth_extension) + [nitrosocke guide](https://github.com/nitrosocke/dreambooth-training-guide). 100 hand-cut training images. About 70% people, 20% landscapes and 10% animals and objects. Maybe one too many Cletus. Detailed captions were written for each image such as: "A wide shot of a 40-year-old Caucasian man with glasses and a mustache. 
Dressed in a fishing hat, pink shirt, an olive fishing vest with pockets and brown trousers, sitting in a canoe on a lake. The man is fishing with a red fishing rod. There are trees and mountains in the background at sunset with a few clouds in the sky." The learning rate was 1.72e-6 for 10,000 steps without prior preservation. Useful tips came from the r/StableDiffusion subreddit and the discussions on d8ahazard's extension; notes on training are in the [d8ahazard dreambooth extension discussion](https://github.com/d8ahazard/sd_dreambooth_extension/discussions/443). I am excited to see what people do with this, and I would like to improve the eyes if anyone has suggestions.
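### Diffusers usage (sketch) A minimal sketch of prompting the model with 🧨 diffusers is shown below. It assumes this repository also hosts diffusers-format weights; if only the `.ckpt` is available, load that in the webui instead. The prompt is just an illustration of the **asim style** token together with the suggested negative prompt, and the Euler sampler / 80 steps / CFG 12 settings mirror the sample grids above.

```python
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

# Loading the repo id directly is an assumption; fall back to the .ckpt + webui if it fails.
pipe = StableDiffusionPipeline.from_pretrained(
    "PiyarSquare/sd_asim_simpsons", torch_dtype=torch.float16
).to("cuda")

# Match the Euler sampler used for the sample images.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

prompt = "asim style. dramatic beautiful portrait of an astronaut in a garden at sunset. detailed background."
negative_prompt = "watermark, text, signature, cross-eyed"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=80,
    guidance_scale=12,
).images[0]
image.save("asim_astronaut.png")
```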
de507e9ffa55b3583f0c79b768be549a
bthomas/article2KW_test1.3b_barthez-orangesum-title_finetuned_for_summerization
bthomas
mbart
10
4
transformers
0
summarization
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['summarization', 'generated_from_trainer']
true
true
true
1,592
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # article2KW_test1.3b_barthez-orangesum-title_finetuned_for_summerization This model is a fine-tuned version of [moussaKam/barthez-orangesum-title](https://huggingface.co/moussaKam/barthez-orangesum-title) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2702 - Rouge1: 0.2711 - Rouge2: 0.0683 - Rougel: 0.2714 - Rougelsum: 0.2718 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 1.7922 | 1.0 | 1036 | 1.4273 | 0.2704 | 0.0752 | 0.2711 | 0.2721 | | 1.3346 | 2.0 | 2072 | 1.3165 | 0.2555 | 0.0610 | 0.2550 | 0.2564 | | 1.1571 | 3.0 | 3108 | 1.2702 | 0.2711 | 0.0683 | 0.2714 | 0.2718 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.11.0 - Datasets 2.3.2 - Tokenizers 0.11.0
8e6bdc8f8b0789989aba83ea5c57e250
ankurani/albert-base-v2-finetuned-ner
ankurani
albert
9
9
transformers
0
token-classification
true
false
false
apache-2.0
null
['plod-filtered']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,438
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-finetuned-ner This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the plod-filtered dataset. It achieves the following results on the evaluation set: - Loss: 0.0319 - Precision: 0.9890 - Recall: 0.9881 - F1: 0.9886 - Accuracy: 0.9884 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0649 | 1.0 | 3018 | 0.0471 | 0.9838 | 0.9814 | 0.9826 | 0.9818 | | 0.0442 | 2.0 | 6036 | 0.0319 | 0.9890 | 0.9881 | 0.9886 | 0.9884 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
93b09eb01b4e107aca77f4bcc76bc2b1
nbroad/rob-base-superqa2
nbroad
roberta
22
5
transformers
0
question-answering
true
false
false
mit
null
['squad_v2', 'quoref', 'adversarial_qa', 'duorc']
null
5
2
3
0
0
0
0
['generated_from_trainer']
true
true
true
1,056
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rob-base-superqa2 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 256 - total_eval_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.21.1 - Pytorch 1.11.0a0+gita4c10ee - Datasets 2.4.0 - Tokenizers 0.12.1
bdb4e9c0c2423b3e30e6c4791b6acac1
sd-concepts-library/rail-scene
sd-concepts-library
null
9
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,020
false
### Rail Scene on Stable Diffusion This is the `<rail-pov>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<rail-pov> 0](https://huggingface.co/sd-concepts-library/rail-scene/resolve/main/concept_images/3.jpeg) ![<rail-pov> 1](https://huggingface.co/sd-concepts-library/rail-scene/resolve/main/concept_images/0.jpeg) ![<rail-pov> 2](https://huggingface.co/sd-concepts-library/rail-scene/resolve/main/concept_images/1.jpeg) ![<rail-pov> 3](https://huggingface.co/sd-concepts-library/rail-scene/resolve/main/concept_images/2.jpeg)
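For a scripted alternative to the notebooks above, the learned embedding can also be loaded with 🧨 diffusers. This is a minimal sketch: it assumes a recent diffusers release that supports `load_textual_inversion` and a Stable Diffusion 1.x base checkpoint, which is what the concepts library targets.

```python
from diffusers import StableDiffusionPipeline

# Base model choice is an assumption; any SD 1.x checkpoint should work.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/rail-scene")  # registers the <rail-pov> token

image = pipe("a foggy morning commute seen from <rail-pov>", num_inference_steps=50).images[0]
image.save("rail_pov.png")
```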
37cdbd4115c08e431a26949c86ff5959
Rahul-AppOrchid/detr-base-sroie
Rahul-AppOrchid
detr
9
2
transformers
0
object-detection
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
926
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-base-sroie This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.0 - Tokenizers 0.13.2
9f95d4f9f490a1a5a06bca0e67213de1
jonatasgrosman/wav2vec2-large-xlsr-53-russian
jonatasgrosman
wav2vec2
24
2,380
transformers
11
automatic-speech-recognition
true
false
true
apache-2.0
['ru']
['common_voice', 'mozilla-foundation/common_voice_6_0']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'robust-speech-event', 'ru', 'speech', 'xlsr-fine-tuning-week']
true
true
true
4,616
false
# Fine-tuned XLSR-53 large model for speech recognition in Russian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Russian using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint ## Usage The model can be used directly (without a language model) as follows... Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: ```python from huggingsound import SpeechRecognitionModel model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-russian") audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] transcriptions = model.transcribe(audio_paths) ``` Writing your own inference script: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "ru" MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-russian" SAMPLES = 5 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = batch["sentence"].upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_sentences = processor.batch_decode(predicted_ids) for i, predicted_sentence in enumerate(predicted_sentences): print("-" * 100) print("Reference:", test_dataset[i]["sentence"]) print("Prediction:", predicted_sentence) ``` | Reference | Prediction | | ------------- | ------------- | | ОН РАБОТАТЬ, А ЕЕ НЕ УДЕРЖАТЬ НИКАК — БЕГАЕТ ЗА КЛЁШЕМ КАЖДОГО БУЛЬВАРНИКА. | ОН РАБОТАТЬ А ЕЕ НЕ УДЕРЖАТ НИКАК БЕГАЕТ ЗА КЛЕШОМ КАЖДОГО БУЛЬБАРНИКА | | ЕСЛИ НЕ БУДЕТ ВОЗРАЖЕНИЙ, Я БУДУ СЧИТАТЬ, ЧТО АССАМБЛЕЯ СОГЛАСНА С ЭТИМ ПРЕДЛОЖЕНИЕМ. | ЕСЛИ НЕ БУДЕТ ВОЗРАЖЕНИЙ Я БУДУ СЧИТАТЬ ЧТО АССАМБЛЕЯ СОГЛАСНА С ЭТИМ ПРЕДЛОЖЕНИЕМ | | ПАЛЕСТИНЦАМ НЕОБХОДИМО СНАЧАЛА УСТАНОВИТЬ МИР С ИЗРАИЛЕМ, А ЗАТЕМ ДОБИВАТЬСЯ ПРИЗНАНИЯ ГОСУДАРСТВЕННОСТИ. | ПАЛЕСТИНЦАМ НЕОБХОДИМО СНАЧАЛА УСТАНОВИТЬ С НИ МИР ФЕЗРЕЛЕМ А ЗАТЕМ ДОБИВАТЬСЯ ПРИЗНАНИЯ ГОСУДАРСТВЕНСКИ | | У МЕНЯ БЫЛО ТАКОЕ ЧУВСТВО, ЧТО ЧТО-ТО ТАКОЕ ОЧЕНЬ ВАЖНОЕ Я ПРИБАВЛЯЮ. | У МЕНЯ БЫЛО ТАКОЕ ЧУВСТВО ЧТО ЧТО-ТО ТАКОЕ ОЧЕНЬ ВАЖНОЕ Я ПРЕДБАВЛЯЕТ | | ТОЛЬКО ВРЯД ЛИ ПОЙМЕТ. | ТОЛЬКО ВРЯД ЛИ ПОЙМЕТ | | ВРОНСКИЙ, СЛУШАЯ ОДНИМ УХОМ, ПЕРЕВОДИЛ БИНОКЛЬ С БЕНУАРА НА БЕЛЬ-ЭТАЖ И ОГЛЯДЫВАЛ ЛОЖИ. | ЗЛАЗКИ СЛУШАЮ ОТ ОДНИМ УХАМ ТЫ ВОТИ В ВИНОКОТ СПИЛА НА ПЕРЕТАЧ И ОКЛЯДЫВАЛ БОСУ | | К СОЖАЛЕНИЮ, СИТУАЦИЯ ПРОДОЛЖАЕТ УХУДШАТЬСЯ. | К СОЖАЛЕНИЮ СИТУАЦИИ ПРОДОЛЖАЕТ УХУЖАТЬСЯ | | ВСЁ ЖАЛОВАНИЕ УХОДИЛО НА ДОМАШНИЕ РАСХОДЫ И НА УПЛАТУ МЕЛКИХ НЕПЕРЕВОДИВШИХСЯ ДОЛГОВ. 
| ВСЕ ЖАЛОВАНИЕ УХОДИЛО НА ДОМАШНИЕ РАСХОДЫ И НА УПЛАТУ МЕЛКИХ НЕ ПЕРЕВОДИВШИХСЯ ДОЛГОВ | | ТЕПЕРЬ ДЕЛО, КОНЕЧНО, ЗА ТЕМ, ЧТОБЫ ПРЕВРАТИТЬ СЛОВА В ДЕЛА. | ТЕПЕРЬ ДЕЛАЮ КОНЕЧНО ЗАТЕМ ЧТОБЫ ПРЕВРАТИТЬ СЛОВА В ДЕЛА | | ДЕВЯТЬ | ЛЕВЕТЬ | ## Evaluation 1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-russian --dataset mozilla-foundation/common_voice_6_0 --config ru --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-russian --dataset speech-recognition-community-v2/dev_data --config ru --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ``` ## Citation If you want to cite this model you can use this: ```bibtex @misc{grosman2021xlsr53-large-russian, title={Fine-tuned {XLSR}-53 large model for speech recognition in {R}ussian}, author={Grosman, Jonatas}, howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-russian}}, year={2021} } ```
8eeddeb6105d639710248aee76b83e59
echo840/ddpm-butterflies-128
echo840
null
13
2
diffusers
0
null
false
false
false
apache-2.0
['en']
['huggan/smithsonian_butterflies_subset']
null
0
0
0
0
0
0
0
[]
false
true
true
1,229
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/echo840/ddpm-butterflies-128/tensorboard?#scalars)
8bd5605645e78b0c1e2deecc18a45709
agudelozc/distilroberta-base-mrpc-glu-cristian-agudelo
agudelozc
roberta
15
9
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,332
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-mrpc-glu-cristian-agudelo This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.9131 - Accuracy: 0.8211 - F1: 0.8713 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.285 | 1.09 | 500 | 0.8959 | 0.8407 | 0.8845 | | 0.2653 | 2.18 | 1000 | 0.9131 | 0.8211 | 0.8713 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
8e8e0422ab9d6f176ab01427bb839467
SuperAI2-Machima/mt5-small-thai-yes-no-qg
SuperAI2-Machima
mt5
9
5
transformers
0
text2text-generation
true
false
false
mit
['thai', 'th']
['NSC2018', 'wiki-documents-nsc', 'ThaiQACorpus-DevelopmentDataset']
null
1
1
0
0
0
0
0
['Yes No question-generation']
false
true
true
1,251
false
[SuperAI Engineer Season 2](https://superai.aiat.or.th/) , [Machima](https://machchima.superai.me/) [Google's mT5](https://github.com/google-research/multilingual-t5) , [Pollawat](https://huggingface.co/Pollawat/mt5-small-thai-qg) ```python from transformers import T5Tokenizer, T5ForConditionalGeneration, T5Config model = T5ForConditionalGeneration.from_pretrained('SuperAI2-Machima/mt5-small-thai-yes-no-qg') tokenizer = T5Tokenizer.from_pretrained('SuperAI2-Machima/mt5-small-thai-yes-no-qg') source_text = 'บุกยึดไม้เถื่อน อดีต ส.ส.บุรีรัมย์ เตรียมสร้างคฤหาสน์ทรงไทย 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น' print('Predicted Summary Text : ') tokenized_text = tokenizer.encode(source_text, return_tensors="pt") summary_ids = model.generate(tokenized_text, num_beams=4, no_repeat_ngram_size=2, max_length=50, early_stopping=True) output = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print(output) #Predicted Summary Text : #answer: 80 แผ่น question: ตํารวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่ากี่แผ่น ```
461ec07ba1799346487cc9327229d902
GItaf/GPT2-CLS-Finetuned-MBTI-gpt2-mc-weight0.25-epoch5-CLS-ppl
GItaf
gpt2
10
7
transformers
0
text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
899
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GPT2-CLS-Finetuned-MBTI-gpt2-mc-weight0.25-epoch5-CLS-ppl This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
d3ad00fa7b6e1dc1184986be09d94d11
stevemobs/deberta-base-combined-squad1-aqa-newsqa-50-and-newsqa-50
stevemobs
deberta
15
10
transformers
0
question-answering
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,300
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-base-combined-squad1-aqa-newsqa-50-and-newsqa-50 This model is a fine-tuned version of [stevemobs/deberta-base-combined-squad1-aqa-newsqa-50](https://huggingface.co/stevemobs/deberta-base-combined-squad1-aqa-newsqa-50) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4881 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.6957 | 1.0 | 8681 | 0.5072 | | 0.4264 | 2.0 | 17362 | 0.4881 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
26b9d7e5a488177c5226f56f45009436
google/t5-efficient-large-dl2
google
t5
12
7
transformers
0
text2text-generation
true
true
true
apache-2.0
['en']
['c4']
null
0
0
0
0
0
0
0
['deep-narrow']
false
true
true
6,253
false
# T5-Efficient-LARGE-DL2 (Deep-Narrow version) T5-Efficient-LARGE-DL2 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block. ## Details model architecture This model checkpoint - **t5-efficient-large-dl2** - is of model type **Large** with the following variations: - **dl** is **2** It has **368.53** million parameters and thus requires *ca.* **1474.11 MB** of memory in full precision (*fp32*) or **737.05 MB** of memory in half precision (*fp16* or *bf16*). 
A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh | #Params| | ----| ---- | ---- | ---- | ---- | ---- | ----| | Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M| | Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M| | Small | 6/6 | 2048 | 512 | 32 | 8 | 60M| | Base | 12/12 | 3072 | 768 | 64 | 12 | 220M| | Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M| | Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B| | XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B| whereas the following abbreviations are used: | Abbreviation | Definition | | ----| ---- | | nl | Number of transformer blocks (depth) | | dm | Dimension of embedding vector (output vector of transformers block) | | kv | Dimension of key/value projection matrix | | nh | Number of attention heads | | ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) | | el | Number of transformer blocks in the encoder (encoder depth) | | dl | Number of transformer blocks in the decoder (decoder depth) | | sh | Signifies that attention heads are shared | | skv | Signifies that key-values projection matrices are tied | If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*. ## Pre-Training The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using the span-based masked language modeling (MLM) objective. ## Fine-Tuning **Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage. The checkpoint was pretrained in English and is therefore only useful for English NLP tasks. You can follow one of the following examples on how to fine-tune the model: *PyTorch*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) - [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *Tensorflow*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *JAX/Flax*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. ## Downstream Performance TODO: Add table if available ## Computational Complexity TODO: Add table if available ## More information We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint. 
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv* model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future.
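As a quick sanity check, the checkpoint loads like any other T5 model in 🤗 Transformers. The snippet below is only a minimal sketch: since this is a pretrained-only (span-corruption) checkpoint, the raw generation is not expected to be meaningful until the model has been fine-tuned on a downstream task (see the examples linked above).

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-large-dl2")
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-large-dl2")

# Pretrained-only checkpoint: wrap this model in your usual seq2seq fine-tuning loop
# before relying on its outputs for any task.
inputs = tokenizer(
    "summarize: studies have shown that owning a dog is good for you.",
    return_tensors="pt",
)
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```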
5b209b3d916130c21a79fed8880905df
fathyshalab/all-roberta-large-v1-work-8-16-5
fathyshalab
roberta
11
3
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,509
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-work-8-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3586 - Accuracy: 0.3689 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.8058 | 1.0 | 1 | 2.6169 | 0.2356 | | 2.3524 | 2.0 | 2 | 2.5215 | 0.2978 | | 1.9543 | 3.0 | 3 | 2.4427 | 0.3422 | | 1.5539 | 4.0 | 4 | 2.3874 | 0.36 | | 1.4133 | 5.0 | 5 | 2.3586 | 0.3689 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
dc1ef52850a3f801debabbd73e55206a
gagan3012/model
gagan3012
gpt2
22
4
transformers
0
text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
970
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.6250 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
4c6bf76909bcd2f72f2d8935719af020
yip-i/colab-demo
yip-i
wav2vec2
17
2
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,041
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # colab-demo This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9910 - Wer: 0.9714 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1212 | 2.14 | 500 | 3.6706 | 1.0757 | | 0.2303 | 4.27 | 1000 | 2.6849 | 1.0578 | | 0.3003 | 6.41 | 1500 | 3.2261 | 1.0605 | | 0.2705 | 8.55 | 2000 | 3.3483 | 1.0844 | | 0.2178 | 10.68 | 2500 | 3.2000 | 1.0219 | | 0.1875 | 12.82 | 3000 | 2.2454 | 1.0159 | | 0.1792 | 14.96 | 3500 | 2.7510 | 0.9973 | | 0.1477 | 17.09 | 4000 | 2.6716 | 0.9847 | | 0.1232 | 19.23 | 4500 | 2.5939 | 0.9807 | | 0.1051 | 21.37 | 5000 | 3.3308 | 0.9794 | | 0.0847 | 23.5 | 5500 | 3.3430 | 0.9814 | | 0.0809 | 25.64 | 6000 | 3.2566 | 0.9595 | | 0.0642 | 27.78 | 6500 | 3.6392 | 0.9654 | | 0.0566 | 29.91 | 7000 | 3.9910 | 0.9714 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 1.18.3 - Tokenizers 0.13.1
b436442cfc5918322e4d1b623d389d5e
nst-sat/GlossBERT-finetunedTEST
nst-sat
bert
8
4
transformers
0
fill-mask
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,450
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nst-sat/GlossBERT-finetunedTEST This model is a fine-tuned version of [kanishka/GlossBERT](https://huggingface.co/kanishka/GlossBERT) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 8.1065 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -375, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 8.1065 | 0 | ### Framework versions - Transformers 4.21.2 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
737cec1de13474600f372eb1afbfdaae
anas-awadalla/bart-base-few-shot-k-32-finetuned-squad-seq2seq-seed-0
anas-awadalla
bart
18
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
966
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-few-shot-k-32-finetuned-squad-seq2seq-seed-0 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.11.6
16d0972d4d3c4789fcc91eaa8d97a437
rosamondthalken/t5-base-sci-names
rosamondthalken
t5
4
1
transformers
0
text2text-generation
true
false
false
cc-by-sa-4.0
['en']
null
null
0
0
0
0
0
0
0
['scientific names', 'text generation']
false
true
true
1,391
false
# t5-base-sci-names Biodiversity literature is dedicated to the identification, documentation, and categorization of plants, fungi, animals, and other living organisms. Correctly extracting the name of an organism within these documents involves finding the entire scientific name–including the genus, specific epithet, and author name. Extracting these names allows biologists to access documents about a species more comprehensively, and to track an organism’s history of documentation, which includes biological changes and changes in how scientists describe them. **t5-base-sci-names** uses advances in text-to-text generation to generate scientific names and authors from biodiversity literature. This model was trained on hand-labeled biodiversity texts, including labeled information about a mentioned organism's genus (abbreviated and expanded), specific epithet, and author. This model was trained to output 0-N scientific names with specific prefixes (e.g. "genus = " or "epithet = ") and performs best with anywhere from 20-120 words. You can also use the model in this tutorial for [scientific names generation](https://colab.research.google.com/drive/1GEpnCaMJYiPIhuZiDJ1X1pZsGtGSm8Ds?usp=sharing). Thanks to Damon Little and Nelson Salinas at the New York Botanical Gardens for their support. *Note that this model is still a work in progress. Any feedback is welcome.*
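A minimal generation sketch with 🤗 Transformers is shown below. The sample passage and the prefixed output format ("genus = ", "epithet = ", author fields) follow the description above, but the decoding settings and the expected output string are assumptions; refer to the linked tutorial for the canonical usage.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("rosamondthalken/t5-base-sci-names")
model = AutoModelForSeq2SeqLM.from_pretrained("rosamondthalken/t5-base-sci-names")

# A passage in the 20-120 word range the card recommends (illustrative text).
passage = (
    "Quercus alba L., the white oak, is a large deciduous tree widespread across "
    "eastern North America, where it frequently co-occurs with Quercus rubra in mesic forests."
)

inputs = tokenizer(passage, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# Expected to emit prefixed fields such as "genus = Quercus" and "epithet = alba"
# (exact format assumed from the description above).
```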
4804be668d4b30f95cd10dd0655fbb2e
apple/ane-distilbert-base-uncased-finetuned-sst-2-english
apple
distilbert
13
53
transformers
4
text-classification
true
false
false
apache-2.0
['en']
['sst2']
null
2
0
2
0
0
0
0
[]
false
true
true
2,264
false
# DistilBERT optimized for Apple Neural Engine This is the [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) model, optimized for the Apple Neural Engine (ANE) as described in the article [Deploying Transformers on the Apple Neural Engine](https://machinelearning.apple.com/research/neural-engine-transformers). The source code is taken from Apple's [ml-ane-transformers](https://github.com/apple/ml-ane-transformers) GitHub repo, modified slightly to make it usable from the 🤗 Transformers library. For more details about DistilBERT, we encourage users to check out [this model card](https://huggingface.co/distilbert-base-uncased). ## How to use Usage example: ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer model_checkpoint = "apple/ane-distilbert-base-uncased-finetuned-sst-2-english" tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) model = AutoModelForSequenceClassification.from_pretrained( model_checkpoint, trust_remote_code=True, return_dict=False, ) inputs = tokenizer( ["The Neural Engine is really fast"], return_tensors="pt", max_length=128, padding="max_length", ) with torch.no_grad(): outputs = model(**inputs) ``` ## Using the model with Core ML PyTorch does not utilize the ANE, and running this version of the model with PyTorch on the CPU or GPU may actually be slower than the original. To take advantage of the hardware acceleration of the ANE, use the Core ML version of the model, **DistilBERT_fp16.mlpackage**. Core ML usage example from Python: ```python import numpy as np import coremltools as ct mlmodel = ct.models.MLModel("DistilBERT_fp16.mlpackage") inputs = tokenizer( ["The Neural Engine is really fast"], return_tensors="np", max_length=128, padding="max_length", ) outputs_coreml = mlmodel.predict({ "input_ids": inputs["input_ids"].astype(np.int32), "attention_mask": inputs["attention_mask"].astype(np.int32), }) ``` To use the model from Swift, you will need to tokenize the input yourself according to the BERT rules. You can find a Swift implementation of the [BERT tokenizer here](https://github.com/huggingface/swift-coreml-transformers).
7dfe5f6b176e564928d6aa26f13df4f9
Joblift/jobBERTA-german-QA
Joblift
distilbert
28
15
transformers
0
question-answering
true
false
false
apache-2.0
null
['germanquad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,027
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # jobBERTA_german_QA This model is a fine-tuned version of [Joblift/distilbert-base-german-cased-finetuned-jl](https://huggingface.co/Joblift/distilbert-base-german-cased-finetuned-jl) on the germanquad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu102 - Datasets 2.8.0 - Tokenizers 0.13.2
ddb883e06ada0c3868c86ef8051064a3
stabilityai/stable-diffusion-2-inpainting
stabilityai
null
21
188,827
diffusers
164
text-to-image
false
false
false
openrail++
null
null
null
11
3
6
2
10
7
3
['stable-diffusion', 'text-to-image']
false
true
true
12,976
false
# Stable Diffusion v2 Model Card This model card focuses on the model associated with the Stable Diffusion v2, available [here](https://github.com/Stability-AI/stablediffusion). This `stable-diffusion-2-inpainting` model is resumed from [stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) (`512-base-ema.ckpt`) and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning. ![image](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting/resolve/main/merged-leopards.png) - Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `512-inpainting-ema.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting/resolve/main/512-inpainting-ema.ckpt). - Use it with 🧨 [`diffusers`](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting#examples) ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL) - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)). - **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ## Examples Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 inpainting in a simple and efficient manner. ```bash pip install diffusers transformers accelerate scipy safetensors ``` ```python from diffusers import StableDiffusionInpaintPipeline pipe = StableDiffusionInpaintPipeline.from_pretrained( "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16, ) prompt = "Face of a yellow cat, high resolution, sitting on a park bench" #image and mask_image should be PIL images. 
#The mask structure is white for inpainting and black for keeping as is image = pipe(prompt=prompt, image=image, mask_image=mask_image).images[0] image.save("./yellow_cat_on_park_bench.png") ``` **Notes**: - Despite not being a dependency, we highly recommend you to install [xformers](https://github.com/facebookresearch/xformers) for memory efficient attention (better performance) - If you have low GPU RAM available, make sure to add a `pipe.enable_attention_slicing()` after sending it to `cuda` for less VRAM usage (to the cost of speed) **How it works:** `image` | `mask_image` :-------------------------:|:-------------------------:| <img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" alt="drawing" width="300"/> | <img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" alt="drawing" width="300"/> `prompt` | `Output` :-------------------------:|:-------------------------:| <span style="position: relative;bottom: 150px;">Face of a yellow cat, high resolution, sitting on a park bench</span> | <img src="https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/test.png" alt="drawing" width="300"/> # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. 
## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NFSW detector (see Training section). ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion vw was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic. **Training Procedure** Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through the OpenCLIP-ViT/H text-encoder. - The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512. We currently provide the following checkpoints: - `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. 850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`. - `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset. 
- `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. - `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, are used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://github.com/saic-mdal/lama). - `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752). In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml). - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 1 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 steps DDIM sampling steps show the relative improvements of the checkpoints: ![pareto](model-variants.jpg) Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. - **Hardware Type:** A100 PCIe 40GB - **Hours used:** 200000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq. ## Citation @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } *This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
2a56e4ae2189bab36ab3f1d9601c3660
ybelkada/japanese-roberta-question-answering
ybelkada
roberta
9
398
transformers
1
null
true
false
false
cc-by-sa-3.0
['ja']
['SkelterLabsInc/JaQuAD']
null
0
0
0
0
0
0
0
['question-answering', 'extractive-qa']
false
true
true
1,854
false
# RoBERTa base Japanese - JaQuAD ## Description A Japanese Question Answering model fine-tuned on [JaQuAD](https://huggingface.co/datasets/SkelterLabsInc/JaQuAD). Please refer [RoBERTa base Japanese](https://huggingface.co/rinna/japanese-roberta-base) for details about the pre-training model. The codes for the fine-tuning are available [on this notebook](https://huggingface.co/ybelkada/japanese-roberta-question-answering/blob/main/roberta_ja_qa.ipynb) ## Usage ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer question = 'アレクサンダー・グラハム・ベルは、どこで生まれたの?' context = 'アレクサンダー・グラハム・ベルは、スコットランド生まれの科学者、発明家、工学者である。世界初の>実用的電話の発明で知られている。' model = AutoModelForQuestionAnswering.from_pretrained( 'ybelkada/japanese-roberta-question-answering') tokenizer = AutoTokenizer.from_pretrained( 'ybelkada/japanese-roberta-question-answering') inputs = tokenizer( question, context, add_special_tokens=True, return_tensors="pt") input_ids = inputs["input_ids"].tolist()[0] outputs = model(**inputs) answer_start_scores = outputs.start_logits answer_end_scores = outputs.end_logits # Get the most likely beginning of answer with the argmax of the score. answer_start = torch.argmax(answer_start_scores) # Get the most likely end of answer with the argmax of the score. # 1 is added to `answer_end` because the index pointed by score is inclusive. answer_end = torch.argmax(answer_end_scores) + 1 answer = tokenizer.convert_tokens_to_string( tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end])) # answer = 'スコットランド' ``` ## License The fine-tuned model is licensed under the [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license. ## Miscellaneous The Q&A widget does not work on that model. Tried also with ```Pipeline``` and I was able to reproduce the error, needs a further investigation
278978f037d63ec67481fe63b9ebfa96
edwardjross/xlm-roberta-base-finetuned-panx-en
edwardjross
xlm-roberta
10
14
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,313
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3792 - F1: 0.6918 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0639 | 1.0 | 74 | 0.5075 | 0.5539 | | 0.491 | 2.0 | 148 | 0.4118 | 0.6510 | | 0.355 | 3.0 | 222 | 0.3792 | 0.6918 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
2c6f2ccf03466d30e6045cdfecef8a3f
mariolinml/deberta-v3-base_nli_2x_v0
mariolinml
deberta-v2
16
10
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
973
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-base_nli_2x_v0 This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
1d22e3a8825ad18cacf332bac1906371
nestoralvaro/t5-small-finetuned-xsum
nestoralvaro
t5
12
3
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['xsum']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,415
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.2928 - Rouge1: 21.4274 - Rouge2: 8.18 - Rougel: 21.3234 - Rougelsum: 21.3185 - Gen Len: 4.9993 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.5264 | 1.0 | 12753 | 2.2928 | 21.4274 | 8.18 | 21.3234 | 21.3185 | 4.9993 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
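### Usage sketch

As an illustration, the checkpoint can be queried through the generic summarization pipeline; given the short reported generation length, outputs will be very brief, and the length limits below are only assumptions:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="nestoralvaro/t5-small-finetuned-xsum")

article = (
    "The full text of a news article goes here. XSum-style models are trained to "
    "produce a single-sentence summary of the input document."
)
# max_length / min_length are illustrative; tune them for your inputs.
print(summarizer(article, max_length=60, min_length=5, do_sample=False))
```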
2fea03de4416aaa20e2d222c82b47100
masoumehb/wav2vec2-large-xlsr-turkish-demo-colab
masoumehb
wav2vec2
15
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,080
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-turkish-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.13.3 - Tokenizers 0.10.3
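### Usage sketch

A minimal transcription example with the generic ASR pipeline; the audio path is a placeholder, and the input should be 16 kHz mono audio as expected by XLSR-Wav2Vec2 models:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="masoumehb/wav2vec2-large-xlsr-turkish-demo-colab",
)

# Placeholder path: any 16 kHz mono Turkish recording.
print(asr("path/to/turkish_sample_16khz.wav"))
```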
c83501597ba282875697e77ef4e9a1ef
JuandaBula/distilroberta-base-mrpc-glue-juanda-bula
JuandaBula
roberta
17
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
0
0
0
0
0
0
0
['text-classification', 'generated_from_trainer']
true
true
true
1,330
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-mrpc-glue-juanda-bula This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.5684 - Accuracy: 0.8333 - F1: 0.8707 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5239 | 1.09 | 500 | 0.6723 | 0.7990 | 0.8610 | | 0.3692 | 2.18 | 1000 | 0.5684 | 0.8333 | 0.8707 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cpu - Datasets 2.7.1 - Tokenizers 0.13.2
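### Usage sketch

MRPC is a sentence-pair (paraphrase) task, so the two sentences should be encoded together; the example pair below is made up, and the meaning of each output column depends on the checkpoint's `id2label` mapping:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "JuandaBula/distilroberta-base-mrpc-glue-juanda-bula"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode the sentence pair in one pass, as MRPC-style models expect.
inputs = tokenizer(
    "The company reported strong earnings this quarter.",
    "Quarterly earnings at the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs, model.config.id2label)
```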
f619e723c5252941e18fbd9fa5da7c10
thusken/nb-bert-base-user-needs
thusken
bert
10
2
transformers
0
text-classification
true
false
false
cc-by-4.0
null
null
null
1
0
1
0
0
0
0
['generated_from_trainer']
true
true
true
3,902
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nb-bert-base-user-needs This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) on a dataset of 2000 articles from Bergens Tidende, published between 06/01/2020 and 02/02/2020. These articles are labelled as one of six classes / user needs, as introduced by the [BBC in 2017](https://www.linkedin.com/pulse/five-lessons-i-learned-while-digitally-changing-bbc-world-shishkin/) It achieves the following results on the evaluation set: - Loss: 1.0600 - Accuracy: 0.8479 - F1: 0.8319 - Precision: 0.8315 - Recall: 0.8479 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 25 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 98 | 1.1222 | 0.6263 | 0.5185 | 0.5076 | 0.6263 | | No log | 2.0 | 196 | 1.0066 | 0.7216 | 0.6436 | 0.5899 | 0.7216 | | No log | 3.0 | 294 | 0.8540 | 0.7577 | 0.7037 | 0.6760 | 0.7577 | | No log | 4.0 | 392 | 0.8621 | 0.7603 | 0.6998 | 0.6568 | 0.7603 | | No log | 5.0 | 490 | 0.8062 | 0.7887 | 0.7500 | 0.7449 | 0.7887 | | 0.91 | 6.0 | 588 | 0.7465 | 0.8041 | 0.7660 | 0.7636 | 0.8041 | | 0.91 | 7.0 | 686 | 0.6324 | 0.8247 | 0.8163 | 0.8187 | 0.8247 | | 0.91 | 8.0 | 784 | 0.7333 | 0.7964 | 0.7703 | 0.7740 | 0.7964 | | 0.91 | 9.0 | 882 | 0.6590 | 0.8325 | 0.8208 | 0.8106 | 0.8325 | | 0.91 | 10.0 | 980 | 0.9854 | 0.8196 | 0.7890 | 0.7920 | 0.8196 | | 0.4246 | 11.0 | 1078 | 0.7023 | 0.8247 | 0.8054 | 0.8138 | 0.8247 | | 0.4246 | 12.0 | 1176 | 0.8995 | 0.8325 | 0.8120 | 0.8068 | 0.8325 | | 0.4246 | 13.0 | 1274 | 0.8589 | 0.8299 | 0.8145 | 0.8058 | 0.8299 | | 0.4246 | 14.0 | 1372 | 0.9859 | 0.8376 | 0.8151 | 0.8123 | 0.8376 | | 0.4246 | 15.0 | 1470 | 0.8452 | 0.8402 | 0.8318 | 0.8341 | 0.8402 | | 0.1637 | 16.0 | 1568 | 1.1156 | 0.8351 | 0.8157 | 0.8196 | 0.8351 | | 0.1637 | 17.0 | 1666 | 1.1514 | 0.8325 | 0.8122 | 0.8218 | 0.8325 | | 0.1637 | 18.0 | 1764 | 1.0092 | 0.8428 | 0.8266 | 0.8320 | 0.8428 | | 0.1637 | 19.0 | 1862 | 1.0368 | 0.8351 | 0.8229 | 0.8287 | 0.8351 | | 0.1637 | 20.0 | 1960 | 1.0600 | 0.8479 | 0.8319 | 0.8315 | 0.8479 | | 0.0391 | 21.0 | 2058 | 1.1046 | 0.8428 | 0.8293 | 0.8269 | 0.8428 | | 0.0391 | 22.0 | 2156 | 1.1178 | 0.8454 | 0.8262 | 0.8280 | 0.8454 | | 0.0391 | 23.0 | 2254 | 1.1103 | 0.8428 | 0.8268 | 0.8295 | 0.8428 | | 0.0391 | 24.0 | 2352 | 1.1179 | 0.8428 | 0.8274 | 0.8313 | 0.8428 | | 0.0391 | 25.0 | 2450 | 1.1134 | 0.8402 | 0.8233 | 0.8254 | 0.8402 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
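### Usage sketch

Since the six user-need labels live in the checkpoint's config, a plain text-classification pipeline is enough to try it; the Norwegian input below is only a placeholder:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="thusken/nb-bert-base-user-needs")

# Placeholder: pass the article text (or lead paragraph) you want to classify.
print(classifier("Her er teksten til en norsk nyhetsartikkel ..."))
```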
f0b78dee7c5fba636b185022492dca7a
pszemraj/deberta-v3-xsmall-CoLA
pszemraj
deberta-v2
17
16
transformers
0
text-classification
true
false
false
mit
['en']
['glue']
null
1
0
1
0
0
0
0
['generated_from_trainer']
true
true
true
1,693
false
# deberta-v3-xsmall-CoLA This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.4237 - Matthews Correlation: 0.5895 ## Model description Trying to find a decent optimum between accuracy/quality and inference speed. ```json { "epoch": 3.0, "eval_loss": 0.423, "eval_matthews_correlation": 0.589, "eval_runtime": 5.0422, "eval_samples": 1043, "eval_samples_per_second": 206.853, "eval_steps_per_second": 51.763 } ``` ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 32 - eval_batch_size: 4 - seed: 16105 - distributed_type: multi-GPU - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.3945 | 1.0 | 67 | 0.4323 | 0.5778 | | 0.3214 | 2.0 | 134 | 0.4237 | 0.5895 | | 0.3059 | 3.0 | 201 | 0.4636 | 0.5795 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.8.0 - Tokenizers 0.13.1
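### Usage sketch

A quick way to exercise the accuracy/speed trade-off is the text-classification pipeline; the sentences below are illustrative, and the label strings follow whatever `id2label` the checkpoint defines:

```python
from transformers import pipeline

# CoLA is a single-sentence grammatical-acceptability task.
classifier = pipeline("text-classification", model="pszemraj/deberta-v3-xsmall-CoLA")

print(classifier("The books was sitting on the table."))  # likely flagged as unacceptable
print(classifier("The book was sitting on the table."))   # likely flagged as acceptable
```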
96fcb177d5ae4ace6b0a27459f8fffe7
muhtasham/mini-mlm-tweet-target-imdb
muhtasham
bert
10
4
transformers
0
text-classification
true
false
false
apache-2.0
null
['imdb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,539
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mini-mlm-tweet-target-imdb This model is a fine-tuned version of [muhtasham/mini-mlm-tweet](https://huggingface.co/muhtasham/mini-mlm-tweet) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4742 - Accuracy: 0.8324 - F1: 0.9085 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4141 | 0.64 | 500 | 0.2415 | 0.9025 | 0.9487 | | 0.3008 | 1.28 | 1000 | 0.2407 | 0.9046 | 0.9499 | | 0.2573 | 1.92 | 1500 | 0.2428 | 0.904 | 0.9496 | | 0.2164 | 2.56 | 2000 | 0.3198 | 0.8753 | 0.9335 | | 0.1918 | 3.2 | 2500 | 0.4742 | 0.8324 | 0.9085 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.12.1 - Datasets 2.7.1 - Tokenizers 0.13.2
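### Usage sketch

The checkpoint can be tried with the standard text-classification pipeline; the review text is a made-up example and the label names come from the model config:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="muhtasham/mini-mlm-tweet-target-imdb",
)

print(classifier("A surprisingly touching film with terrific performances."))
```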
81920bdcaf4556a5ff21148607957635
sd-concepts-library/phan-s-collage
sd-concepts-library
null
9
0
null
1
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,038
false
### Phan's Collage on Stable Diffusion This is the `<pcollage>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<pcollage> 0](https://huggingface.co/sd-concepts-library/phan-s-collage/resolve/main/concept_images/1.jpeg) ![<pcollage> 1](https://huggingface.co/sd-concepts-library/phan-s-collage/resolve/main/concept_images/2.jpeg) ![<pcollage> 2](https://huggingface.co/sd-concepts-library/phan-s-collage/resolve/main/concept_images/0.jpeg) ![<pcollage> 3](https://huggingface.co/sd-concepts-library/phan-s-collage/resolve/main/concept_images/3.jpeg)
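### Usage sketch

Besides the notebooks above, recent versions of 🧨 Diffusers can load the embedding directly via `load_textual_inversion`; the base checkpoint, dtype, device and prompt below are assumptions, not part of this concept repository:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base model; any Stable Diffusion 1.x checkpoint with a compatible text encoder should work.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned <pcollage> embedding from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/phan-s-collage")

image = pipe("a city skyline in the style of <pcollage>").images[0]
image.save("pcollage_skyline.png")
```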
d59ef9a16b029b60efd74499f5064902
lincoln/camembert-squadFR-fquad-piaf-answer-extraction
lincoln
camembert
11
7
transformers
0
token-classification
true
false
false
mit
['fr']
['squadFR', 'fquad', 'piaf']
null
0
0
0
0
0
0
0
['camembert', 'answer extraction']
false
true
true
9,016
false
# Answer extraction This model is _fine-tuned_ from the [camembert-base](https://huggingface.co/camembert-base) model for a token classification task. The goal is to identify the likely spans of tokens that could be the target of a question. ## Training data The training set is the concatenation of the SquadFR, [fquad](https://huggingface.co/datasets/fquad) and [piaf](https://huggingface.co/datasets/piaf) datasets. The answers in each context were tagged with the label "ANS". Volume (number of contexts): * train: 24,652 * test: 1,370 * valid: 1,370 ## Training Training was performed on a Tesla K80 card. * Batch size: 16 * Weight decay: 0.01 * Learning rate: 2e-5 (linearly decayed) * Default parameters of the [TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments) class * Total steps: 1,000 The model appears to overfit beyond that point: ![Loss](assets/loss_m_sl_sota_2.PNG) ## Limitations The model does not achieve good performance, and its output must be corrected after prediction to be coherent. The classification task is not straightforward, since the model has to identify groups of tokens _knowing_ that a question could be asked about them. ![Performance](assets/perfs_m_sl_sota_2.PNG) ## Usage _This model is a proof of concept; we do not guarantee its performance._ ```python from transformers import AutoTokenizer, AutoModelForTokenClassification import numpy as np model_name = "lincoln/camembert-squadFR-fquad-piaf-answer-extraction" loaded_tokenizer = AutoTokenizer.from_pretrained(model_name) loaded_model = AutoModelForTokenClassification.from_pretrained(model_name) text = "La science des données est un domaine interdisciplinaire qui utilise des méthodes, des processus,\ des algorithmes et des systèmes scientifiques pour extraire des connaissances et des idées de nombreuses données structurelles et non structurées.\ Elle est souvent associée aux données massives et à l'analyse des données."
inputs = loaded_tokenizer(text, return_tensors="pt", return_offsets_mapping=True) outputs = loaded_model(inputs.input_ids).logits probs = 1 / (1 + np.exp(-outputs.detach().numpy())) probs[:, :, 1][0] = np.convolve(probs[:, :, 1][0], np.ones(2), 'same') / 2 sentences = loaded_tokenizer.tokenize(text, add_special_tokens=False) prob_answer_tokens = probs[:, 1:-1, 1].flatten().tolist() offset_start_mapping = inputs.offset_mapping[:, 1:-1, 0].flatten().tolist() offset_end_mapping = inputs.offset_mapping[:, 1:-1, 1].flatten().tolist() threshold = 0.4 entities = [] for ix, (token, prob_ans, offset_start, offset_end) in enumerate(zip(sentences, prob_answer_tokens, offset_start_mapping, offset_end_mapping)): entities.append({ 'entity': 'ANS' if prob_ans > threshold else 'O', 'score': prob_ans, 'index': ix, 'word': token, 'start': offset_start, 'end': offset_end }) for p in entities: print(p) # {'entity': 'O', 'score': 0.3118681311607361, 'index': 0, 'word': '▁La', 'start': 0, 'end': 2} # {'entity': 'O', 'score': 0.37866950035095215, 'index': 1, 'word': '▁science', 'start': 3, 'end': 10} # {'entity': 'ANS', 'score': 0.45018652081489563, 'index': 2, 'word': '▁des', 'start': 11, 'end': 14} # {'entity': 'ANS', 'score': 0.4615934491157532, 'index': 3, 'word': '▁données', 'start': 15, 'end': 22} # {'entity': 'O', 'score': 0.35033443570137024, 'index': 4, 'word': '▁est', 'start': 23, 'end': 26} # {'entity': 'O', 'score': 0.24779987335205078, 'index': 5, 'word': '▁un', 'start': 27, 'end': 29} # {'entity': 'O', 'score': 0.27084410190582275, 'index': 6, 'word': '▁domaine', 'start': 30, 'end': 37} # {'entity': 'O', 'score': 0.3259460926055908, 'index': 7, 'word': '▁in', 'start': 38, 'end': 40} # {'entity': 'O', 'score': 0.371802419424057, 'index': 8, 'word': 'terdisciplinaire', 'start': 40, 'end': 56} # {'entity': 'O', 'score': 0.3140853941440582, 'index': 9, 'word': '▁qui', 'start': 57, 'end': 60} # {'entity': 'O', 'score': 0.2629334330558777, 'index': 10, 'word': '▁utilise', 'start': 61, 'end': 68} # {'entity': 'O', 'score': 0.2968383729457855, 'index': 11, 'word': '▁des', 'start': 69, 'end': 72} # {'entity': 'O', 'score': 0.33898216485977173, 'index': 12, 'word': '▁méthodes', 'start': 73, 'end': 81} # {'entity': 'O', 'score': 0.3776060938835144, 'index': 13, 'word': ',', 'start': 81, 'end': 82} # {'entity': 'O', 'score': 0.3710060119628906, 'index': 14, 'word': '▁des', 'start': 83, 'end': 86} # {'entity': 'O', 'score': 0.35908180475234985, 'index': 15, 'word': '▁processus', 'start': 87, 'end': 96} # {'entity': 'O', 'score': 0.3890596628189087, 'index': 16, 'word': ',', 'start': 96, 'end': 97} # {'entity': 'O', 'score': 0.38341325521469116, 'index': 17, 'word': '▁des', 'start': 101, 'end': 104} # {'entity': 'O', 'score': 0.3743852376937866, 'index': 18, 'word': '▁', 'start': 105, 'end': 106} # {'entity': 'O', 'score': 0.3943936228752136, 'index': 19, 'word': 'algorithme', 'start': 105, 'end': 115} # {'entity': 'O', 'score': 0.39456743001937866, 'index': 20, 'word': 's', 'start': 115, 'end': 116} # {'entity': 'O', 'score': 0.3846966624259949, 'index': 21, 'word': '▁et', 'start': 117, 'end': 119} # {'entity': 'O', 'score': 0.367380827665329, 'index': 22, 'word': '▁des', 'start': 120, 'end': 123} # {'entity': 'O', 'score': 0.3652925491333008, 'index': 23, 'word': '▁systèmes', 'start': 124, 'end': 132} # {'entity': 'O', 'score': 0.3975735306739807, 'index': 24, 'word': '▁scientifiques', 'start': 133, 'end': 146} # {'entity': 'O', 'score': 0.36417365074157715, 'index': 25, 'word': '▁pour', 'start': 147, 'end': 
151} # {'entity': 'O', 'score': 0.32438698410987854, 'index': 26, 'word': '▁extraire', 'start': 152, 'end': 160} # {'entity': 'O', 'score': 0.3416857123374939, 'index': 27, 'word': '▁des', 'start': 161, 'end': 164} # {'entity': 'O', 'score': 0.3674810230731964, 'index': 28, 'word': '▁connaissances', 'start': 165, 'end': 178} # {'entity': 'O', 'score': 0.38362061977386475, 'index': 29, 'word': '▁et', 'start': 179, 'end': 181} # {'entity': 'O', 'score': 0.364640474319458, 'index': 30, 'word': '▁des', 'start': 182, 'end': 185} # {'entity': 'O', 'score': 0.36050117015838623, 'index': 31, 'word': '▁idées', 'start': 186, 'end': 191} # {'entity': 'O', 'score': 0.3768993020057678, 'index': 32, 'word': '▁de', 'start': 192, 'end': 194} # {'entity': 'O', 'score': 0.39184248447418213, 'index': 33, 'word': '▁nombreuses', 'start': 195, 'end': 205} # {'entity': 'ANS', 'score': 0.4091200828552246, 'index': 34, 'word': '▁données', 'start': 206, 'end': 213} # {'entity': 'ANS', 'score': 0.41234123706817627, 'index': 35, 'word': '▁structurelle', 'start': 214, 'end': 226} # {'entity': 'ANS', 'score': 0.40243157744407654, 'index': 36, 'word': 's', 'start': 226, 'end': 227} # {'entity': 'ANS', 'score': 0.4007353186607361, 'index': 37, 'word': '▁et', 'start': 228, 'end': 230} # {'entity': 'ANS', 'score': 0.40597623586654663, 'index': 38, 'word': '▁non', 'start': 231, 'end': 234} # {'entity': 'ANS', 'score': 0.40272021293640137, 'index': 39, 'word': '▁structurée', 'start': 235, 'end': 245} # {'entity': 'O', 'score': 0.392631471157074, 'index': 40, 'word': 's', 'start': 245, 'end': 246} # {'entity': 'O', 'score': 0.34266412258148193, 'index': 41, 'word': '.', 'start': 246, 'end': 247} # {'entity': 'O', 'score': 0.26178646087646484, 'index': 42, 'word': '▁Elle', 'start': 255, 'end': 259} # {'entity': 'O', 'score': 0.2265639454126358, 'index': 43, 'word': '▁est', 'start': 260, 'end': 263} # {'entity': 'O', 'score': 0.22844195365905762, 'index': 44, 'word': '▁souvent', 'start': 264, 'end': 271} # {'entity': 'O', 'score': 0.2475772500038147, 'index': 45, 'word': '▁associée', 'start': 272, 'end': 280} # {'entity': 'O', 'score': 0.3002186715602875, 'index': 46, 'word': '▁aux', 'start': 281, 'end': 284} # {'entity': 'O', 'score': 0.3875720798969269, 'index': 47, 'word': '▁données', 'start': 285, 'end': 292} # {'entity': 'ANS', 'score': 0.445063054561615, 'index': 48, 'word': '▁massive', 'start': 293, 'end': 300} # {'entity': 'ANS', 'score': 0.4419114589691162, 'index': 49, 'word': 's', 'start': 300, 'end': 301} # {'entity': 'ANS', 'score': 0.4240635633468628, 'index': 50, 'word': '▁et', 'start': 302, 'end': 304} # {'entity': 'O', 'score': 0.3900952935218811, 'index': 51, 'word': '▁à', 'start': 305, 'end': 306} # {'entity': 'O', 'score': 0.3784807324409485, 'index': 52, 'word': '▁l', 'start': 307, 'end': 308} # {'entity': 'O', 'score': 0.3459452986717224, 'index': 53, 'word': "'", 'start': 308, 'end': 309} # {'entity': 'O', 'score': 0.37636008858680725, 'index': 54, 'word': 'analyse', 'start': 309, 'end': 316} # {'entity': 'ANS', 'score': 0.4475618302822113, 'index': 55, 'word': '▁des', 'start': 317, 'end': 320} # {'entity': 'ANS', 'score': 0.43845775723457336, 'index': 56, 'word': '▁données', 'start': 321, 'end': 328} # {'entity': 'O', 'score': 0.3761221170425415, 'index': 57, 'word': '.', 'start': 328, 'end': 329} ```
1236ce080bb79911b55dba35969c749b
henryscheible/rte_roberta-base_144_v2
henryscheible
null
14
0
null
0
null
true
false
false
mit
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,003
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rte_roberta-base_144_v2 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.6194 - Accuracy: 0.7256 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
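### Usage sketch

RTE is a premise/hypothesis pair task, so both sentences are encoded together; the example pair is illustrative and the meaning of each output column follows the checkpoint's `id2label` mapping:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "henryscheible/rte_roberta-base_144_v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(
    "A man is playing a guitar on stage.",  # premise
    "Someone is performing music.",         # hypothesis
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs, model.config.id2label)
```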
7dd27ea81817dc40d9b4399a298abd77
mikeadimech/bart-large-cnn-qmsum-meeting-summarization
mikeadimech
bart
11
4
transformers
0
text2text-generation
true
false
false
mit
null
['yawnick/QMSum']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,185
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-qmsum-meeting-summarization This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the QMSum meeting summarization dataset. It achieves the following results on the evaluation set: - Loss: 5.7578 - Rouge1: 37.9431 - Rouge2: 10.6366 - Rougel: 25.5782 - Rougelsum: 33.0209 - Gen Len: 72.7714 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 500 - label_smoothing_factor: 0.1 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
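### Usage sketch

The model plugs into the generic summarization pipeline; the toy transcript and length limits below are assumptions for illustration only:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="mikeadimech/bart-large-cnn-qmsum-meeting-summarization",
)

transcript = (
    "Project manager: Let's go over the budget. "
    "Marketing: We are about ten percent over on the campaign. "
    "Project manager: Then we cut the print ads and keep the online spend."
)
print(summarizer(transcript, max_length=100, min_length=20, do_sample=False))
```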
30f3e28761b27e4a7625eb7ab5581187
mio/amadeus
mio
null
28
4,617
espnet
42
text-to-speech
false
false
false
cc-by-4.0
['jp']
['amadeus']
null
1
0
0
1
1
1
0
['espnet', 'audio', 'text-to-speech']
false
true
true
10,469
false
## ESPnet2 TTS model ### `mio/amadeus` This model was trained by mio using [amadeus recipe](https://github.com/mio2333/espnet/tree/master/egs2/amadeus/tts1) in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout d5b5ec7b2e77bd3e10707141818b7e6c57ac6b3f pip install -e . cd egs2/amadeus/tts1 ./run.sh --skip_data_prep false --skip_train true --download_model mio/amadeus ``` ## TTS config <details><summary>expand</summary> ``` config: conf/tuning/finetune_vits.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/tts_amadeus_vits_finetune_from_jsut_32_sentence ngpu: 1 seed: 777 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: true sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: false collect_stats: false write_collected_feats: false max_epoch: 2000 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - train - total_count - max keep_nbest_models: 3 nbest_averaging_interval: 0 grad_clip: -1 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: 50 use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: true wandb_project: amadeus wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: - downloads/f3698edf589206588f58f5ec837fa516/exp/tts_train_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause/train.total_count.ave_10best.pth:tts:tts ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 5000000 valid_batch_bins: null train_shape_file: - exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/train/text_shape.phn - exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/train/speech_shape valid_shape_file: - exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/valid/text_shape.phn - exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 204800 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/22k/raw/train/text - text - text - - dump/22k/raw/train/wav.scp - speech - sound valid_data_path_and_name_and_type: - - dump/22k/raw/dev/text - text - text - - dump/22k/raw/dev/wav.scp - speech - sound allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adamw optim_conf: lr: 0.0001 betas: - 0.8 - 0.99 eps: 1.0e-09 weight_decay: 0.0 scheduler: exponentiallr scheduler_conf: gamma: 0.999875 optim2: adamw optim2_conf: lr: 0.0001 betas: - 0.8 - 0.99 eps: 1.0e-09 weight_decay: 0.0 scheduler2: exponentiallr scheduler2_conf: gamma: 0.999875 generator_first: false token_list: - <blank> - <unk> - '1' - '2' - '0' - '3' - '4' - '-1' - '5' - a - o - '-2' - i - '-3' - u - e - k - n - t - '6' - 
r - '-4' - s - N - m - pau - '7' - sh - d - g - w - '8' - U - '-5' - I - cl - h - y - b - '9' - j - ts - ch - '-6' - z - p - '-7' - f - ky - ry - '-8' - gy - '-9' - hy - ny - '-10' - by - my - '-11' - '-12' - '-13' - py - '-14' - '-15' - v - '10' - '-16' - '-17' - '11' - '-21' - '-20' - '12' - '-19' - '13' - '-18' - '14' - dy - '15' - ty - '-22' - '16' - '18' - '19' - '17' - <sos/eos> odim: null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: jaconv g2p: pyopenjtalk_accent_with_pause feats_extract: linear_spectrogram feats_extract_conf: n_fft: 1024 hop_length: 256 win_length: null normalize: null normalize_conf: {} tts: vits tts_conf: generator_type: vits_generator generator_params: hidden_channels: 192 spks: -1 global_channels: -1 segment_size: 32 text_encoder_attention_heads: 2 text_encoder_ffn_expand: 4 text_encoder_blocks: 6 text_encoder_positionwise_layer_type: conv1d text_encoder_positionwise_conv_kernel_size: 3 text_encoder_positional_encoding_layer_type: rel_pos text_encoder_self_attention_layer_type: rel_selfattn text_encoder_activation_type: swish text_encoder_normalize_before: true text_encoder_dropout_rate: 0.1 text_encoder_positional_dropout_rate: 0.0 text_encoder_attention_dropout_rate: 0.1 use_macaron_style_in_text_encoder: true use_conformer_conv_in_text_encoder: false text_encoder_conformer_kernel_size: -1 decoder_kernel_size: 7 decoder_channels: 512 decoder_upsample_scales: - 8 - 8 - 2 - 2 decoder_upsample_kernel_sizes: - 16 - 16 - 4 - 4 decoder_resblock_kernel_sizes: - 3 - 7 - 11 decoder_resblock_dilations: - - 1 - 3 - 5 - - 1 - 3 - 5 - - 1 - 3 - 5 use_weight_norm_in_decoder: true posterior_encoder_kernel_size: 5 posterior_encoder_layers: 16 posterior_encoder_stacks: 1 posterior_encoder_base_dilation: 1 posterior_encoder_dropout_rate: 0.0 use_weight_norm_in_posterior_encoder: true flow_flows: 4 flow_kernel_size: 5 flow_base_dilation: 1 flow_layers: 4 flow_dropout_rate: 0.0 use_weight_norm_in_flow: true use_only_mean_in_flow: true stochastic_duration_predictor_kernel_size: 3 stochastic_duration_predictor_dropout_rate: 0.5 stochastic_duration_predictor_flows: 4 stochastic_duration_predictor_dds_conv_layers: 3 vocabs: 85 aux_channels: 513 discriminator_type: hifigan_multi_scale_multi_period_discriminator discriminator_params: scales: 1 scale_downsample_pooling: AvgPool1d scale_downsample_pooling_params: kernel_size: 4 stride: 2 padding: 2 scale_discriminator_params: in_channels: 1 out_channels: 1 kernel_sizes: - 15 - 41 - 5 - 3 channels: 128 max_downsample_channels: 1024 max_groups: 16 bias: true downsample_scales: - 2 - 2 - 4 - 4 - 1 nonlinear_activation: LeakyReLU nonlinear_activation_params: negative_slope: 0.1 use_weight_norm: true use_spectral_norm: false follow_official_norm: false periods: - 2 - 3 - 5 - 7 - 11 period_discriminator_params: in_channels: 1 out_channels: 1 kernel_sizes: - 5 - 3 channels: 32 downsample_scales: - 3 - 3 - 3 - 3 - 1 max_downsample_channels: 1024 bias: true nonlinear_activation: LeakyReLU nonlinear_activation_params: negative_slope: 0.1 use_weight_norm: true use_spectral_norm: false generator_adv_loss_params: average_by_discriminators: false loss_type: mse discriminator_adv_loss_params: average_by_discriminators: false loss_type: mse feat_match_loss_params: average_by_discriminators: false average_by_layers: false include_final_outputs: true mel_loss_params: fs: 22050 n_fft: 1024 hop_length: 256 win_length: null window: hann n_mels: 80 fmin: 0 fmax: null log_base: null lambda_adv: 1.0 
lambda_mel: 45.0 lambda_feat_match: 2.0 lambda_dur: 1.0 lambda_kl: 1.0 sampling_rate: 22050 cache_generator_outputs: true pitch_extract: null pitch_extract_conf: {} pitch_normalize: null pitch_normalize_conf: {} energy_extract: null energy_extract_conf: {} energy_normalize: null energy_normalize_conf: {} required: - output_dir - token_list version: '202207' distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
4e7bdba9505cbc5611eb5b358a72dda7
MoutainJump/distilbert-base-uncased-finetuned-emotion
MoutainJump
distilbert
12
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,343
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2174 - Accuracy: 0.923 - F1: 0.9231 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8279 | 1.0 | 250 | 0.3099 | 0.9075 | 0.9048 | | 0.2464 | 2.0 | 500 | 0.2174 | 0.923 | 0.9231 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.13.0+cu116 - Datasets 1.16.1 - Tokenizers 0.10.3
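### Usage sketch

A minimal check with the text-classification pipeline; the emotion label names are whatever the checkpoint's `id2label` defines, and `top_k=None` (recent Transformers versions) returns a score per label:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MoutainJump/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return a score for every emotion label
)

print(classifier("I can't believe how lucky we got today!"))
```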
14d3d3d969fd4678f40e2291df7119ba
ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli
ynie
roberta
9
18,280
transformers
5
text-classification
true
false
true
mit
null
['snli', 'anli', 'multi_nli', 'multi_nli_mismatch', 'fever']
null
0
0
0
0
0
0
0
[]
false
true
true
3,320
false
This is a strong pre-trained RoBERTa-Large NLI model. The training data is a combination of well-known NLI datasets: [`SNLI`](https://nlp.stanford.edu/projects/snli/), [`MNLI`](https://cims.nyu.edu/~sbowman/multinli/), [`FEVER-NLI`](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [`ANLI (R1, R2, R3)`](https://github.com/facebookresearch/anli). Other pre-trained NLI models including `RoBERTa`, `ALBert`, `BART`, `ELECTRA`, `XLNet` are also available. Trained by [Yixin Nie](https://easonnie.github.io), [original source](https://github.com/facebookresearch/anli). Try the code snippet below. ``` from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch if __name__ == '__main__': max_length = 256 premise = "Two women are embracing while holding to go packages." hypothesis = "The men are fighting outside a deli." hg_model_hub_name = "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli" # hg_model_hub_name = "ynie/albert-xxlarge-v2-snli_mnli_fever_anli_R1_R2_R3-nli" # hg_model_hub_name = "ynie/bart-large-snli_mnli_fever_anli_R1_R2_R3-nli" # hg_model_hub_name = "ynie/electra-large-discriminator-snli_mnli_fever_anli_R1_R2_R3-nli" # hg_model_hub_name = "ynie/xlnet-large-cased-snli_mnli_fever_anli_R1_R2_R3-nli" tokenizer = AutoTokenizer.from_pretrained(hg_model_hub_name) model = AutoModelForSequenceClassification.from_pretrained(hg_model_hub_name) tokenized_input_seq_pair = tokenizer.encode_plus(premise, hypothesis, max_length=max_length, return_token_type_ids=True, truncation=True) input_ids = torch.Tensor(tokenized_input_seq_pair['input_ids']).long().unsqueeze(0) # remember bart doesn't have 'token_type_ids', remove the line below if you are using bart. token_type_ids = torch.Tensor(tokenized_input_seq_pair['token_type_ids']).long().unsqueeze(0) attention_mask = torch.Tensor(tokenized_input_seq_pair['attention_mask']).long().unsqueeze(0) outputs = model(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, labels=None) # Note: # "id2label": { # "0": "entailment", # "1": "neutral", # "2": "contradiction" # }, predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist() # batch_size only one print("Premise:", premise) print("Hypothesis:", hypothesis) print("Entailment:", predicted_probability[0]) print("Neutral:", predicted_probability[1]) print("Contradiction:", predicted_probability[2]) ``` More in [here](https://github.com/facebookresearch/anli/blob/master/src/hg_api/interactive_eval.py). Citation: ``` @inproceedings{nie-etal-2020-adversarial, title = "Adversarial {NLI}: A New Benchmark for Natural Language Understanding", author = "Nie, Yixin and Williams, Adina and Dinan, Emily and Bansal, Mohit and Weston, Jason and Kiela, Douwe", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", year = "2020", publisher = "Association for Computational Linguistics", } ```
3cce21db6b1586f35edc889b92705458
wyu1/GenRead-3B-TQA-MergeDPR
wyu1
t5
5
0
transformers
0
null
true
false
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
720
false
# GenRead (MergeDPR): FiD model trained on TQA -- This is the model checkpoint of GenRead [2], based on T5-3B and trained on TriviaQA [1]. -- Hyperparameters: 8 x 80GB A100 GPUs; batch size 16; AdamW; LR 5e-5; best dev at 9000 steps References: [1] TriviaQA: A Large Scale Dataset for Reading Comprehension and Question Answering. ACL 2017 [2] Generate rather than Retrieve: Large Language Models are Strong Context Generators. arXiv 2022 ## Model performance We evaluate it on the TriviaQA dataset; the EM score is 74.41.
074d329357123f4b32257b7c78b89cc5
ultra-coder54732/4-way-detection-prop-16-xlnet
ultra-coder54732
bert
12
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
962
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 4-way-detection-prop-16-xlnet This model is a fine-tuned version of [ultra-coder54732/4-way-detection-prop-16-bert](https://huggingface.co/ultra-coder54732/4-way-detection-prop-16-bert) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
9257b7119e9f3aac6a927fc0914055ee
juliusco/distilbert-base-uncased-finetuned-squad
juliusco
distilbert
10
5
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,334
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.3672 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.1755 | 1.0 | 11066 | 1.1177 | | 0.9004 | 2.0 | 22132 | 1.1589 | | 0.6592 | 3.0 | 33198 | 1.2326 | | 0.4823 | 4.0 | 44264 | 1.3672 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
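### Usage sketch

Extractive QA checkpoints like this one work with the question-answering pipeline out of the box; the question/context pair below is purely illustrative:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="juliusco/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="What was the model fine-tuned on?",
    context="This DistilBERT checkpoint was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result)  # dict with 'score', 'start', 'end' and 'answer'
```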
c5bfa9c7a957a326cca2c66686efa0f4
gbarone77/polibert_sa
gbarone77
bert
9
4
transformers
0
text-classification
true
true
true
mit
['it']
null
null
0
0
0
0
0
0
0
['sentiment', 'Italian']
false
true
true
1,293
false
# 🤗 + polibert_SA - POLItic BERT based Sentiment Analysis ## Model description This model performs sentiment analysis on Italian political Twitter sentences. It was trained starting from an instance of "bert-base-italian-uncased-xxl" and fine-tuned on an Italian dataset of tweets. You can try it out at https://www.unideeplearning.com/twitter_sa/ (in Italian!) #### Hands-on ```python import torch from torch import nn from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("unideeplearning/polibert_sa") model = AutoModelForSequenceClassification.from_pretrained("unideeplearning/polibert_sa") text = "Giuseppe Rossi è un pessimo politico" input_ids = tokenizer.encode(text, add_special_tokens=True, return_tensors='pt') logits = model(input_ids).logits logits = logits.squeeze(0) prob = nn.functional.softmax(logits, dim=0) # 0 Negative, 1 Neutral, 2 Positive print(prob.argmax().tolist()) ``` #### Hyperparameters - Optimizer: **AdamW** with learning rate of **2e-5**, epsilon of **1e-8** - Max epochs: **2** - Batch size: **16** ## Acknowledgments Thanks for the support from: [Hugging Face](https://huggingface.co/), https://www.unioneprofessionisti.com, https://www.unideeplearning.com/
681d8055533df3a18a988174bb0e121e
KoichiYasuoka/roberta-base-english-ud-goeswith
KoichiYasuoka
roberta
11
15
transformers
0
token-classification
true
false
false
cc-by-sa-4.0
['en']
['universal_dependencies']
null
0
0
0
0
0
0
0
['english', 'token-classification', 'pos', 'dependency-parsing']
false
true
true
2,809
false
# roberta-base-english-ud-goeswith ## Model Description This is a RoBERTa model for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [roberta-base](https://huggingface.co/roberta-base). ## How to Use ```py class UDgoeswith(object): def __init__(self,bert): from transformers import AutoTokenizer,AutoModelForTokenClassification self.tokenizer=AutoTokenizer.from_pretrained(bert) self.model=AutoModelForTokenClassification.from_pretrained(bert) def __call__(self,text): import numpy,torch,ufal.chu_liu_edmonds w=self.tokenizer(text,return_offsets_mapping=True) v=[self.tokenizer.cls_token_id]+[t for t,(s,e) in zip(w["input_ids"],w["offset_mapping"]) if s<e]+[self.tokenizer.sep_token_id] x=[v[0:i]+[self.tokenizer.mask_token_id]+v[i+1:]+[j] for i,j in enumerate(v[1:-1],1)] with torch.no_grad(): e=self.model(input_ids=torch.tensor(x)).logits.numpy()[:,1:-2,:] r=[1 if i==0 else -1 if j.endswith("|root") else 0 for i,j in sorted(self.model.config.id2label.items())] e+=numpy.where(numpy.add.outer(numpy.identity(e.shape[0]),r)==0,0,numpy.nan) g=self.model.config.label2id["X|_|goeswith"] r=numpy.tri(e.shape[0]) for i in range(e.shape[0]): for j in range(i+2,e.shape[1]): r[i,j]=r[i,j-1] if numpy.nanargmax(e[i,j-1])==g else 1 e[:,:,g]+=numpy.where(r==0,0,numpy.nan) m=numpy.full((e.shape[0]+1,e.shape[1]+1),numpy.nan) m[1:,1:]=numpy.nanmax(e,axis=2).transpose() p=numpy.zeros(m.shape) p[1:,1:]=numpy.nanargmax(e,axis=2).transpose() for i in range(1,m.shape[0]): m[i,0],m[i,i],p[i,0]=m[i,i],numpy.nan,p[i,i] h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0] if [0 for i in h if i==0]!=[0]: m[:,0]+=numpy.where(m[:,0]==numpy.nanmax(m[[i for i,j in enumerate(h) if j==0],0]),0,numpy.nan) m[[i for i,j in enumerate(h) if j==0]]+=[0 if i==0 or j==0 else numpy.nan for i,j in enumerate(h)] h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0] u="# text = "+text+"\n" v=[(s,e) for s,e in w["offset_mapping"] if s<e] for i,(s,e) in enumerate(v,1): q=self.model.config.id2label[p[i,h[i]]].split("|") u+="\t".join([str(i),text[s:e],"_",q[0],"_","|".join(q[1:-1]),str(h[i]),q[-1],"_","_" if i<len(v) and e<v[i][0] else "SpaceAfter=No"])+"\n" return u+"\n" nlp=UDgoeswith("KoichiYasuoka/roberta-base-english-ud-goeswith") print(nlp("I saw a horse yesterday which had no name")) ``` with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/). Or without ufal.chu-liu-edmonds: ``` from transformers import pipeline nlp=pipeline("universal-dependencies","KoichiYasuoka/roberta-base-english-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple") print(nlp("I saw a horse yesterday which had no name")) ```
14dbc1d9272ba768858377b4b0dc9820
jonatasgrosman/exp_w2v2t_en_vp-fr_s51
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['en']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'en']
false
true
true
474
false
# exp_w2v2t_en_vp-fr_s51 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
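A quick transcription sketch with the generic ASR pipeline; the audio path is a placeholder and, as noted above, the input must be sampled at 16 kHz:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2t_en_vp-fr_s51",
)

# Placeholder path: any 16 kHz mono English recording.
print(asr("path/to/english_sample_16khz.wav"))
```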
2a7f3616b4f8a54c8d5451bb7757508b
CompVis/stable-diffusion-v-1-2-original
CompVis
null
5
0
null
7
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
2
0
2
0
0
0
0
['stable-diffusion', 'text-to-image']
false
true
true
10,582
false
# Stable Diffusion v1 Model Card Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. The **Stable-Diffusion-v-1-2** checkpoint was initialized with the weights of the [Stable-Diffusion-v-1-1](https://huggingface.co/CompVis/stable-diffusion-v-1-1-original) checkpoint and subsequently fine-tuned for 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`). For more information, please refer to [Training](#training). #### Download the weights - [sd-v1-2.ckpt](https://huggingface.co/CompVis/stable-diffusion-v-1-2-original/resolve/main/sd-v1-2.ckpt) - [sd-v1-2-full-ema.ckpt](https://huggingface.co/CompVis/stable-diffusion-v-1-2-original/resolve/main/sd-v1-2-full-ema.ckpt) These weights are intended to be used with the original [CompVis Stable Diffusion codebase](https://github.com/CompVis/stable-diffusion). If you are looking for the model to use with the D🧨iffusers library, [come here](https://huggingface.co/CompVis/stable-diffusion-v1-2). ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487). - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-2B (en) and subsets thereof (see next section) **Training Procedure** Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. 
The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through a ViT-L/14 text-encoder. - The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We currently provide three checkpoints, `sd-v1-1.ckpt`, `sd-v1-2.ckpt` and `sd-v1-3.ckpt`, which were trained as follows, - `sd-v1-1.ckpt`: 237k steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). - `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`. 515k steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)). - `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-improved-aesthetics" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 2 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling steps show the relative improvements of the checkpoints: ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-variants-scores.jpg) Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. - **Hardware Type:** A100 PCIe 40GB - **Hours used:** 150000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq. ## Citation ```bibtex @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ``` *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
05bdf41504b6bfe7e726b4c30890e6c0
snehatyagi/wav2vec2_test
snehatyagi
wav2vec2
33
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,772
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2_test This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 91.1661 - Wer: 0.5714 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 11.9459 | 100.0 | 100 | 46.9901 | 1.0 | | 3.2175 | 200.0 | 200 | 73.0950 | 1.0 | | 1.8117 | 300.0 | 300 | 78.4884 | 0.6735 | | 1.3694 | 400.0 | 400 | 84.0168 | 0.6327 | | 1.1392 | 500.0 | 500 | 85.2083 | 0.5918 | | 0.979 | 600.0 | 600 | 88.9109 | 0.5918 | | 0.8917 | 700.0 | 700 | 89.0310 | 0.5918 | | 0.8265 | 800.0 | 800 | 90.0659 | 0.6122 | | 0.769 | 900.0 | 900 | 91.8476 | 0.5714 | | 0.7389 | 1000.0 | 1000 | 91.1661 | 0.5714 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.6
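As an illustrative usage sketch (not part of the original card), transcription with this fine-tuned checkpoint might look like the following; the audio file path is a placeholder, and it is assumed that the repository ships a matching processor/vocabulary.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "snehatyagi/wav2vec2_test"
processor = Wav2Vec2Processor.from_pretrained(model_id)  # assumes a processor is included in the repo
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder; wav2vec2-base expects 16 kHz mono audio.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```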
3e5e105081f5df91be3e3f7a0eb6f055
cochonaki/distilbert-base-uncased-finetuned-cola
cochonaki
distilbert
10
1
transformers
0
text-classification
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,601
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # cochonaki/distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1905 - Validation Loss: 0.5536 - Train Matthews Correlation: 0.5126 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Matthews Correlation | Epoch | |:----------:|:---------------:|:--------------------------:|:-----:| | 0.5118 | 0.4642 | 0.4617 | 0 | | 0.3259 | 0.4709 | 0.4990 | 1 | | 0.1905 | 0.5536 | 0.5126 | 2 | ### Framework versions - Transformers 4.21.1 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
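As an illustrative sketch (not part of the original card), scoring a sentence for linguistic acceptability with this Keras/TensorFlow checkpoint might look like this; the label order noted in the comment is the usual CoLA convention and is an assumption here.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "cochonaki/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The book was written by the student.", return_tensors="tf")
logits = model(**inputs).logits
probs = tf.nn.softmax(logits, axis=-1).numpy()
# Assumed label order (standard for CoLA fine-tunes): index 0 = unacceptable, index 1 = acceptable.
print(probs)
```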
014876b13ed505376ccc0a1b70a2a7c0
north/t5_xl_NCC
north
t5
92
6
transformers
1
text2text-generation
true
false
true
apache-2.0
[False, 'nn', 'sv', 'dk', 'is', 'en']
['nbailab/NCC', 'mc4', 'wikipedia']
null
0
0
0
0
0
0
0
[]
false
true
true
8,352
false
The North-T5 models are a set of Norwegian and Scandinavian sequence-to-sequence models. They build upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation.

| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5&#8209;NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|✔|[🤗](https://huggingface.co/north/t5_xxl_NCC)||
|North-T5&#8209;NCC&#8209;lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)||

## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/xl/norwegian_NCC_plus_English_t5x_xl/).

## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly recommend that external researchers make their own evaluation. The main advantage of the T5 models is their flexibility. Traditionally, encoder-only models (like BERT) excel in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).

|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|

These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT models are based on the test results of the best model after 10 runs with early stopping and a decaying learning rate. The T5 results are the average of five runs on the evaluation set. The small model was trained for 10.000 steps, while the rest were trained for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Neither was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 models might actually be a bit sub-optimal.

## Sub-versions of North-T5
The following sub-versions are available. More versions will be available shortly.

|**Model** | **Description** |
|:-----------|:-------|
|**North&#8209;T5&#8209;NCC** |This is the main version. It is trained for an additional 500.000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.|
|**North&#8209;T5&#8209;NCC&#8209;lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. When doing, for instance, translation and NLI, it is well documented that there is a clear benefit in doing a step of unsupervised LM training before starting the finetuning.|

## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base models in a Google Colab.

Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.

* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base)

## Training details
All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is however usually not very compute intensive, and in most cases it can be performed even with free online training resources.

All the main model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks.

While the huge models almost always will give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base model. The base models can easily be finetuned on a standard graphics card or a free TPU through Google Colab.

All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning.

## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow format (see the illustrative loading sketch at the end of this card).

## Future
I will continue to train and release additional models in this set. Which models are added will depend on feedback from the users.

## Thanks
This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running.

Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition he has been a discussion partner in the creation of these models.

Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.

## Warranty
Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases.

## Contact/About
These models were trained by Per E Kummervold. Please contact me on per@capia.no.
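## Example loading (illustrative)
The snippet below is an illustrative sketch, not part of the original card: it assumes the Transformers conversion of this checkpoint and shows how a seq-2-seq T5 model is typically loaded. Remember that the released checkpoint is only trained on the unsupervised masking task, so the generated output is not meaningful until the model has been finetuned on a concrete task; the task prefix in the example is an assumption.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "north/t5_xl_NCC"  # this repository; swap in a finetuned checkpoint for real use
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative prompt only; the prefix is not something the released checkpoint was trained on.
inputs = tokenizer("oversett til nynorsk: Dette er en test.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```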
89763cd3711ecb3ed2e7f787058382eb
Tomor0720/deberta-large-finetuned-qqp
Tomor0720
deberta
13
2
transformers
0
text-classification
true
false
false
mit
null
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,331
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-large-finetuned-qqp This model is a fine-tuned version of [microsoft/deberta-large](https://huggingface.co/microsoft/deberta-large) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.2635 - Accuracy: 0.8986 - F1: 0.8648 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 0.4058 | 1.0 | 22741 | 0.3923 | 0.8496 | 0.8108 | | 0.2347 | 2.0 | 45482 | 0.2635 | 0.8986 | 0.8648 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
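As an illustrative sketch (not part of the original card), paraphrase scoring of a question pair with this checkpoint might look like this; the label order noted in the comment follows the usual GLUE/QQP convention and is an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Tomor0720/deberta-large-finetuned-qqp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question_1 = "How do I learn Python quickly?"
question_2 = "What is the fastest way to learn Python?"
inputs = tokenizer(question_1, question_2, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
# Assumed label order (standard for QQP fine-tunes): index 0 = not duplicate, index 1 = duplicate.
print(probs)
```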
fa915ce1c3e5e12b97125c6a802e3182
lincoln/mbart-mlsum-automatic-summarization
lincoln
mbart
10
2,242
transformers
3
summarization
true
true
false
mit
['fr']
['MLSUM']
null
0
0
0
0
0
0
0
['summarization', 'mbart', 'bart']
false
true
true
3,349
false
# Automatic summarization of press articles

This model is based on [`facebook/mbart-large-50`](https://huggingface.co/facebook/mbart-large-50) and was fine-tuned on press articles from the MLSUM database. The assumption was made that the article leads make good reference summaries.

## Training

We tested two model architectures (T5 and BART) with input texts of 512 or 1024 tokens. In the end, the BART model with 512 tokens was selected. It was trained for 2 epochs (~700K articles) on a Tesla V100 (32 hours of training).

## Results

![Novelty score](assets/novelty.png)

We compared our model (`mbart-large-512-full` on the chart) to two references:

* MBERT, which corresponds to the performance of the model trained by the team behind the MLSUM article database
* Barthez, another model based on press articles from the OrangeSum database

The novelty score (see the MLSUM paper) of our model is not yet comparable to these two references, and even less so to human writing; nevertheless, the generated summaries are on the whole of good quality.

## Usage

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers import SummarizationPipeline

model_name = 'lincoln/mbart-mlsum-automatic-summarization'
loaded_tokenizer = AutoTokenizer.from_pretrained(model_name)
loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

nlp = SummarizationPipeline(model=loaded_model, tokenizer=loaded_tokenizer)
nlp("""
« La veille de l’ouverture, je vais faire venir un coach pour les salariés qui reprendront le travail. Cela va me coûter 300 euros, mais après des mois d’oisiveté obligatoire, la reprise n’est pas simple. Certains sont au chômage partiel depuis mars 2020 », raconte Alain Fontaine, propriétaire du restaurant Le Mesturet, dans le quartier de la Bourse, à Paris. Cette date d’ouverture, désormais, il la connaît. Emmanuel Macron a, en effet, donné le feu vert pour un premier accueil des clients en terrasse, mercredi 19 mai. M. Fontaine imagine même faire venir un orchestre ce jour-là pour fêter l’événement. Il lui reste toutefois à construire sa terrasse. Il pensait que les ouvriers passeraient samedi 1er mai pour l’installer, mais, finalement, le rendez-vous a été décalé. Pour l’instant, le tas de bois est entreposé dans la salle de restaurant qui n’a plus accueilli de convives depuis le 29 octobre 2020, quand le couperet de la fermeture administrative est tombé.M. Fontaine, président de l’Association française des maîtres restaurateurs, ne manquera pas de concurrents prêts à profiter de ce premier temps de réouverture des bars et restaurants. Même si le couvre-feu limite le service à 21 heures. D’autant que la Mairie de Paris vient d’annoncer le renouvellement des terrasses éphémères installées en 2020 et leur gratuité jusqu’à la fin de l’été.
""")
```

## Citation

```bibtex
@article{scialom2020mlsum,
  title={MLSUM: The Multilingual Summarization Corpus},
  author={Thomas Scialom and Paul-Alexis Dray and Sylvain Lamprier and Benjamin Piwowarski and Jacopo Staiano},
  year={2020},
  eprint={2004.14900},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
16872b526299e5aafeea5c3b0868c575
MultiBertGunjanPatrick/multiberts-seed-1-1100k
MultiBertGunjanPatrick
bert
7
3
transformers
0
null
true
false
false
apache-2.0
['en']
['bookcorpus', 'wikipedia']
null
0
0
0
0
0
0
0
['exbert', 'multiberts', 'multiberts-seed-1']
false
true
true
6,487
false
# MultiBERTs Seed 1 Checkpoint 1100k (uncased)

Seed 1 intermediate checkpoint 1100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-1-1100k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-1-1100k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
4020731d012619dcaf4f48dfc7a395c0
XperienciaVirtual/sd-1-5-db-ai-creative-hub-hdbglv
XperienciaVirtual
null
38
3
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['text-to-image']
false
true
true
2,649
false
### sd-1-5-db-ai-creative-hub-hdbglv Dreambooth model trained by jaimexv with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: hdbglv (use that on your prompt) ![hdbglv 0](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%281%29.jpg)![hdbglv 1](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%282%29.jpg)![hdbglv 2](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%283%29.jpg)![hdbglv 3](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%284%29.jpg)![hdbglv 4](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%285%29.jpg)![hdbglv 5](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%286%29.jpg)![hdbglv 6](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%287%29.jpg)![hdbglv 7](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%288%29.jpg)![hdbglv 8](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%289%29.jpg)![hdbglv 9](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2810%29.jpg)![hdbglv 10](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2811%29.jpg)![hdbglv 11](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2812%29.jpg)![hdbglv 12](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2813%29.jpg)![hdbglv 13](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2814%29.jpg)![hdbglv 14](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2815%29.jpg)![hdbglv 15](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2816%29.jpg)![hdbglv 16](https://huggingface.co/jaimexv/sd-1-5-db-ai-creative-hub-hdbglv/resolve/main/concept_images/hdbglv_%2817%29.jpg)
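As an illustrative sketch (not part of the original card), running this concept locally with `diffusers` might look like the following; it assumes the repository is in diffusers format, and the prompt, step count, guidance scale and CUDA device are assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

model_id = "XperienciaVirtual/sd-1-5-db-ai-creative-hub-hdbglv"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The concept token "hdbglv" must appear in the prompt, as noted above.
image = pipe("a photo of hdbglv, studio lighting", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("hdbglv_sample.png")
```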
09fb64595dd63a40aec6f5ff4f662ce8
facebook/dino-vitb16
facebook
vit
5
5,237
transformers
3
feature-extraction
true
false
false
apache-2.0
null
['imagenet-1k']
null
0
0
0
0
0
0
0
['dino', 'vision']
false
true
true
3,204
false
# Vision Transformer (base-sized model, patch size 16) trained using DINO Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper [Emerging Properties in Self-Supervised Vision Transformers](https://arxiv.org/abs/2104.14294) by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in [this repository](https://github.com/facebookresearch/dino). Disclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not include any fine-tuned heads. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import ViTFeatureExtractor, ViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('facebook/dino-vitb16') model = ViTModel.from_pretrained('facebook/dino-vitb16') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2104-14294, author = {Mathilde Caron and Hugo Touvron and Ishan Misra and Herv{\'{e}} J{\'{e}}gou and Julien Mairal and Piotr Bojanowski and Armand Joulin}, title = {Emerging Properties in Self-Supervised Vision Transformers}, journal = {CoRR}, volume = {abs/2104.14294}, year = {2021}, url = {https://arxiv.org/abs/2104.14294}, archivePrefix = {arXiv}, eprint = {2104.14294}, timestamp = {Tue, 04 May 2021 15:12:43 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2104-14294.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
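To make the classification recipe above concrete, here is an illustrative sketch (not part of the original card) of placing a linear layer on top of the [CLS] token; the number of classes is arbitrary and the training loop is omitted, so treat it as an assumption-laden outline rather than a complete fine-tuning script.

```python
import torch
from transformers import ViTFeatureExtractor, ViTModel
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained('facebook/dino-vitb16')
backbone = ViTModel.from_pretrained('facebook/dino-vitb16')
classifier = torch.nn.Linear(backbone.config.hidden_size, 10)  # 10 classes is an arbitrary example

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    cls_embedding = backbone(**inputs).last_hidden_state[:, 0]  # [CLS] token representation
logits = classifier(cls_embedding)
print(logits.shape)  # (1, 10)
```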
cbd7203e50eec93745721e05d31f2a9a