Dataset schema (from the Hub dataset viewer):

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-13 00:37:47 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 555 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-13 00:35:18 |
| card | string | length 11 – 1.01M |
sd-concepts-library/fish
|
sd-concepts-library
| 2022-09-18T06:57:04Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-18T06:56:57Z |
---
license: mit
---
### fish on Stable Diffusion
This is the `<fish>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
sd-concepts-library/dsmuses
|
sd-concepts-library
| 2022-09-18T06:37:28Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-18T06:37:17Z |
---
license: mit
---
### DSmuses on Stable Diffusion
This is the `<DSmuses>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:

|
roupenminassian/swin-tiny-patch4-window7-224-finetuned-eurosat
|
roupenminassian
| 2022-09-18T06:29:15Z | 221 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-09-18T05:56:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.587248322147651
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6712
- Accuracy: 0.5872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6811 | 1.0 | 21 | 0.6773 | 0.5604 |
| 0.667 | 2.0 | 42 | 0.6743 | 0.5805 |
| 0.6521 | 3.0 | 63 | 0.6712 | 0.5872 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
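Inference with the fine-tuned checkpoint can be sketched with the `transformers` image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="roupenminassian/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
# Any local path or URL to an image works here.
print(classifier("satellite_patch.png"))
```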
|
sd-concepts-library/threestooges
|
sd-concepts-library
| 2022-09-18T05:40:11Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-18T05:40:07Z |
---
license: mit
---
### threestooges on Stable Diffusion
This is the `<threestooges>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
gogin333/model
|
gogin333
| 2022-09-18T04:49:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-09-18T04:47:39Z |
a flying eye with a Mona Lisa
|
rosskrasner/testcatdog
|
rosskrasner
| 2022-09-18T03:56:03Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2022-09-14T03:29:28Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
tkuye/binary-skills-classifier
|
tkuye
| 2022-09-17T23:11:29Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-17T20:42:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: binary-skills-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binary-skills-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1373
- Accuracy: 0.9702
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.098 | 1.0 | 1557 | 0.0917 | 0.9663 |
| 0.0678 | 2.0 | 3114 | 0.0982 | 0.9712 |
| 0.0344 | 3.0 | 4671 | 0.1140 | 0.9712 |
| 0.0239 | 4.0 | 6228 | 0.1373 | 0.9702 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
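Since the card leaves usage undocumented, here is a minimal inference sketch with the `transformers` pipeline (the example sentence is illustrative; the label set is not documented on the card):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="tkuye/binary-skills-classifier")
print(clf("Experienced in Python, SQL, and cloud infrastructure."))
```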
|
reinoudbosch/pegasus-samsum
|
reinoudbosch
| 2022-09-17T23:03:24Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-17T22:26:31Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7052 | 0.54 | 500 | 1.4814 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.0
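A minimal inference sketch with the `transformers` summarization pipeline (SAMSum is a dialogue-summarization dataset, so the input is a short chat transcript; the example is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="reinoudbosch/pegasus-samsum")
dialogue = (
    "Anna: Are we still on for lunch?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue)[0]["summary_text"])
```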
|
sd-concepts-library/cgdonny1
|
sd-concepts-library
| 2022-09-17T22:24:07Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-17T22:24:00Z |
---
license: mit
---
### cgdonny1 on Stable Diffusion
This is the `<donny1>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
anechaev/Reinforce-U5Pixelcopter
|
anechaev
| 2022-09-17T22:11:25Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-17T22:11:15Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-U5Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 17.10 +/- 15.09
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
sd-concepts-library/r-crumb-style
|
sd-concepts-library
| 2022-09-17T21:15:16Z | 0 | 5 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-17T21:15:11Z |
---
license: mit
---
### R. Crumb style on Stable Diffusion
This is the `<rcrumb>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:







|
sd-concepts-library/3d-female-cyborgs
|
sd-concepts-library
| 2022-09-17T20:15:59Z | 0 | 39 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-17T20:15:45Z |
---
license: mit
---
### 3d Female Cyborgs on Stable Diffusion
This is the `<A female cyborg>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
tkuye/skills-classifier
|
tkuye
| 2022-09-17T19:16:20Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-17T17:56:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: skills-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# skills-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3051
- Accuracy: 0.9242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 312 | 0.2713 | 0.9058 |
| 0.361 | 2.0 | 624 | 0.2539 | 0.9182 |
| 0.361 | 3.0 | 936 | 0.2802 | 0.9238 |
| 0.1532 | 4.0 | 1248 | 0.3058 | 0.9202 |
| 0.0899 | 5.0 | 1560 | 0.3051 | 0.9242 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sd-concepts-library/wish-artist-stile
|
sd-concepts-library
| 2022-09-17T19:03:21Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-17T19:03:15Z |
---
license: mit
---
### Wish artist style on Stable Diffusion
This is the `<wish-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
Tritkoman/Kvenfinnishtranslator
|
Tritkoman
| 2022-09-17T18:38:22Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"translation",
"en",
"fi",
"dataset:Tritkoman/autotrain-data-wnkeknrr",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-09-17T18:36:53Z |
---
tags:
- autotrain
- translation
language:
- en
- fi
datasets:
- Tritkoman/autotrain-data-wnkeknrr
co2_eq_emissions:
emissions: 0.007023045912239053
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 1495654541
- CO2 Emissions (in grams): 0.0070
## Validation Metrics
- Loss: 2.873
- SacreBLEU: 22.653
- Gen len: 7.114
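This AutoTrain card ships no usage snippet; here is a minimal sketch with the generic `transformers` translation pipeline (assumes the checkpoint is a standard seq2seq model, which the AutoTrain translation task implies):
```python
from transformers import pipeline

translator = pipeline("translation", model="Tritkoman/Kvenfinnishtranslator")
print(translator("The weather is nice today.")[0]["translation_text"])
```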
|
dumitrescustefan/gpt-neo-romanian-780m
|
dumitrescustefan
| 2022-09-17T18:24:19Z | 260 | 12 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"romanian",
"text generation",
"causal lm",
"gpt-neo",
"ro",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-29T15:31:26Z |
---
language:
- ro
license: mit
tags:
- romanian
- text generation
- causal lm
- gpt-neo
---
# GPT-Neo Romanian 780M
This model is a GPT-Neo transformer decoder model, based on EleutherAI's replication of the GPT-3 architecture.
It was trained on a thoroughly cleaned corpus of Romanian text of about 40GB composed of Oscar, Opus, Wikipedia, literature and various other bits and pieces of text, joined together and deduplicated. It was trained for about a month, totaling 1.5M steps on a v3-32 TPU machine.
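A minimal generation sketch with the `transformers` pipeline (the Romanian prompt and sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="dumitrescustefan/gpt-neo-romanian-780m")
print(generator("România este", max_new_tokens=40, do_sample=True, top_p=0.9)[0]["generated_text"])
```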
### Authors:
* Dumitrescu Stefan
* Mihai Ilie
### Evaluation
Evaluation to be added soon, also on [https://github.com/dumitrescustefan/Romanian-Transformers](https://github.com/dumitrescustefan/Romanian-Transformers)
### Acknowledgements
Thanks [TPU Research Cloud](https://sites.research.google/trc/about/) for the TPUv3 machine needed to train this model!
|
sd-concepts-library/hiten-style-nao
|
sd-concepts-library
| 2022-09-17T17:52:12Z | 0 | 26 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-17T17:43:38Z |
---
license: mit
---
### NOTE: trained with Waifu Diffusion
<https://huggingface.co/hakurei/waifu-diffusion>
### hiten-style-nao on Stable Diffusion
Artist: <https://www.pixiv.net/en/users/490219>
This is the `<hiten-style-nao>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:








|
sd-concepts-library/mechasoulall
|
sd-concepts-library
| 2022-09-17T17:44:02Z | 0 | 21 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-17T17:43:55Z |
---
license: mit
---
### mechasoulall on Stable Diffusion
This is the `<mechasoulall>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:










































|
sd-concepts-library/durer-style
|
sd-concepts-library
| 2022-09-17T16:36:56Z | 0 | 7 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-17T16:36:49Z |
---
license: mit
---
### Dürer style on Stable Diffusion
This is the `<drr-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
sd-concepts-library/led-toy
|
sd-concepts-library
| 2022-09-17T16:33:57Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-17T16:33:50Z |
---
license: mit
---
### led-toy on Stable Diffusion
This is the `<led-toy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
sd-concepts-library/she-hulk-law-art
|
sd-concepts-library
| 2022-09-17T16:10:47Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-17T16:10:35Z |
---
license: mit
---
### She-Hulk Law Art on Stable Diffusion
This is the `<shehulk-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
theojolliffe/pegasus-model-3-x25
|
theojolliffe
| 2022-09-17T15:48:03Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-17T14:27:08Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-model-3-x25
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-model-3-x25
This model is a fine-tuned version of [theojolliffe/pegasus-cnn_dailymail-v4-e1-e4-feedback](https://huggingface.co/theojolliffe/pegasus-cnn_dailymail-v4-e1-e4-feedback) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5668
- Rouge1: 61.9972
- Rouge2: 48.1531
- Rougel: 48.845
- Rougelsum: 59.5019
- Gen Len: 123.0814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:--------:|
| 1.144 | 1.0 | 883 | 0.5668 | 61.9972 | 48.1531 | 48.845 | 59.5019 | 123.0814 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Tritkoman/Interlinguetranslator
|
Tritkoman
| 2022-09-17T15:45:24Z | 94 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"translation",
"en",
"es",
"dataset:Tritkoman/autotrain-data-akakka",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-09-17T15:07:31Z |
---
tags:
- autotrain
- translation
language:
- en
- es
datasets:
- Tritkoman/autotrain-data-akakka
co2_eq_emissions:
emissions: 0.26170356193686023
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 1492154444
- CO2 Emissions (in grams): 0.2617
## Validation Metrics
- Loss: 0.770
- SacreBLEU: 62.097
- Gen len: 8.635
|
matemato/q-Taxi-v3
|
matemato
| 2022-09-17T15:11:44Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-17T15:11:35Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the Deep RL Class notebooks.
model = load_from_hub(repo_id="matemato/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
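For reference, `load_from_hub` in the course material is a thin wrapper around `huggingface_hub.hf_hub_download`; a minimal sketch of what it does (the exact signature is assumed from the course notebooks):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-Learning model dict from the Hub and unpickle it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```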
|
Eksperymenty/Pong-PLE-v0
|
Eksperymenty
| 2022-09-17T14:44:18Z | 0 | 0 | null |
[
"Pong-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-17T14:44:08Z |
---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pong-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pong-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
DeividasM/finetuning-sentiment-model-3000-samples
|
DeividasM
| 2022-09-17T13:05:46Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-17T12:51:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.877887788778878
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3275
- Accuracy: 0.8767
- F1: 0.8779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
jayanta/swin-base-patch4-window7-224-20epochs-finetuned-memes
|
jayanta
| 2022-09-17T13:02:25Z | 216 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-09-17T12:07:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-base-patch4-window7-224-20epochs-finetuned-memes
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.847758887171561
- task:
type: image-classification
name: Image Classification
dataset:
type: custom
name: custom
split: test
metrics:
- type: f1
value: 0.8504084378729573
name: F1
- type: precision
value: 0.8519647060733512
name: Precision
- type: recall
value: 0.8523956723338485
name: Recall
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-base-patch4-window7-224-20epochs-finetuned-memes
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7090
- Accuracy: 0.8478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0238 | 0.99 | 40 | 0.9636 | 0.6445 |
| 0.777 | 1.99 | 80 | 0.6591 | 0.7666 |
| 0.4763 | 2.99 | 120 | 0.5381 | 0.8130 |
| 0.3215 | 3.99 | 160 | 0.5244 | 0.8253 |
| 0.2179 | 4.99 | 200 | 0.5123 | 0.8238 |
| 0.1868 | 5.99 | 240 | 0.5052 | 0.8308 |
| 0.154 | 6.99 | 280 | 0.5444 | 0.8338 |
| 0.1166 | 7.99 | 320 | 0.6318 | 0.8238 |
| 0.1099 | 8.99 | 360 | 0.5656 | 0.8338 |
| 0.0925 | 9.99 | 400 | 0.6057 | 0.8338 |
| 0.0779 | 10.99 | 440 | 0.5942 | 0.8393 |
| 0.0629 | 11.99 | 480 | 0.6112 | 0.8400 |
| 0.0742 | 12.99 | 520 | 0.6588 | 0.8331 |
| 0.0752 | 13.99 | 560 | 0.6143 | 0.8408 |
| 0.0577 | 14.99 | 600 | 0.6450 | 0.8516 |
| 0.0589 | 15.99 | 640 | 0.6787 | 0.8400 |
| 0.0555 | 16.99 | 680 | 0.6641 | 0.8454 |
| 0.052 | 17.99 | 720 | 0.7213 | 0.8524 |
| 0.0589 | 18.99 | 760 | 0.6917 | 0.8470 |
| 0.0506 | 19.99 | 800 | 0.7090 | 0.8478 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
test1234678/distilbert-base-uncased-distilled-clinc
|
test1234678
| 2022-09-17T12:34:43Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-17T07:24:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: train
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9461290322580646
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2712
- Accuracy: 0.9461
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2629 | 1.0 | 318 | 1.6048 | 0.7368 |
| 1.2437 | 2.0 | 636 | 0.8148 | 0.8565 |
| 0.6604 | 3.0 | 954 | 0.4768 | 0.9161 |
| 0.4054 | 4.0 | 1272 | 0.3548 | 0.9352 |
| 0.2987 | 5.0 | 1590 | 0.3084 | 0.9419 |
| 0.2549 | 6.0 | 1908 | 0.2909 | 0.9435 |
| 0.232 | 7.0 | 2226 | 0.2804 | 0.9458 |
| 0.221 | 8.0 | 2544 | 0.2749 | 0.9458 |
| 0.2145 | 9.0 | 2862 | 0.2722 | 0.9468 |
| 0.2112 | 10.0 | 3180 | 0.2712 | 0.9461 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.10.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Shamus/NLLB-600m-vie_Latn-to-eng_Latn
|
Shamus
| 2022-09-17T11:54:50Z | 107 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-17T03:28:00Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: NLLB-600m-vie_Latn-to-eng_Latn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLLB-600m-vie_Latn-to-eng_Latn
This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1189
- Bleu: 36.6767
- Gen Len: 47.504
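
A minimal translation sketch (assumes the standard NLLB tokenizer conventions; the Vietnamese example sentence is illustrative):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Shamus/NLLB-600m-vie_Latn-to-eng_Latn"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="vie_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Hôm nay trời đẹp quá.", return_tensors="pt")
# NLLB models expect the target language to be forced as the first generated token.
out = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
    max_length=64,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```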
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.9294 | 2.24 | 1000 | 1.5970 | 23.6201 | 48.1 |
| 1.4 | 4.47 | 2000 | 1.3216 | 28.9526 | 45.156 |
| 1.2071 | 6.71 | 3000 | 1.2245 | 32.5538 | 46.576 |
| 1.0893 | 8.95 | 4000 | 1.1720 | 34.265 | 46.052 |
| 1.0064 | 11.19 | 5000 | 1.1497 | 34.9249 | 46.508 |
| 0.9562 | 13.42 | 6000 | 1.1331 | 36.4619 | 47.244 |
| 0.9183 | 15.66 | 7000 | 1.1247 | 36.4723 | 47.26 |
| 0.8858 | 17.9 | 8000 | 1.1198 | 36.7058 | 47.376 |
| 0.8651 | 20.13 | 9000 | 1.1201 | 36.7897 | 47.496 |
| 0.8546 | 22.37 | 10000 | 1.1189 | 36.6767 | 47.504 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sd-concepts-library/uzumaki
|
sd-concepts-library
| 2022-09-17T11:40:47Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-17T11:40:41Z |
---
license: mit
---
### UZUMAKI on Stable Diffusion
This is the `<NARUTO>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:
















|
pnr-svc/distilbert-turkish-ner
|
pnr-svc
| 2022-09-17T11:09:26Z | 104 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:ner-tr",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-17T10:53:29Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- ner-tr
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-turkish-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: ner-tr
type: ner-tr
config: NERTR
split: train
args: NERTR
metrics:
- name: Precision
type: precision
value: 1.0
- name: Recall
type: recall
value: 1.0
- name: F1
type: f1
value: 1.0
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-turkish-ner
This model is a fine-tuned version of [dbmdz/distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) on the ner-tr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0013
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 0.5744 | 1.0 | 529 | 0.0058 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0094 | 2.0 | 1058 | 0.0017 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0047 | 3.0 | 1587 | 0.0013 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
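A minimal inference sketch with the `transformers` token-classification pipeline (the Turkish sentence is illustrative; given the perfect 1.0 metrics above, real-world outputs are worth sanity-checking):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pnr-svc/distilbert-turkish-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Mustafa Kemal Atatürk 1881'de Selanik'te doğdu."))
```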
|
LanYiU/distilbert-base-uncased-finetuned-imdb
|
LanYiU
| 2022-09-17T11:04:50Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-09-17T10:55:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4738
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7 | 1.0 | 157 | 2.4988 |
| 2.5821 | 2.0 | 314 | 2.4242 |
| 2.541 | 3.0 | 471 | 2.4371 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.9.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
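A minimal fill-mask sketch (DistilBERT uses the `[MASK]` token; the example sentence is illustrative):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="LanYiU/distilbert-base-uncased-finetuned-imdb")
for pred in fill("This movie was an absolute [MASK]."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```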
|
Eksperymenty/Reinforce-CartPole-v1
|
Eksperymenty
| 2022-09-17T10:09:00Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-17T10:07:54Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 445.10 +/- 56.96
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Gxl/MINI
|
Gxl
| 2022-09-17T08:24:39Z | 0 | 0 | null |
[
"license:afl-3.0",
"region:us"
] | null | 2022-09-07T11:45:56Z |
---
license: afl-3.0
---
11
# 1
23
3224
342
## 324
432455
23445
455
#### 32424
34442
|
sd-concepts-library/ouroboros
|
sd-concepts-library
| 2022-09-17T02:34:14Z | 0 | 3 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-17T02:34:09Z |
---
license: mit
---
### Ouroboros on Stable Diffusion
This is the `<ouroboros>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:








|
sd-concepts-library/dtv-pkmn
|
sd-concepts-library
| 2022-09-17T01:25:50Z | 0 | 5 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-13T23:08:57Z |
---
license: mit
---
### dtv-pkmn on Stable Diffusion
This is the `<dtv-pkm2>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

`"hyperdetailed fantasy (monster) (dragon-like) character on top of a rock in the style of <dtv-pkm2> . extremely detailed, amazing artwork with depth and realistic CINEMATIC lighting, matte painting"`
Here is the new concept you will be able to use as a `style`:




|
g30rv17ys/ddpm-geeve-dme-1000-128
|
g30rv17ys
| 2022-09-16T22:45:49Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-09-16T20:29:37Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-dme-1000-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch, assuming the standard DDPMPipeline API (the repo tags list diffusers:DDPMPipeline)
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-geeve-dme-1000-128")
pipeline().images[0].save("sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-dme-1000-128/tensorboard?#scalars)
|
g30rv17ys/ddpm-geeve-cnv-1000-128
|
g30rv17ys
| 2022-09-16T22:44:56Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-09-16T20:19:10Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-cnv-1000-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch, assuming the standard DDPMPipeline API (the repo tags list diffusers:DDPMPipeline)
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-geeve-cnv-1000-128")
pipeline().images[0].save("sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-cnv-1000-128/tensorboard?#scalars)
|
sd-concepts-library/jamie-hewlett-style
|
sd-concepts-library
| 2022-09-16T22:32:42Z | 0 | 14 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-16T22:32:38Z |
---
license: mit
---
### Jamie Hewlett Style on Stable Diffusion
This is the `<hewlett>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:






|
rhiga/a2c-AntBulletEnv-v0
|
rhiga
| 2022-09-16T22:26:26Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-16T22:25:06Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 1742.04 +/- 217.69
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
The card's template leaves this as a TODO; a minimal sketch, assuming the usual `huggingface_sb3` filename convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The checkpoint filename follows the common "<algo>-<env>.zip" convention and is an assumption.
checkpoint = load_from_hub(repo_id="rhiga/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
matemato/q-FrozenLake-v1-4x4-noSlippery
|
matemato
| 2022-09-16T22:04:18Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-16T22:04:10Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the Deep RL Class notebooks.
model = load_from_hub(repo_id="matemato/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
sd-concepts-library/lugal-ki-en
|
sd-concepts-library
| 2022-09-16T19:32:47Z | 0 | 14 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-16T05:58:43Z |
---
title: Lugal Ki EN
emoji: 🪐
colorFrom: gray
colorTo: red
sdk: gradio
sdk_version: 3.3
app_file: app.py
pinned: false
license: mit
---
### Lugal ki en on Stable Diffusion
This is the `<lugal-ki-en>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
sd-concepts-library/harmless-ai-house-style-1
|
sd-concepts-library
| 2022-09-16T19:21:04Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-16T19:20:03Z |
---
license: mit
---
### Harmless ai house style 1 on Stable Diffusion
This is the `<bee-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:
*(two sample-image links lost to extraction; their filenames embed the prompt "The computer is the enemy of transhumanity, detailed, beautiful masterpiece, unreal engine, 4k")*





|
sanchit-gandhi/wav2vec2-ctc-earnings22-baseline-5-gram
|
sanchit-gandhi
| 2022-09-16T18:50:03Z | 70 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-09-16T18:34:22Z |
Unrolled PyTorch and Flax weights of https://huggingface.co/sanchit-gandhi/flax-wav2vec2-ctc-earnings22-baseline/tree/main
|
MayaGalvez/bert-base-multilingual-cased-finetuned-pos
|
MayaGalvez
| 2022-09-16T18:35:53Z | 104 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-16T18:16:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-multilingual-cased-finetuned-pos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-pos
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1736
- Precision: 0.9499
- Recall: 0.9504
- F1: 0.9501
- Accuracy: 0.9551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7663 | 0.27 | 200 | 0.2047 | 0.9318 | 0.9312 | 0.9315 | 0.9388 |
| 0.5539 | 0.53 | 400 | 0.1815 | 0.9381 | 0.9404 | 0.9392 | 0.9460 |
| 0.5222 | 0.8 | 600 | 0.1787 | 0.9400 | 0.9424 | 0.9412 | 0.9468 |
| 0.5084 | 1.07 | 800 | 0.1591 | 0.9470 | 0.9463 | 0.9467 | 0.9519 |
| 0.4703 | 1.33 | 1000 | 0.1622 | 0.9456 | 0.9458 | 0.9457 | 0.9510 |
| 0.5005 | 1.6 | 1200 | 0.1666 | 0.9470 | 0.9464 | 0.9467 | 0.9519 |
| 0.4677 | 1.87 | 1400 | 0.1583 | 0.9483 | 0.9483 | 0.9483 | 0.9532 |
| 0.4704 | 2.13 | 1600 | 0.1635 | 0.9472 | 0.9475 | 0.9473 | 0.9528 |
| 0.4639 | 2.4 | 1800 | 0.1569 | 0.9475 | 0.9488 | 0.9482 | 0.9536 |
| 0.4627 | 2.67 | 2000 | 0.1605 | 0.9474 | 0.9478 | 0.9476 | 0.9527 |
| 0.4608 | 2.93 | 2200 | 0.1535 | 0.9485 | 0.9495 | 0.9490 | 0.9538 |
| 0.4306 | 3.2 | 2400 | 0.1646 | 0.9489 | 0.9487 | 0.9488 | 0.9536 |
| 0.4583 | 3.47 | 2600 | 0.1642 | 0.9488 | 0.9495 | 0.9491 | 0.9539 |
| 0.453 | 3.73 | 2800 | 0.1646 | 0.9498 | 0.9505 | 0.9501 | 0.9554 |
| 0.4347 | 4.0 | 3000 | 0.1629 | 0.9494 | 0.9504 | 0.9499 | 0.9552 |
| 0.4425 | 4.27 | 3200 | 0.1738 | 0.9495 | 0.9502 | 0.9498 | 0.9550 |
| 0.4335 | 4.53 | 3400 | 0.1733 | 0.9499 | 0.9506 | 0.9503 | 0.9550 |
| 0.4306 | 4.8 | 3600 | 0.1736 | 0.9499 | 0.9504 | 0.9501 | 0.9551 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
wyu1/FiD-NQ
|
wyu1
| 2022-09-16T16:34:33Z | 47 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"license:cc-by-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-08-18T22:15:17Z |
---
license: cc-by-4.0
---
# FiD model trained on NQ
-- This is the model checkpoint of FiD [2], based on T5 large (770M parameters) and trained on the Natural Questions (NQ) dataset [1].
-- Hyperparameters: 8 x 40GB A100 GPUs; batch size 8; AdamW; LR 3e-5; 50000 steps
References:
[1] Natural Questions: A Benchmark for Question Answering Research. TACL 2019.
[2] Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering. EACL 2021.
## Model performance
We evaluate it on the NQ dataset; the EM score is 51.3 (0.1 lower than the performance reported in the original paper).
|
shamr9/autotrain-firsttransformersproject-1478954182
|
shamr9
| 2022-09-16T15:46:18Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"summarization",
"ar",
"dataset:shamr9/autotrain-data-firsttransformersproject",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-09-16T05:53:23Z |
---
tags:
- autotrain
- summarization
language:
- ar
widget:
- text: "I love AutoTrain 🤗"
datasets:
- shamr9/autotrain-data-firsttransformersproject
co2_eq_emissions:
emissions: 5.113476145275885
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1478954182
- CO2 Emissions (in grams): 5.1135
## Validation Metrics
- Loss: 0.534
- Rouge1: 4.247
- Rouge2: 0.522
- RougeL: 4.260
- RougeLsum: 4.241
- Gen Len: 18.928
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/shamr9/autotrain-firsttransformersproject-1478954182
```
|
sd-concepts-library/diaosu-toy
|
sd-concepts-library
| 2022-09-16T14:53:35Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-16T14:53:28Z |
---
license: mit
---
### diaosu toy on Stable Diffusion
This is the `<diaosu-toy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:



|
bibekitani123/finetuning-sentiment-model-3000-samples
|
bibekitani123
| 2022-09-16T14:46:45Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-15T21:05:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8684210526315789
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3132
- Accuracy: 0.8667
- F1: 0.8684
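## Usage
A minimal inference sketch with the `pipeline` API; mapping `LABEL_0`/`LABEL_1` to negative/positive is an assumption unless `id2label` was set during training:
```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="bibekitani123/finetuning-sentiment-model-3000-samples",
)
# Output like [{'label': 'LABEL_1', 'score': 0.98}]; LABEL_1 = positive is
# an assumption unless id2label was configured during training.
print(classifier("This movie was a pleasant surprise."))
```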
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
pyronear/rexnet1_5x
|
pyronear
| 2022-09-16T12:47:25Z | 64 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:pyronear/openfire",
"arxiv:2007.00992",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-17T20:30:57Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- pyronear/openfire
---
# ReXNet-1.5x model
Pretrained on a dataset for wildfire binary classification (soon to be shared). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf).
## Model description
The authors' core idea is to add a customized Squeeze-and-Excitation layer to the residual blocks to prevent channel redundancy.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows:
```shell
pip install pyrovision
```
or using [conda](https://anaconda.org/pyronear/pyrovision):
```shell
conda install -c pyronear pyrovision
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/pyronear/pyro-vision.git
pip install -e pyro-vision/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from pyrovision.models import model_from_hf_hub
model = model_from_hf_hub("pyronear/rexnet1_5x").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
author = {Dongyoon Han and
Sangdoo Yun and
Byeongho Heo and
Young Joon Yoo},
title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
Network},
journal = {CoRR},
volume = {abs/2007.00992},
year = {2020},
url = {https://arxiv.org/abs/2007.00992},
eprinttype = {arXiv},
eprint = {2007.00992},
timestamp = {Mon, 06 Jul 2020 15:26:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
pyronear/rexnet1_3x
|
pyronear
| 2022-09-16T12:46:31Z | 65 | 1 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"image-classification",
"dataset:pyronear/openfire",
"arxiv:2007.00992",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-17T20:30:22Z |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- pyronear/openfire
---
# ReXNet-1.3x model
Pretrained on a dataset for wildfire binary classification (soon to be shared). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf).
## Model description
The authors' core idea is to add a customized Squeeze-and-Excitation layer to the residual blocks to prevent channel redundancy.
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision.
### Latest stable release
You can install the last stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows:
```shell
pip install pyrovision
```
or using [conda](https://anaconda.org/pyronear/pyrovision):
```shell
conda install -c pyronear pyrovision
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/pyronear/pyro-vision.git
pip install -e pyro-vision/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from pyrovision.models import model_from_hf_hub
model = model_from_hf_hub("pyronear/rexnet1_3x").eval()
img = Image.open(path_to_an_image).convert("RGB")
# Preprocessing
config = model.default_cfg
transform = Compose([
Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
PILToTensor(),
ConvertImageDtype(torch.float32),
Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)
# Inference
with torch.inference_mode():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
author = {Dongyoon Han and
Sangdoo Yun and
Byeongho Heo and
Young Joon Yoo},
title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
Network},
journal = {CoRR},
volume = {abs/2007.00992},
year = {2020},
url = {https://arxiv.org/abs/2007.00992},
eprinttype = {arXiv},
eprint = {2007.00992},
timestamp = {Mon, 06 Jul 2020 15:26:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
test1234678/distilbert-base-uncased-finetuned-clinc
|
test1234678
| 2022-09-16T12:22:33Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-16T12:17:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: train
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9151612903225806
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7773
- Accuracy: 0.9152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.293 | 1.0 | 318 | 3.2831 | 0.7432 |
| 2.6252 | 2.0 | 636 | 1.8743 | 0.8310 |
| 1.5406 | 3.0 | 954 | 1.1575 | 0.8939 |
| 1.0105 | 4.0 | 1272 | 0.8626 | 0.9094 |
| 0.7962 | 5.0 | 1590 | 0.7773 | 0.9152 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.10.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
dwisaji/SentimentBert
|
dwisaji
| 2022-09-16T12:09:42Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-16T12:01:39Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SentimentBert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SentimentBert
This model is a fine-tuned version of [cahya/bert-base-indonesian-522M](https://huggingface.co/cahya/bert-base-indonesian-522M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2005
- Accuracy: 0.965
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 275 | 0.7807 | 0.715 |
| 0.835 | 2.0 | 550 | 1.0588 | 0.635 |
| 0.835 | 3.0 | 825 | 0.2764 | 0.94 |
| 0.5263 | 4.0 | 1100 | 0.1913 | 0.97 |
| 0.5263 | 5.0 | 1375 | 0.2005 | 0.965 |
### Framework versions
- Transformers 4.22.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
MGanesh29/parrot_paraphraser_on_T5-finetuned-xsum-v5
|
MGanesh29
| 2022-09-16T11:40:33Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-16T09:35:53Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: parrot_paraphraser_on_T5-finetuned-xsum-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# parrot_paraphraser_on_T5-finetuned-xsum-v5
This model is a fine-tuned version of [prithivida/parrot_paraphraser_on_T5](https://huggingface.co/prithivida/parrot_paraphraser_on_T5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0345
- Rouge1: 86.5078
- Rouge2: 84.8978
- Rougel: 86.4798
- Rougelsum: 86.4726
- Gen Len: 17.8462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.0663 | 1.0 | 2002 | 0.0539 | 86.0677 | 84.063 | 86.0423 | 86.0313 | 17.8671 |
| 0.0449 | 2.0 | 4004 | 0.0388 | 86.4564 | 84.7606 | 86.432 | 86.4212 | 17.8501 |
| 0.0269 | 3.0 | 6006 | 0.0347 | 86.4997 | 84.8907 | 86.4814 | 86.4744 | 17.8501 |
| 0.023 | 4.0 | 8008 | 0.0345 | 86.5078 | 84.8978 | 86.4798 | 86.4726 | 17.8462 |
### Framework versions
- Transformers 4.22.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
slplab/wav2vec2-xls-r-300m-japanese-hiragana
|
slplab
| 2022-09-16T11:01:54Z | 76 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ja",
"dataset:common_voice",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-09-16T07:34:58Z |
---
language: ja
datasets:
- common_voice
metrics:
- wer
- cer
model-index:
- name: wav2vec2-xls-r-300m finetuned on Japanese Hiragana with no word boundaries by Hyungshin Ryu of SLPlab
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice Japanese
type: common_voice
args: ja
metrics:
- name: Test WER
type: wer
value: 90.66
- name: Test CER
type: cer
value: 19.35
---
# Wav2Vec2-XLS-R-300M-Japanese-Hiragana
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on Japanese Hiragana characters using the [Common Voice](https://huggingface.co/datasets/common_voice) and [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut) datasets.
The sentence outputs do not contain word boundaries. Audio inputs should be sampled at 16kHz.
## Usage
The model can be used directly as follows:
```python
!pip install mecab-python3
!pip install unidic-lite
!pip install pykakasi
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import pykakasi
import MeCab
import re
# load datasets, processor, and model
test_dataset = load_dataset("common_voice", "ja", split="test")
wer = load_metric("wer")
cer = load_metric("cer")
PTM = "slplab/wav2vec2-xls-r-300m-japanese-hiragana"
print("PTM:", PTM)
processor = Wav2Vec2Processor.from_pretrained(PTM)
model = Wav2Vec2ForCTC.from_pretrained(PTM)
device = "cuda"
model.to(device)
# preprocess datasets
wakati = MeCab.Tagger("-Owakati")
kakasi = pykakasi.kakasi()
chars_to_ignore_regex = "[、,。]"
def speech_file_to_array_fn_hiragana_nospace(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).strip()
batch["sentence"] = ''.join([d['hira'] for d in kakasi.convert(batch["sentence"])])
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16000)
batch["speech"] = resampler(speech_array).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn_hiragana_nospace)
#evaluate
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(device)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
for i in range(10):
print("="*20)
print("Prd:", result[i]["pred_strings"])
print("Ref:", result[i]["sentence"])
print("WER: {:2f}%".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}%".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
| Original Text | Prediction |
| ------------- | ------------- |
| この料理は家庭で作れます。 | このりょうりはかていでつくれます |
| 日本人は、決して、ユーモアと無縁な人種ではなかった。 | にっぽんじんはけしてゆうもあどむえんなじんしゅではなかった |
| 木村さんに電話を貸してもらいました。 | きむらさんにでんわおかしてもらいました |
## Test Results
**WER:** 90.66%,
**CER:** 19.35%
## Training
Trained on JSUT and train+valid set of Common Voice Japanese. Tested on test set of Common Voice Japanese.
|
g30rv17ys/ddpm-geeve-128
|
g30rv17ys
| 2022-09-16T10:13:42Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-09-16T07:46:35Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
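Pending the snippet above, a minimal sketch based on this repository's `DDPMPipeline` tag (assumes a recent `diffusers` release where pipeline calls return `.images`):
```python
from diffusers import DDPMPipeline

# Unconditional generation; the 128x128 resolution is assumed from the model name.
pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-geeve-128")
image = pipeline(batch_size=1).images[0]
image.save("ddpm_geeve_sample.png")
```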
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-128/tensorboard?#scalars)
|
dwisaji/Modelroberta
|
dwisaji
| 2022-09-16T09:03:17Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:indonlu",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-16T08:46:21Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- indonlu
model-index:
- name: Modelroberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Modelroberta
This model is a fine-tuned version of [cahya/roberta-base-indonesian-522M](https://huggingface.co/cahya/roberta-base-indonesian-522M) on the indonlu dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.22.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sd-concepts-library/seamless-ground
|
sd-concepts-library
| 2022-09-16T07:36:36Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-16T07:20:22Z |
---
license: mit
---
### `<seamless-ground>` on Stable Diffusion
This is the `<seamless-ground>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
"a red and black seamless-ground, seamless texture, game art, material, rock and stone"
<img src="https://cdn.discordapp.com/attachments/1017208763964465182/1020235891496726569/allthe.png">
|
Sebabrata/lmv2-g-voterid-117-doc-09-13
|
Sebabrata
| 2022-09-16T07:27:09Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-16T06:55:09Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: lmv2-g-voterid-117-doc-09-13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmv2-g-voterid-117-doc-09-13
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1322
- Age Precision: 1.0
- Age Recall: 1.0
- Age F1: 1.0
- Age Number: 3
- Dob Precision: 1.0
- Dob Recall: 1.0
- Dob F1: 1.0
- Dob Number: 5
- F H M Name Precision: 0.7917
- F H M Name Recall: 0.7917
- F H M Name F1: 0.7917
- F H M Name Number: 24
- Name Precision: 0.8462
- Name Recall: 0.9167
- Name F1: 0.8800
- Name Number: 24
- Sex Precision: 1.0
- Sex Recall: 1.0
- Sex F1: 1.0
- Sex Number: 8
- Voter Id Precision: 0.92
- Voter Id Recall: 0.9583
- Voter Id F1: 0.9388
- Voter Id Number: 24
- Overall Precision: 0.8791
- Overall Recall: 0.9091
- Overall F1: 0.8939
- Overall Accuracy: 0.9836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Age Precision | Age Recall | Age F1 | Age Number | Dob Precision | Dob Recall | Dob F1 | Dob Number | F H M Name Precision | F H M Name Recall | F H M Name F1 | F H M Name Number | Name Precision | Name Recall | Name F1 | Name Number | Sex Precision | Sex Recall | Sex F1 | Sex Number | Voter Id Precision | Voter Id Recall | Voter Id F1 | Voter Id Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|:----------:|:------:|:----------:|:-------------:|:----------:|:------:|:----------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:--------------:|:-----------:|:-------:|:-----------:|:-------------:|:----------:|:------:|:----------:|:------------------:|:---------------:|:-----------:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.5488 | 1.0 | 93 | 1.2193 | 0.0 | 0.0 | 0.0 | 3 | 0.0 | 0.0 | 0.0 | 5 | 0.0 | 0.0 | 0.0 | 24 | 0.0 | 0.0 | 0.0 | 24 | 0.0 | 0.0 | 0.0 | 8 | 1.0 | 0.0833 | 0.1538 | 24 | 1.0 | 0.0227 | 0.0444 | 0.9100 |
| 1.0594 | 2.0 | 186 | 0.8695 | 0.0 | 0.0 | 0.0 | 3 | 0.0 | 0.0 | 0.0 | 5 | 0.0 | 0.0 | 0.0 | 24 | 0.0 | 0.0 | 0.0 | 24 | 0.0 | 0.0 | 0.0 | 8 | 0.6286 | 0.9167 | 0.7458 | 24 | 0.6286 | 0.25 | 0.3577 | 0.9173 |
| 0.763 | 3.0 | 279 | 0.6057 | 0.0 | 0.0 | 0.0 | 3 | 0.0 | 0.0 | 0.0 | 5 | 0.0667 | 0.0417 | 0.0513 | 24 | 0.0 | 0.0 | 0.0 | 24 | 0.0 | 0.0 | 0.0 | 8 | 0.6875 | 0.9167 | 0.7857 | 24 | 0.4694 | 0.2614 | 0.3358 | 0.9228 |
| 0.5241 | 4.0 | 372 | 0.4257 | 0.0 | 0.0 | 0.0 | 3 | 0.0 | 0.0 | 0.0 | 5 | 0.0 | 0.0 | 0.0 | 24 | 0.2381 | 0.4167 | 0.3030 | 24 | 0.0 | 0.0 | 0.0 | 8 | 0.7097 | 0.9167 | 0.8000 | 24 | 0.4384 | 0.3636 | 0.3975 | 0.9331 |
| 0.3847 | 5.0 | 465 | 0.3317 | 0.0 | 0.0 | 0.0 | 3 | 0.3333 | 0.4 | 0.3636 | 5 | 0.3889 | 0.2917 | 0.3333 | 24 | 0.2745 | 0.5833 | 0.3733 | 24 | 1.0 | 0.75 | 0.8571 | 8 | 0.88 | 0.9167 | 0.8980 | 24 | 0.4811 | 0.5795 | 0.5258 | 0.9574 |
| 0.3015 | 6.0 | 558 | 0.2654 | 0.0 | 0.0 | 0.0 | 3 | 0.3333 | 0.4 | 0.3636 | 5 | 0.48 | 0.5 | 0.4898 | 24 | 0.4737 | 0.75 | 0.5806 | 24 | 0.8889 | 1.0 | 0.9412 | 8 | 0.8462 | 0.9167 | 0.8800 | 24 | 0.5962 | 0.7045 | 0.6458 | 0.9653 |
| 0.2233 | 7.0 | 651 | 0.2370 | 1.0 | 0.6667 | 0.8 | 3 | 0.6667 | 0.8 | 0.7273 | 5 | 0.6957 | 0.6667 | 0.6809 | 24 | 0.625 | 0.8333 | 0.7143 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.8148 | 0.9167 | 0.8627 | 24 | 0.7347 | 0.8182 | 0.7742 | 0.9726 |
| 0.1814 | 8.0 | 744 | 0.2190 | 0.5 | 1.0 | 0.6667 | 3 | 0.6667 | 0.8 | 0.7273 | 5 | 0.6818 | 0.625 | 0.6522 | 24 | 0.7 | 0.875 | 0.7778 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.88 | 0.9167 | 0.8980 | 24 | 0.7526 | 0.8295 | 0.7892 | 0.9708 |
| 0.1547 | 9.0 | 837 | 0.1815 | 1.0 | 0.6667 | 0.8 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.7391 | 0.7083 | 0.7234 | 24 | 0.8 | 0.8333 | 0.8163 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.9583 | 0.9583 | 0.9583 | 24 | 0.8621 | 0.8523 | 0.8571 | 0.9836 |
| 0.1258 | 10.0 | 930 | 0.1799 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.5714 | 0.6667 | 0.6154 | 24 | 0.6897 | 0.8333 | 0.7547 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.92 | 0.9583 | 0.9388 | 24 | 0.7653 | 0.8523 | 0.8065 | 0.9805 |
| 0.1088 | 11.0 | 1023 | 0.1498 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.7037 | 0.7917 | 0.7451 | 24 | 0.7586 | 0.9167 | 0.8302 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.9583 | 0.9583 | 0.9583 | 24 | 0.8333 | 0.9091 | 0.8696 | 0.9842 |
| 0.0916 | 12.0 | 1116 | 0.1572 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.76 | 0.7917 | 0.7755 | 24 | 0.7241 | 0.875 | 0.7925 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.8519 | 0.9583 | 0.9020 | 24 | 0.8144 | 0.8977 | 0.8541 | 0.9805 |
| 0.0821 | 13.0 | 1209 | 0.1763 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.7391 | 0.7083 | 0.7234 | 24 | 0.7692 | 0.8333 | 0.8 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.9545 | 0.875 | 0.9130 | 24 | 0.8506 | 0.8409 | 0.8457 | 0.9812 |
| 0.0733 | 14.0 | 1302 | 0.1632 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.6538 | 0.7083 | 0.68 | 24 | 0.6452 | 0.8333 | 0.7273 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.9565 | 0.9167 | 0.9362 | 24 | 0.7812 | 0.8523 | 0.8152 | 0.9757 |
| 0.0691 | 15.0 | 1395 | 0.1536 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.75 | 0.75 | 0.75 | 24 | 0.7692 | 0.8333 | 0.8 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.88 | 0.9167 | 0.8980 | 24 | 0.8352 | 0.8636 | 0.8492 | 0.9812 |
| 0.063 | 16.0 | 1488 | 0.1420 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.7391 | 0.7083 | 0.7234 | 24 | 0.8519 | 0.9583 | 0.9020 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.9565 | 0.9167 | 0.9362 | 24 | 0.8764 | 0.8864 | 0.8814 | 0.9842 |
| 0.0565 | 17.0 | 1581 | 0.2375 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.7647 | 0.5417 | 0.6341 | 24 | 0.7727 | 0.7083 | 0.7391 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.9565 | 0.9167 | 0.9362 | 24 | 0.8718 | 0.7727 | 0.8193 | 0.9775 |
| 0.0567 | 18.0 | 1674 | 0.1838 | 0.75 | 1.0 | 0.8571 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.75 | 0.5 | 0.6 | 24 | 0.7407 | 0.8333 | 0.7843 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.9583 | 0.9583 | 0.9583 | 24 | 0.8452 | 0.8068 | 0.8256 | 0.9775 |
| 0.0515 | 19.0 | 1767 | 0.1360 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.6538 | 0.7083 | 0.68 | 24 | 0.8077 | 0.875 | 0.8400 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.9583 | 0.9583 | 0.9583 | 24 | 0.8370 | 0.875 | 0.8556 | 0.9830 |
| 0.0484 | 20.0 | 1860 | 0.1505 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.7391 | 0.7083 | 0.7234 | 24 | 0.875 | 0.875 | 0.875 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.9545 | 0.875 | 0.9130 | 24 | 0.8824 | 0.8523 | 0.8671 | 0.9842 |
| 0.0444 | 21.0 | 1953 | 0.1718 | 0.75 | 1.0 | 0.8571 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.6 | 0.625 | 0.6122 | 24 | 0.7407 | 0.8333 | 0.7843 | 24 | 0.8889 | 1.0 | 0.9412 | 8 | 0.9565 | 0.9167 | 0.9362 | 24 | 0.7849 | 0.8295 | 0.8066 | 0.9787 |
| 0.0449 | 22.0 | 2046 | 0.1626 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.7727 | 0.7083 | 0.7391 | 24 | 0.84 | 0.875 | 0.8571 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.9167 | 0.9167 | 0.9167 | 24 | 0.8736 | 0.8636 | 0.8686 | 0.9812 |
| 0.0355 | 23.0 | 2139 | 0.1532 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.8095 | 0.7083 | 0.7556 | 24 | 0.8462 | 0.9167 | 0.8800 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.9167 | 0.9167 | 0.9167 | 24 | 0.8851 | 0.875 | 0.8800 | 0.9824 |
| 0.0356 | 24.0 | 2232 | 0.1612 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.7391 | 0.7083 | 0.7234 | 24 | 0.84 | 0.875 | 0.8571 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.9545 | 0.875 | 0.9130 | 24 | 0.8721 | 0.8523 | 0.8621 | 0.9830 |
| 0.0332 | 25.0 | 2325 | 0.1237 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.7391 | 0.7083 | 0.7234 | 24 | 0.8846 | 0.9583 | 0.9200 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.92 | 0.9583 | 0.9388 | 24 | 0.8778 | 0.8977 | 0.8876 | 0.9848 |
| 0.029 | 26.0 | 2418 | 0.1259 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.7083 | 0.7083 | 0.7083 | 24 | 0.88 | 0.9167 | 0.8980 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.9545 | 0.875 | 0.9130 | 24 | 0.8736 | 0.8636 | 0.8686 | 0.9860 |
| 0.0272 | 27.0 | 2511 | 0.1316 | 0.75 | 1.0 | 0.8571 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.75 | 0.75 | 0.75 | 24 | 0.8214 | 0.9583 | 0.8846 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.92 | 0.9583 | 0.9388 | 24 | 0.8511 | 0.9091 | 0.8791 | 0.9799 |
| 0.0265 | 28.0 | 2604 | 0.1369 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.8095 | 0.7083 | 0.7556 | 24 | 0.7931 | 0.9583 | 0.8679 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.9565 | 0.9167 | 0.9362 | 24 | 0.8764 | 0.8864 | 0.8814 | 0.9830 |
| 0.0271 | 29.0 | 2697 | 0.1078 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.7143 | 0.8333 | 0.7692 | 24 | 0.8 | 0.8333 | 0.8163 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.9583 | 0.9583 | 0.9583 | 24 | 0.8495 | 0.8977 | 0.8729 | 0.9848 |
| 0.0219 | 30.0 | 2790 | 0.1322 | 1.0 | 1.0 | 1.0 | 3 | 1.0 | 1.0 | 1.0 | 5 | 0.7917 | 0.7917 | 0.7917 | 24 | 0.8462 | 0.9167 | 0.8800 | 24 | 1.0 | 1.0 | 1.0 | 8 | 0.92 | 0.9583 | 0.9388 | 24 | 0.8791 | 0.9091 | 0.8939 | 0.9836 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Tritkoman/autotrain-gahhaha-1478754178
|
Tritkoman
| 2022-09-16T06:11:41Z | 85 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"translation",
"es",
"en",
"dataset:Tritkoman/autotrain-data-gahhaha",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-09-16T05:42:56Z |
---
tags:
- autotrain
- translation
language:
- es
- en
datasets:
- Tritkoman/autotrain-data-gahhaha
co2_eq_emissions:
emissions: 39.86630127427062
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 1478754178
- CO2 Emissions (in grams): 39.8663
## Validation Metrics
- Loss: 1.716
- SacreBLEU: 9.095
- Gen len: 11.146
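## Usage
A minimal inference sketch, assuming the checkpoint exposes a standard seq2seq translation head:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Tritkoman/autotrain-gahhaha-1478754178")
model = AutoModelForSeq2SeqLM.from_pretrained("Tritkoman/autotrain-gahhaha-1478754178")

# Spanish-to-English translation, per the model's language tags.
inputs = tokenizer("La vida es sueño.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```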
|
fatimaseemab/wav2vec2-urdu
|
fatimaseemab
| 2022-09-16T05:51:23Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-09-16T05:09:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-urdu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-urdu
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.22.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
SALT-NLP/pfadapter-bert-base-uncased-stsb-combined-value
|
SALT-NLP
| 2022-09-16T04:48:01Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"en",
"dataset:glue",
"region:us"
] | null | 2022-09-16T04:47:54Z |
---
tags:
- bert
- adapter-transformers
datasets:
- glue
language:
- en
---
# Adapter `SALT-NLP/pfadapter-bert-base-uncased-stsb-combined-value` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [glue](https://huggingface.co/datasets/glue/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("SALT-NLP/pfadapter-bert-base-uncased-stsb-combined-value", source="hf", set_active=True)
```
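As a follow-up, a minimal inference sketch (STS-B is a 0–5 sentence-pair similarity regression; reading the score from `outputs.logits` is an assumption about this prediction head):
```python
from transformers import AutoAdapterModel, AutoTokenizer

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
model.load_adapter(
    "SALT-NLP/pfadapter-bert-base-uncased-stsb-combined-value",
    source="hf",
    set_active=True,
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# STS-B scores sentence pairs for similarity on a 0-5 scale.
inputs = tokenizer(
    "A man is playing a guitar.",
    "A person is playing an instrument.",
    return_tensors="pt",
)
outputs = model(**inputs)
print(outputs.logits)  # assumed to hold the regression score for the pair
```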
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
microsoft/layoutlmv2-base-uncased
|
microsoft
| 2022-09-16T03:40:56Z | 693,838 | 62 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv2",
"en",
"arxiv:2012.14740",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
license: cc-by-nc-sa-4.0
---
# LayoutLMv2
**Multimodal (text + layout/format + image) pre-training for document AI**
The documentation of this model in the Transformers library can be found [here](https://huggingface.co/docs/transformers/model_doc/layoutlmv2).
[Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/) | [GitHub](https://github.com/microsoft/unilm/tree/master/layoutlmv2)
## Introduction
LayoutLMv2 is an improved version of LayoutLM with new pre-training tasks to model the interaction among text, layout, and image in a single multi-modal framework. It outperforms strong baselines and achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks, including FUNSD (0.7895 → 0.8420), CORD (0.9493 → 0.9601), SROIE (0.9524 → 0.9781), Kleister-NDA (0.834 → 0.852), RVL-CDIP (0.9443 → 0.9564), and DocVQA (0.7295 → 0.8672).
[LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740)
Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou, ACL 2021
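## Usage
A minimal feature-extraction sketch (the processor's built-in OCR requires `pytesseract`, and LayoutLMv2's visual backbone requires `detectron2`):
```python
from PIL import Image
from transformers import LayoutLMv2Model, LayoutLMv2Processor

processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2Model.from_pretrained("microsoft/layoutlmv2-base-uncased")

image = Image.open("document.png").convert("RGB")  # any document scan
encoding = processor(image, return_tensors="pt")   # runs OCR to get words + boxes
outputs = model(**encoding)
print(outputs.last_hidden_state.shape)
```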
|
HYPJUDY/layoutlmv3-large-finetuned-funsd
|
HYPJUDY
| 2022-09-16T03:18:44Z | 170 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"arxiv:2204.08387",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-18T18:06:30Z |
---
license: cc-by-nc-sa-4.0
---
# layoutlmv3-large-finetuned-funsd
The model [layoutlmv3-large-finetuned-funsd](https://huggingface.co/HYPJUDY/layoutlmv3-large-finetuned-funsd) is fine-tuned on the FUNSD dataset initialized from [microsoft/layoutlmv3-large](https://huggingface.co/microsoft/layoutlmv3-large).
This finetuned model achieves an F1 score of 92.15 on the test split of the FUNSD dataset.
[Paper](https://arxiv.org/pdf/2204.08387.pdf) | [Code](https://aka.ms/layoutlmv3) | [Microsoft Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)
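A minimal token-classification sketch (assumes `pytesseract` for the processor's built-in OCR; label names come from the model config):
```python
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-large", apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained(
    "HYPJUDY/layoutlmv3-large-finetuned-funsd"
)

image = Image.open("form.png").convert("RGB")  # a scanned form, as in FUNSD
encoding = processor(image, return_tensors="pt")
predictions = model(**encoding).logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```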
If you find LayoutLMv3 helpful, please cite the following paper:
```
@inproceedings{huang2022layoutlmv3,
author={Yupan Huang and Tengchao Lv and Lei Cui and Yutong Lu and Furu Wei},
title={LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking},
booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
year={2022}
}
```
## License
The content of this project itself is licensed under the [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/).
Portions of the source code are based on the [transformers](https://github.com/huggingface/transformers) project.
[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct)
|
sd-concepts-library/wayne-reynolds-character
|
sd-concepts-library
| 2022-09-16T03:10:09Z | 0 | 5 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-16T03:10:03Z |
---
license: mit
---
### Wayne Reynolds Character on Stable Diffusion
This is the `<warcharport>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:


























|
sd-concepts-library/ganyu-genshin-impact
|
sd-concepts-library
| 2022-09-16T02:54:13Z | 0 | 22 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-16T02:54:10Z |
---
license: mit
---
### Ganyu (Genshin Impact) on Stable Diffusion
This is the `<ganyu>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
mikedodge/t5-small-finetuned-xsum
|
mikedodge
| 2022-09-16T02:23:09Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-15T20:00:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
config: default
split: train
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.2804
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4789
- Rouge1: 28.2804
- Rouge2: 7.7039
- Rougel: 22.2002
- Rougelsum: 22.2019
- Gen Len: 18.8238
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.711 | 1.0 | 12753 | 2.4789 | 28.2804 | 7.7039 | 22.2002 | 22.2019 | 18.8238 |
### Framework versions
- Transformers 4.22.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sd-concepts-library/milady
|
sd-concepts-library
| 2022-09-16T01:59:10Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-16T01:58:59Z |
---
license: mit
---
### milady on Stable Diffusion
This is the `<milady>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
sd-concepts-library/hydrasuit
|
sd-concepts-library
| 2022-09-16T01:50:23Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-16T01:50:17Z |
---
license: mit
---
### Hydrasuit on Stable Diffusion
This is the `<hydrasuit>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
sd-concepts-library/luinv2
|
sd-concepts-library
| 2022-09-16T01:04:43Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-16T01:04:31Z |
---
license: mit
---
### luinv2 on Stable Diffusion
This is the `<luin-waifu>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
sd-concepts-library/csgo-awp-texture-map
|
sd-concepts-library
| 2022-09-16T00:32:03Z | 0 | 3 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-16T00:31:57Z |
---
license: mit
---
### csgo_awp_texture_map on Stable Diffusion
This is the `<csgo_awp_texture>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
rajistics/donut-base-sroiev2
|
rajistics
| 2022-09-15T23:44:13Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-09-15T23:08:07Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroiev2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroiev2
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Isaacp/xlm-roberta-base-finetuned-panx-en
|
Isaacp
| 2022-09-15T23:30:58Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-15T23:10:20Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.7032474804031354
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3932
- F1: 0.7032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1504 | 1.0 | 50 | 0.5992 | 0.4786 |
| 0.5147 | 2.0 | 100 | 0.4307 | 0.6468 |
| 0.3717 | 3.0 | 150 | 0.3932 | 0.7032 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Isaacp/xlm-roberta-base-finetuned-panx-fr
|
Isaacp
| 2022-09-15T22:48:39Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-15T22:25:15Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8299296953465015
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2848
- F1: 0.8299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5989 | 1.0 | 191 | 0.3383 | 0.7928 |
| 0.2617 | 2.0 | 382 | 0.2966 | 0.8318 |
| 0.1672 | 3.0 | 573 | 0.2848 | 0.8299 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
sd-concepts-library/a-hat-kid
|
sd-concepts-library
| 2022-09-15T22:03:52Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-15T22:03:46Z |
---
license: mit
---
### A Hat kid on Stable Diffusion
This is the `<hatintime-kid>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
sd-concepts-library/backrooms
|
sd-concepts-library
| 2022-09-15T21:32:42Z | 0 | 12 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-15T21:32:37Z |
---
license: mit
---
### Backrooms on Stable Diffusion
This is the `<Backrooms>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:



|
JImenezDaniel88/distResume-Classification-parser
|
JImenezDaniel88
| 2022-09-15T19:47:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-09-15T18:32:09Z |
# YaleParser Resume Classification
**YaleParser** is a Python tool for NLP classification tasks that builds databases from its predictions. The model is a fine-tuning of named-entity-recognition and zero-shot sequence classifiers: the sequence to be classified is posed as an NLI premise, a hypothesis is constructed from each candidate label with Bayesian weights, and the database is then built stepwise with regex.
### Design
```
predict_single('''08/1992-05/1996 BA, Biology, West Virginia University, Morgantown, WV''')
# 'Education'
```
| Class | Precision | Recall | F1-score | Support |
|:------|:---------:|:------:|:--------:|:-------:|
| Administrative Position | 0.73 | 0.73 | 0.73 | 49 |
| Appointments | 0.73 | 0.84 | 0.79 | 115 |
| Bibliography | 0.94 | 0.83 | 0.88 | 87 |
| Board Certification | 0.94 | 0.77 | 0.85 | 44 |
| Education | 0.86 | 0.86 | 0.86 | 100 |
| Grants/Clinical Trials | 0.94 | 0.85 | 0.89 | 40 |
| Other | 0.69 | 0.77 | 0.73 | 156 |
| Patents | 0.98 | 0.98 | 0.98 | 43 |
| Professional Honors | 0.80 | 0.85 | 0.82 | 170 |
| Professional Service | 0.85 | 0.61 | 0.71 | 85 |
| **accuracy** | | | 0.81 | 889 |
| **macro avg** | 0.85 | 0.81 | 0.82 | 889 |
| **weighted avg** | 0.82 | 0.81 | 0.81 | 889 |
|
gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v2
|
gary109
| 2022-09-15T18:55:19Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-07-16T01:44:48Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v2
This model is a fine-tuned version of [gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v2](https://huggingface.co/gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v2) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4660
- Wer: 0.2274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 500.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4528 | 1.0 | 72 | 0.4860 | 0.2236 |
| 0.4403 | 2.0 | 144 | 0.4814 | 0.2222 |
| 0.4309 | 3.0 | 216 | 0.4952 | 0.2238 |
| 0.4193 | 4.0 | 288 | 0.4864 | 0.2190 |
| 0.427 | 5.0 | 360 | 0.5071 | 0.2261 |
| 0.4342 | 6.0 | 432 | 0.4932 | 0.2218 |
| 0.4205 | 7.0 | 504 | 0.4869 | 0.2222 |
| 0.437 | 8.0 | 576 | 0.5125 | 0.2224 |
| 0.4316 | 9.0 | 648 | 0.5095 | 0.2285 |
| 0.4383 | 10.0 | 720 | 0.5398 | 0.2346 |
| 0.4431 | 11.0 | 792 | 0.5177 | 0.2259 |
| 0.4555 | 12.0 | 864 | 0.5246 | 0.2335 |
| 0.4488 | 13.0 | 936 | 0.5248 | 0.2277 |
| 0.4449 | 14.0 | 1008 | 0.5196 | 0.2254 |
| 0.4629 | 15.0 | 1080 | 0.4933 | 0.2297 |
| 0.4565 | 16.0 | 1152 | 0.5469 | 0.2297 |
| 0.4396 | 17.0 | 1224 | 0.5356 | 0.2439 |
| 0.4452 | 18.0 | 1296 | 0.5298 | 0.2510 |
| 0.4449 | 19.0 | 1368 | 0.5024 | 0.2291 |
| 0.4437 | 20.0 | 1440 | 0.5288 | 0.2374 |
| 0.4572 | 21.0 | 1512 | 0.4954 | 0.2344 |
| 0.4633 | 22.0 | 1584 | 0.5043 | 0.2361 |
| 0.4486 | 23.0 | 1656 | 0.5076 | 0.2250 |
| 0.4386 | 24.0 | 1728 | 0.5564 | 0.2492 |
| 0.4478 | 25.0 | 1800 | 0.5299 | 0.2236 |
| 0.4654 | 26.0 | 1872 | 0.5076 | 0.2276 |
| 0.453 | 27.0 | 1944 | 0.5666 | 0.2395 |
| 0.4474 | 28.0 | 2016 | 0.5026 | 0.2254 |
| 0.4465 | 29.0 | 2088 | 0.5216 | 0.2352 |
| 0.4689 | 30.0 | 2160 | 0.5293 | 0.2370 |
| 0.4467 | 31.0 | 2232 | 0.4856 | 0.2303 |
| 0.4379 | 32.0 | 2304 | 0.5089 | 0.2240 |
| 0.4302 | 33.0 | 2376 | 0.4958 | 0.2173 |
| 0.4417 | 34.0 | 2448 | 0.5392 | 0.2337 |
| 0.4458 | 35.0 | 2520 | 0.5229 | 0.2416 |
| 0.4415 | 36.0 | 2592 | 0.5280 | 0.2344 |
| 0.4621 | 37.0 | 2664 | 0.5362 | 0.2459 |
| 0.44 | 38.0 | 2736 | 0.5071 | 0.2285 |
| 0.4288 | 39.0 | 2808 | 0.5264 | 0.2313 |
| 0.4594 | 40.0 | 2880 | 0.5238 | 0.2306 |
| 0.4428 | 41.0 | 2952 | 0.5375 | 0.2286 |
| 0.4233 | 42.0 | 3024 | 0.5214 | 0.2254 |
| 0.4462 | 43.0 | 3096 | 0.5145 | 0.2450 |
| 0.4282 | 44.0 | 3168 | 0.5519 | 0.2254 |
| 0.454 | 45.0 | 3240 | 0.5401 | 0.2382 |
| 0.4494 | 46.0 | 3312 | 0.5117 | 0.2229 |
| 0.4292 | 47.0 | 3384 | 0.5295 | 0.2352 |
| 0.4321 | 48.0 | 3456 | 0.4953 | 0.2299 |
| 0.4145 | 49.0 | 3528 | 0.5233 | 0.2297 |
| 0.4278 | 50.0 | 3600 | 0.5151 | 0.2258 |
| 0.4395 | 51.0 | 3672 | 0.4660 | 0.2274 |
| 0.4298 | 52.0 | 3744 | 0.5083 | 0.2409 |
| 0.4279 | 53.0 | 3816 | 0.4855 | 0.2219 |
| 0.4164 | 54.0 | 3888 | 0.5074 | 0.2267 |
| 0.4386 | 55.0 | 3960 | 0.5016 | 0.2241 |
| 0.4497 | 56.0 | 4032 | 0.5378 | 0.2305 |
| 0.4267 | 57.0 | 4104 | 0.5199 | 0.2344 |
| 0.4083 | 58.0 | 4176 | 0.5134 | 0.2249 |
| 0.4163 | 59.0 | 4248 | 0.4975 | 0.2316 |
| 0.4271 | 60.0 | 4320 | 0.5298 | 0.2291 |
| 0.43 | 61.0 | 4392 | 0.4991 | 0.2289 |
| 0.437 | 62.0 | 4464 | 0.5154 | 0.2298 |
| 0.415 | 63.0 | 4536 | 0.5167 | 0.2224 |
| 0.4308 | 64.0 | 4608 | 0.5324 | 0.2287 |
| 0.4247 | 65.0 | 4680 | 0.5396 | 0.2224 |
| 0.4076 | 66.0 | 4752 | 0.5354 | 0.2274 |
| 0.4196 | 67.0 | 4824 | 0.5523 | 0.2225 |
| 0.4216 | 68.0 | 4896 | 0.5180 | 0.2166 |
| 0.4132 | 69.0 | 4968 | 0.5111 | 0.2212 |
| 0.4306 | 70.0 | 5040 | 0.5534 | 0.2416 |
| 0.4327 | 71.0 | 5112 | 0.5628 | 0.2473 |
| 0.4301 | 72.0 | 5184 | 0.5216 | 0.2252 |
| 0.4328 | 73.0 | 5256 | 0.5154 | 0.2250 |
| 0.4021 | 74.0 | 5328 | 0.5686 | 0.2245 |
| 0.465 | 75.0 | 5400 | 0.5236 | 0.2419 |
| 0.416 | 76.0 | 5472 | 0.5614 | 0.2365 |
| 0.4337 | 77.0 | 5544 | 0.5275 | 0.2302 |
| 0.4157 | 78.0 | 5616 | 0.5126 | 0.2293 |
| 0.4143 | 79.0 | 5688 | 0.5260 | 0.2376 |
| 0.4174 | 80.0 | 5760 | 0.5254 | 0.2317 |
| 0.4174 | 81.0 | 5832 | 0.4971 | 0.2191 |
| 0.4082 | 82.0 | 5904 | 0.5245 | 0.2320 |
| 0.4263 | 83.0 | 5976 | 0.5692 | 0.2401 |
| 0.4164 | 84.0 | 6048 | 0.5209 | 0.2312 |
| 0.4144 | 85.0 | 6120 | 0.5164 | 0.2340 |
| 0.4189 | 86.0 | 6192 | 0.5545 | 0.2459 |
| 0.4311 | 87.0 | 6264 | 0.5349 | 0.2477 |
| 0.4224 | 88.0 | 6336 | 0.5093 | 0.2375 |
| 0.4069 | 89.0 | 6408 | 0.5664 | 0.2443 |
| 0.4082 | 90.0 | 6480 | 0.5426 | 0.2391 |
| 0.411 | 91.0 | 6552 | 0.5219 | 0.2339 |
| 0.4085 | 92.0 | 6624 | 0.5468 | 0.2360 |
| 0.4012 | 93.0 | 6696 | 0.5514 | 0.2526 |
| 0.3863 | 94.0 | 6768 | 0.5440 | 0.2344 |
| 0.4098 | 95.0 | 6840 | 0.5355 | 0.2362 |
| 0.4136 | 96.0 | 6912 | 0.5400 | 0.2409 |
| 0.4066 | 97.0 | 6984 | 0.5117 | 0.2313 |
| 0.4131 | 98.0 | 7056 | 0.5365 | 0.2375 |
| 0.3852 | 99.0 | 7128 | 0.5172 | 0.2326 |
| 0.3935 | 100.0 | 7200 | 0.5085 | 0.2296 |
| 0.4093 | 101.0 | 7272 | 0.5650 | 0.2525 |
| 0.3938 | 102.0 | 7344 | 0.5246 | 0.2324 |
| 0.4016 | 103.0 | 7416 | 0.5084 | 0.2292 |
| 0.412 | 104.0 | 7488 | 0.5308 | 0.2211 |
| 0.3903 | 105.0 | 7560 | 0.5047 | 0.2201 |
| 0.396 | 106.0 | 7632 | 0.5302 | 0.2223 |
| 0.3891 | 107.0 | 7704 | 0.5367 | 0.2222 |
| 0.3886 | 108.0 | 7776 | 0.5459 | 0.2328 |
| 0.379 | 109.0 | 7848 | 0.5486 | 0.2340 |
| 0.4009 | 110.0 | 7920 | 0.5080 | 0.2186 |
| 0.3967 | 111.0 | 7992 | 0.5389 | 0.2193 |
| 0.3988 | 112.0 | 8064 | 0.5488 | 0.2281 |
| 0.3952 | 113.0 | 8136 | 0.5409 | 0.2294 |
| 0.3884 | 114.0 | 8208 | 0.5304 | 0.2326 |
| 0.3939 | 115.0 | 8280 | 0.5542 | 0.2211 |
| 0.3927 | 116.0 | 8352 | 0.5676 | 0.2259 |
| 0.3944 | 117.0 | 8424 | 0.5221 | 0.2210 |
| 0.3941 | 118.0 | 8496 | 0.5474 | 0.2247 |
| 0.3912 | 119.0 | 8568 | 0.5451 | 0.2185 |
| 0.4209 | 120.0 | 8640 | 0.5282 | 0.2282 |
| 0.3882 | 121.0 | 8712 | 0.5263 | 0.2184 |
| 0.3891 | 122.0 | 8784 | 0.5301 | 0.2194 |
| 0.3964 | 123.0 | 8856 | 0.5608 | 0.2220 |
| 0.3918 | 124.0 | 8928 | 0.5233 | 0.2230 |
| 0.3834 | 125.0 | 9000 | 0.5286 | 0.2195 |
| 0.3952 | 126.0 | 9072 | 0.5410 | 0.2258 |
| 0.3812 | 127.0 | 9144 | 0.5183 | 0.2207 |
| 0.3904 | 128.0 | 9216 | 0.5393 | 0.2244 |
| 0.3797 | 129.0 | 9288 | 0.5213 | 0.2226 |
| 0.3802 | 130.0 | 9360 | 0.5470 | 0.2207 |
| 0.4097 | 131.0 | 9432 | 0.5206 | 0.2254 |
| 0.3771 | 132.0 | 9504 | 0.5075 | 0.2182 |
| 0.3732 | 133.0 | 9576 | 0.5153 | 0.2255 |
| 0.3727 | 134.0 | 9648 | 0.5107 | 0.2212 |
| 0.3751 | 135.0 | 9720 | 0.5147 | 0.2259 |
| 0.3858 | 136.0 | 9792 | 0.5519 | 0.2220 |
| 0.3889 | 137.0 | 9864 | 0.5606 | 0.2222 |
| 0.3916 | 138.0 | 9936 | 0.5401 | 0.2252 |
| 0.3775 | 139.0 | 10008 | 0.5393 | 0.2269 |
| 0.3963 | 140.0 | 10080 | 0.5504 | 0.2322 |
| 0.3941 | 141.0 | 10152 | 0.5338 | 0.2342 |
| 0.3801 | 142.0 | 10224 | 0.5115 | 0.2276 |
| 0.3809 | 143.0 | 10296 | 0.4966 | 0.2261 |
| 0.3751 | 144.0 | 10368 | 0.4910 | 0.2240 |
| 0.3827 | 145.0 | 10440 | 0.5291 | 0.2204 |
| 0.384 | 146.0 | 10512 | 0.5702 | 0.2278 |
| 0.3728 | 147.0 | 10584 | 0.5340 | 0.2283 |
| 0.3963 | 148.0 | 10656 | 0.5513 | 0.2286 |
| 0.3802 | 149.0 | 10728 | 0.5424 | 0.2264 |
| 0.3874 | 150.0 | 10800 | 0.5219 | 0.2200 |
| 0.3743 | 151.0 | 10872 | 0.5147 | 0.2161 |
| 0.3931 | 152.0 | 10944 | 0.5318 | 0.2324 |
| 0.3755 | 153.0 | 11016 | 0.5457 | 0.2252 |
| 0.3744 | 154.0 | 11088 | 0.5448 | 0.2260 |
| 0.3799 | 155.0 | 11160 | 0.5276 | 0.2171 |
| 0.3953 | 156.0 | 11232 | 0.5546 | 0.2263 |
| 0.3716 | 157.0 | 11304 | 0.5110 | 0.2246 |
| 0.3725 | 158.0 | 11376 | 0.5385 | 0.2193 |
| 0.364 | 159.0 | 11448 | 0.5114 | 0.2216 |
| 0.3666 | 160.0 | 11520 | 0.5584 | 0.2248 |
| 0.3797 | 161.0 | 11592 | 0.5313 | 0.2238 |
| 0.3704 | 162.0 | 11664 | 0.5542 | 0.2281 |
| 0.362 | 163.0 | 11736 | 0.5674 | 0.2241 |
| 0.3551 | 164.0 | 11808 | 0.5484 | 0.2210 |
| 0.3765 | 165.0 | 11880 | 0.5380 | 0.2252 |
| 0.3821 | 166.0 | 11952 | 0.5441 | 0.2267 |
| 0.3608 | 167.0 | 12024 | 0.4983 | 0.2186 |
| 0.3595 | 168.0 | 12096 | 0.5065 | 0.2166 |
| 0.3652 | 169.0 | 12168 | 0.5211 | 0.2150 |
| 0.3635 | 170.0 | 12240 | 0.5341 | 0.2164 |
| 0.3614 | 171.0 | 12312 | 0.5059 | 0.2183 |
| 0.3522 | 172.0 | 12384 | 0.5530 | 0.2199 |
| 0.3522 | 173.0 | 12456 | 0.5581 | 0.2142 |
| 0.3503 | 174.0 | 12528 | 0.5394 | 0.2211 |
| 0.3583 | 175.0 | 12600 | 0.5460 | 0.2252 |
| 0.3562 | 176.0 | 12672 | 0.5199 | 0.2223 |
| 0.351 | 177.0 | 12744 | 0.5248 | 0.2146 |
| 0.3667 | 178.0 | 12816 | 0.5400 | 0.2169 |
| 0.3407 | 179.0 | 12888 | 0.5349 | 0.2095 |
| 0.3563 | 180.0 | 12960 | 0.5259 | 0.2116 |
| 0.3656 | 181.0 | 13032 | 0.5130 | 0.2115 |
| 0.3714 | 182.0 | 13104 | 0.5071 | 0.2151 |
| 0.3565 | 183.0 | 13176 | 0.5419 | 0.2205 |
| 0.3521 | 184.0 | 13248 | 0.5380 | 0.2250 |
| 0.3605 | 185.0 | 13320 | 0.5437 | 0.2230 |
| 0.3508 | 186.0 | 13392 | 0.5391 | 0.2225 |
| 0.3746 | 187.0 | 13464 | 0.5426 | 0.2274 |
| 0.3478 | 188.0 | 13536 | 0.5824 | 0.2247 |
| 0.3475 | 189.0 | 13608 | 0.5233 | 0.2103 |
| 0.3676 | 190.0 | 13680 | 0.5214 | 0.2122 |
| 0.3579 | 191.0 | 13752 | 0.5267 | 0.2124 |
| 0.3563 | 192.0 | 13824 | 0.5343 | 0.2132 |
| 0.3531 | 193.0 | 13896 | 0.5205 | 0.2205 |
| 0.3424 | 194.0 | 13968 | 0.5196 | 0.2196 |
| 0.3617 | 195.0 | 14040 | 0.5302 | 0.2222 |
| 0.3461 | 196.0 | 14112 | 0.5366 | 0.2204 |
| 0.3524 | 197.0 | 14184 | 0.5383 | 0.2212 |
| 0.3354 | 198.0 | 14256 | 0.5279 | 0.2166 |
| 0.3501 | 199.0 | 14328 | 0.5235 | 0.2165 |
| 0.3384 | 200.0 | 14400 | 0.5330 | 0.2152 |
| 0.3565 | 201.0 | 14472 | 0.5262 | 0.2211 |
| 0.3385 | 202.0 | 14544 | 0.5404 | 0.2173 |
| 0.3533 | 203.0 | 14616 | 0.5465 | 0.2209 |
| 0.3503 | 204.0 | 14688 | 0.5243 | 0.2223 |
| 0.3529 | 205.0 | 14760 | 0.5611 | 0.2276 |
| 0.3555 | 206.0 | 14832 | 0.5437 | 0.2209 |
| 0.3548 | 207.0 | 14904 | 0.5401 | 0.2249 |
| 0.3417 | 208.0 | 14976 | 0.5643 | 0.2304 |
| 0.3271 | 209.0 | 15048 | 0.5356 | 0.2183 |
| 0.344 | 210.0 | 15120 | 0.5300 | 0.2173 |
| 0.3416 | 211.0 | 15192 | 0.5343 | 0.2169 |
| 0.3393 | 212.0 | 15264 | 0.5677 | 0.2206 |
| 0.3356 | 213.0 | 15336 | 0.5514 | 0.2194 |
| 0.3344 | 214.0 | 15408 | 0.5527 | 0.2198 |
| 0.3303 | 215.0 | 15480 | 0.5590 | 0.2146 |
| 0.3503 | 216.0 | 15552 | 0.5681 | 0.2242 |
| 0.339 | 217.0 | 15624 | 0.5318 | 0.2186 |
| 0.3361 | 218.0 | 15696 | 0.5369 | 0.2247 |
| 0.334 | 219.0 | 15768 | 0.5173 | 0.2152 |
| 0.3222 | 220.0 | 15840 | 0.5965 | 0.2236 |
| 0.3247 | 221.0 | 15912 | 0.5543 | 0.2165 |
| 0.338 | 222.0 | 15984 | 0.5836 | 0.2178 |
| 0.3112 | 223.0 | 16056 | 0.5573 | 0.2171 |
| 0.3203 | 224.0 | 16128 | 0.5830 | 0.2196 |
| 0.3294 | 225.0 | 16200 | 0.5815 | 0.2198 |
| 0.3392 | 226.0 | 16272 | 0.5641 | 0.2163 |
| 0.3332 | 227.0 | 16344 | 0.5770 | 0.2204 |
| 0.3365 | 228.0 | 16416 | 0.5843 | 0.2181 |
| 0.3186 | 229.0 | 16488 | 0.5835 | 0.2231 |
| 0.3329 | 230.0 | 16560 | 0.5867 | 0.2220 |
| 0.3257 | 231.0 | 16632 | 0.6081 | 0.2196 |
| 0.3183 | 232.0 | 16704 | 0.5944 | 0.2220 |
| 0.3315 | 233.0 | 16776 | 0.6060 | 0.2222 |
| 0.3269 | 234.0 | 16848 | 0.6268 | 0.2260 |
| 0.3191 | 235.0 | 16920 | 0.5796 | 0.2183 |
| 0.3395 | 236.0 | 16992 | 0.6140 | 0.2257 |
| 0.3186 | 237.0 | 17064 | 0.6302 | 0.2277 |
| 0.3264 | 238.0 | 17136 | 0.5752 | 0.2194 |
| 0.3181 | 239.0 | 17208 | 0.6066 | 0.2196 |
| 0.3201 | 240.0 | 17280 | 0.6013 | 0.2223 |
| 0.3242 | 241.0 | 17352 | 0.5960 | 0.2207 |
| 0.3194 | 242.0 | 17424 | 0.6093 | 0.2311 |
| 0.3203 | 243.0 | 17496 | 0.6047 | 0.2281 |
| 0.3173 | 244.0 | 17568 | 0.6260 | 0.2285 |
| 0.3118 | 245.0 | 17640 | 0.5961 | 0.2243 |
| 0.3172 | 246.0 | 17712 | 0.6315 | 0.2242 |
| 0.332 | 247.0 | 17784 | 0.6413 | 0.2250 |
| 0.3315 | 248.0 | 17856 | 0.6260 | 0.2290 |
| 0.3222 | 249.0 | 17928 | 0.6175 | 0.2307 |
| 0.3291 | 250.0 | 18000 | 0.6005 | 0.2283 |
| 0.3321 | 251.0 | 18072 | 0.6299 | 0.2311 |
| 0.3338 | 252.0 | 18144 | 0.6011 | 0.2310 |
| 0.3274 | 253.0 | 18216 | 0.5662 | 0.2203 |
| 0.3148 | 254.0 | 18288 | 0.6139 | 0.2344 |
| 0.3295 | 255.0 | 18360 | 0.6183 | 0.2461 |
| 0.3169 | 256.0 | 18432 | 0.6136 | 0.2283 |
| 0.3431 | 257.0 | 18504 | 0.6445 | 0.2446 |
| 0.3209 | 258.0 | 18576 | 0.6124 | 0.2437 |
| 0.3405 | 259.0 | 18648 | 0.6210 | 0.2446 |
| 0.3317 | 260.0 | 18720 | 0.6088 | 0.2350 |
| 0.3265 | 261.0 | 18792 | 0.5792 | 0.2324 |
| 0.332 | 262.0 | 18864 | 0.6326 | 0.2427 |
| 0.3179 | 263.0 | 18936 | 0.6174 | 0.2256 |
| 0.3119 | 264.0 | 19008 | 0.6338 | 0.2277 |
| 0.3223 | 265.0 | 19080 | 0.6236 | 0.2213 |
| 0.315 | 266.0 | 19152 | 0.6025 | 0.2263 |
| 0.3214 | 267.0 | 19224 | 0.5881 | 0.2243 |
| 0.3184 | 268.0 | 19296 | 0.5942 | 0.2225 |
| 0.3083 | 269.0 | 19368 | 0.5836 | 0.2209 |
| 0.3098 | 270.0 | 19440 | 0.5844 | 0.2192 |
| 0.2992 | 271.0 | 19512 | 0.5972 | 0.2218 |
| 0.3118 | 272.0 | 19584 | 0.5768 | 0.2220 |
| 0.3112 | 273.0 | 19656 | 0.5926 | 0.2167 |
| 0.2994 | 274.0 | 19728 | 0.6056 | 0.2227 |
| 0.3041 | 275.0 | 19800 | 0.5793 | 0.2245 |
| 0.3072 | 276.0 | 19872 | 0.6188 | 0.2277 |
| 0.3042 | 277.0 | 19944 | 0.5931 | 0.2251 |
| 0.3107 | 278.0 | 20016 | 0.6205 | 0.2216 |
| 0.3077 | 279.0 | 20088 | 0.6001 | 0.2209 |
| 0.2903 | 280.0 | 20160 | 0.6002 | 0.2141 |
| 0.3124 | 281.0 | 20232 | 0.5782 | 0.2168 |
| 0.3043 | 282.0 | 20304 | 0.6105 | 0.2187 |
| 0.3007 | 283.0 | 20376 | 0.6105 | 0.2213 |
| 0.3023 | 284.0 | 20448 | 0.6011 | 0.2232 |
| 0.3062 | 285.0 | 20520 | 0.5967 | 0.2195 |
| 0.3093 | 286.0 | 20592 | 0.6571 | 0.2258 |
| 0.3041 | 287.0 | 20664 | 0.5956 | 0.2213 |
| 0.3083 | 288.0 | 20736 | 0.5904 | 0.2253 |
| 0.3037 | 289.0 | 20808 | 0.6096 | 0.2295 |
| 0.3064 | 290.0 | 20880 | 0.5958 | 0.2232 |
| 0.3136 | 291.0 | 20952 | 0.6134 | 0.2250 |
| 0.3042 | 292.0 | 21024 | 0.6144 | 0.2189 |
| 0.2967 | 293.0 | 21096 | 0.6086 | 0.2282 |
| 0.2952 | 294.0 | 21168 | 0.6178 | 0.2285 |
| 0.301 | 295.0 | 21240 | 0.5924 | 0.2189 |
| 0.3058 | 296.0 | 21312 | 0.6032 | 0.2193 |
| 0.2983 | 297.0 | 21384 | 0.5823 | 0.2183 |
| 0.2793 | 298.0 | 21456 | 0.5930 | 0.2195 |
| 0.2936 | 299.0 | 21528 | 0.6166 | 0.2215 |
| 0.298 | 300.0 | 21600 | 0.5864 | 0.2159 |
| 0.2949 | 301.0 | 21672 | 0.6049 | 0.2160 |
| 0.2948 | 302.0 | 21744 | 0.5745 | 0.2173 |
| 0.2809 | 303.0 | 21816 | 0.5699 | 0.2173 |
| 0.2854 | 304.0 | 21888 | 0.5894 | 0.2243 |
| 0.2908 | 305.0 | 21960 | 0.6123 | 0.2229 |
| 0.2948 | 306.0 | 22032 | 0.5966 | 0.2162 |
| 0.2997 | 307.0 | 22104 | 0.6030 | 0.2180 |
| 0.2906 | 308.0 | 22176 | 0.5920 | 0.2185 |
| 0.2778 | 309.0 | 22248 | 0.5913 | 0.2121 |
| 0.281 | 310.0 | 22320 | 0.6020 | 0.2121 |
| 0.2852 | 311.0 | 22392 | 0.5814 | 0.2170 |
| 0.278 | 312.0 | 22464 | 0.5931 | 0.2151 |
| 0.2743 | 313.0 | 22536 | 0.6073 | 0.2179 |
| 0.2757 | 314.0 | 22608 | 0.6174 | 0.2153 |
| 0.2907 | 315.0 | 22680 | 0.5729 | 0.2171 |
| 0.2801 | 316.0 | 22752 | 0.6014 | 0.2214 |
| 0.2908 | 317.0 | 22824 | 0.6098 | 0.2130 |
| 0.2824 | 318.0 | 22896 | 0.5942 | 0.2191 |
| 0.2799 | 319.0 | 22968 | 0.6374 | 0.2230 |
| 0.2725 | 320.0 | 23040 | 0.6424 | 0.2206 |
| 0.2821 | 321.0 | 23112 | 0.6465 | 0.2203 |
| 0.2795 | 322.0 | 23184 | 0.6163 | 0.2182 |
| 0.2764 | 323.0 | 23256 | 0.6257 | 0.2209 |
| 0.2739 | 324.0 | 23328 | 0.6374 | 0.2194 |
| 0.2712 | 325.0 | 23400 | 0.6228 | 0.2166 |
| 0.275 | 326.0 | 23472 | 0.6394 | 0.2214 |
| 0.275 | 327.0 | 23544 | 0.6359 | 0.2213 |
| 0.2702 | 328.0 | 23616 | 0.6430 | 0.2207 |
| 0.2676 | 329.0 | 23688 | 0.6321 | 0.2145 |
| 0.2735 | 330.0 | 23760 | 0.6583 | 0.2168 |
| 0.2815 | 331.0 | 23832 | 0.6368 | 0.2178 |
| 0.2823 | 332.0 | 23904 | 0.6373 | 0.2197 |
| 0.2885 | 333.0 | 23976 | 0.6352 | 0.2200 |
| 0.2751 | 334.0 | 24048 | 0.6431 | 0.2159 |
| 0.2717 | 335.0 | 24120 | 0.6339 | 0.2213 |
| 0.286 | 336.0 | 24192 | 0.6566 | 0.2245 |
| 0.2678 | 337.0 | 24264 | 0.6443 | 0.2194 |
| 0.2692 | 338.0 | 24336 | 0.6352 | 0.2225 |
| 0.273 | 339.0 | 24408 | 0.6497 | 0.2187 |
| 0.2686 | 340.0 | 24480 | 0.6788 | 0.2214 |
| 0.2699 | 341.0 | 24552 | 0.6615 | 0.2198 |
| 0.2636 | 342.0 | 24624 | 0.6765 | 0.2196 |
| 0.2545 | 343.0 | 24696 | 0.6737 | 0.2202 |
| 0.2612 | 344.0 | 24768 | 0.6891 | 0.2240 |
| 0.2705 | 345.0 | 24840 | 0.6550 | 0.2204 |
| 0.2658 | 346.0 | 24912 | 0.6591 | 0.2200 |
| 0.2701 | 347.0 | 24984 | 0.6222 | 0.2216 |
| 0.2743 | 348.0 | 25056 | 0.6263 | 0.2186 |
| 0.2657 | 349.0 | 25128 | 0.6509 | 0.2186 |
| 0.2635 | 350.0 | 25200 | 0.6570 | 0.2207 |
| 0.2601 | 351.0 | 25272 | 0.6496 | 0.2155 |
| 0.2695 | 352.0 | 25344 | 0.6305 | 0.2169 |
| 0.2586 | 353.0 | 25416 | 0.6269 | 0.2223 |
| 0.2529 | 354.0 | 25488 | 0.6418 | 0.2204 |
| 0.2739 | 355.0 | 25560 | 0.6472 | 0.2175 |
| 0.2738 | 356.0 | 25632 | 0.6416 | 0.2187 |
| 0.2775 | 357.0 | 25704 | 0.6470 | 0.2208 |
| 0.2775 | 358.0 | 25776 | 0.6483 | 0.2201 |
| 0.2622 | 359.0 | 25848 | 0.6233 | 0.2164 |
| 0.2727 | 360.0 | 25920 | 0.6438 | 0.2178 |
| 0.275 | 361.0 | 25992 | 0.6459 | 0.2222 |
| 0.2688 | 362.0 | 26064 | 0.6329 | 0.2188 |
| 0.2658 | 363.0 | 26136 | 0.6482 | 0.2207 |
| 0.2693 | 364.0 | 26208 | 0.6337 | 0.2194 |
| 0.2599 | 365.0 | 26280 | 0.6458 | 0.2189 |
| 0.2683 | 366.0 | 26352 | 0.6483 | 0.2213 |
| 0.2665 | 367.0 | 26424 | 0.6576 | 0.2203 |
| 0.2529 | 368.0 | 26496 | 0.6629 | 0.2200 |
| 0.2536 | 369.0 | 26568 | 0.6665 | 0.2208 |
| 0.2562 | 370.0 | 26640 | 0.6545 | 0.2171 |
| 0.2713 | 371.0 | 26712 | 0.6433 | 0.2231 |
| 0.2545 | 372.0 | 26784 | 0.6330 | 0.2202 |
| 0.2513 | 373.0 | 26856 | 0.6474 | 0.2154 |
| 0.2564 | 374.0 | 26928 | 0.6519 | 0.2191 |
| 0.266 | 375.0 | 27000 | 0.6577 | 0.2199 |
| 0.2623 | 376.0 | 27072 | 0.6508 | 0.2187 |
| 0.2666 | 377.0 | 27144 | 0.6358 | 0.2171 |
| 0.2503 | 378.0 | 27216 | 0.6515 | 0.2195 |
| 0.252 | 379.0 | 27288 | 0.6479 | 0.2221 |
| 0.2558 | 380.0 | 27360 | 0.6344 | 0.2203 |
| 0.2673 | 381.0 | 27432 | 0.6717 | 0.2196 |
| 0.2615 | 382.0 | 27504 | 0.6393 | 0.2178 |
| 0.2603 | 383.0 | 27576 | 0.6375 | 0.2167 |
| 0.2522 | 384.0 | 27648 | 0.6381 | 0.2195 |
| 0.2532 | 385.0 | 27720 | 0.6566 | 0.2209 |
| 0.2544 | 386.0 | 27792 | 0.6640 | 0.2231 |
| 0.2529 | 387.0 | 27864 | 0.6531 | 0.2207 |
| 0.2578 | 388.0 | 27936 | 0.6915 | 0.2202 |
| 0.2517 | 389.0 | 28008 | 0.6902 | 0.2238 |
| 0.2453 | 390.0 | 28080 | 0.6727 | 0.2249 |
| 0.2634 | 391.0 | 28152 | 0.6667 | 0.2235 |
| 0.2515 | 392.0 | 28224 | 0.6554 | 0.2212 |
| 0.249 | 393.0 | 28296 | 0.6672 | 0.2214 |
| 0.2524 | 394.0 | 28368 | 0.6693 | 0.2164 |
| 0.2529 | 395.0 | 28440 | 0.6572 | 0.2186 |
| 0.256 | 396.0 | 28512 | 0.6420 | 0.2171 |
| 0.2498 | 397.0 | 28584 | 0.6712 | 0.2168 |
| 0.2565 | 398.0 | 28656 | 0.6890 | 0.2175 |
| 0.2477 | 399.0 | 28728 | 0.6905 | 0.2185 |
| 0.2486 | 400.0 | 28800 | 0.7010 | 0.2191 |
| 0.259 | 401.0 | 28872 | 0.6983 | 0.2169 |
| 0.2555 | 402.0 | 28944 | 0.6877 | 0.2189 |
| 0.2579 | 403.0 | 29016 | 0.6864 | 0.2188 |
| 0.2421 | 404.0 | 29088 | 0.6603 | 0.2175 |
| 0.2531 | 405.0 | 29160 | 0.6882 | 0.2223 |
| 0.254 | 406.0 | 29232 | 0.6813 | 0.2209 |
| 0.2517 | 407.0 | 29304 | 0.6707 | 0.2205 |
| 0.2521 | 408.0 | 29376 | 0.6835 | 0.2234 |
| 0.2494 | 409.0 | 29448 | 0.6896 | 0.2216 |
| 0.2516 | 410.0 | 29520 | 0.6760 | 0.2218 |
| 0.2605 | 411.0 | 29592 | 0.7055 | 0.2207 |
| 0.2514 | 412.0 | 29664 | 0.6707 | 0.2232 |
| 0.242 | 413.0 | 29736 | 0.6853 | 0.2183 |
| 0.2505 | 414.0 | 29808 | 0.6869 | 0.2232 |
| 0.2398 | 415.0 | 29880 | 0.6732 | 0.2228 |
| 0.2549 | 416.0 | 29952 | 0.6559 | 0.2222 |
| 0.2496 | 417.0 | 30024 | 0.6675 | 0.2232 |
| 0.2538 | 418.0 | 30096 | 0.6695 | 0.2240 |
| 0.246 | 419.0 | 30168 | 0.6917 | 0.2268 |
| 0.2462 | 420.0 | 30240 | 0.6842 | 0.2288 |
| 0.2527 | 421.0 | 30312 | 0.6628 | 0.2207 |
| 0.2469 | 422.0 | 30384 | 0.6683 | 0.2225 |
| 0.2493 | 423.0 | 30456 | 0.6632 | 0.2189 |
| 0.239 | 424.0 | 30528 | 0.6848 | 0.2198 |
| 0.2373 | 425.0 | 30600 | 0.6834 | 0.2223 |
| 0.245 | 426.0 | 30672 | 0.6902 | 0.2251 |
| 0.239 | 427.0 | 30744 | 0.6917 | 0.2223 |
| 0.2441 | 428.0 | 30816 | 0.6859 | 0.2232 |
| 0.2306 | 429.0 | 30888 | 0.6844 | 0.2208 |
| 0.2373 | 430.0 | 30960 | 0.6740 | 0.2185 |
| 0.2495 | 431.0 | 31032 | 0.6823 | 0.2214 |
| 0.2457 | 432.0 | 31104 | 0.6686 | 0.2219 |
| 0.2474 | 433.0 | 31176 | 0.6856 | 0.2215 |
| 0.2434 | 434.0 | 31248 | 0.6876 | 0.2199 |
| 0.2377 | 435.0 | 31320 | 0.6827 | 0.2234 |
| 0.2566 | 436.0 | 31392 | 0.6920 | 0.2213 |
| 0.2384 | 437.0 | 31464 | 0.6734 | 0.2234 |
| 0.2477 | 438.0 | 31536 | 0.6992 | 0.2242 |
| 0.2347 | 439.0 | 31608 | 0.6837 | 0.2217 |
| 0.2345 | 440.0 | 31680 | 0.6852 | 0.2222 |
| 0.2457 | 441.0 | 31752 | 0.6891 | 0.2230 |
| 0.2512 | 442.0 | 31824 | 0.6976 | 0.2263 |
| 0.25 | 443.0 | 31896 | 0.6889 | 0.2232 |
| 0.2341 | 444.0 | 31968 | 0.6841 | 0.2266 |
| 0.252 | 445.0 | 32040 | 0.6981 | 0.2249 |
| 0.2486 | 446.0 | 32112 | 0.6958 | 0.2281 |
| 0.2402 | 447.0 | 32184 | 0.6826 | 0.2249 |
| 0.2477 | 448.0 | 32256 | 0.6867 | 0.2247 |
| 0.2304 | 449.0 | 32328 | 0.7022 | 0.2243 |
| 0.2376 | 450.0 | 32400 | 0.6948 | 0.2222 |
| 0.2388 | 451.0 | 32472 | 0.6771 | 0.2221 |
| 0.2544 | 452.0 | 32544 | 0.6841 | 0.2249 |
| 0.2428 | 453.0 | 32616 | 0.6886 | 0.2220 |
| 0.2438 | 454.0 | 32688 | 0.6903 | 0.2214 |
| 0.2463 | 455.0 | 32760 | 0.6781 | 0.2219 |
| 0.2355 | 456.0 | 32832 | 0.6784 | 0.2198 |
| 0.237 | 457.0 | 32904 | 0.6849 | 0.2231 |
| 0.2381 | 458.0 | 32976 | 0.6892 | 0.2220 |
| 0.23 | 459.0 | 33048 | 0.6782 | 0.2207 |
| 0.2359 | 460.0 | 33120 | 0.6789 | 0.2238 |
| 0.2382 | 461.0 | 33192 | 0.6829 | 0.2236 |
| 0.2438 | 462.0 | 33264 | 0.6928 | 0.2236 |
| 0.233 | 463.0 | 33336 | 0.6860 | 0.2216 |
| 0.2358 | 464.0 | 33408 | 0.6857 | 0.2236 |
| 0.2226 | 465.0 | 33480 | 0.6818 | 0.2202 |
| 0.2478 | 466.0 | 33552 | 0.6801 | 0.2222 |
| 0.2274 | 467.0 | 33624 | 0.6797 | 0.2203 |
| 0.2339 | 468.0 | 33696 | 0.6915 | 0.2224 |
| 0.2259 | 469.0 | 33768 | 0.6919 | 0.2220 |
| 0.2327 | 470.0 | 33840 | 0.6877 | 0.2225 |
| 0.2341 | 471.0 | 33912 | 0.6892 | 0.2235 |
| 0.2502 | 472.0 | 33984 | 0.6900 | 0.2227 |
| 0.234 | 473.0 | 34056 | 0.6839 | 0.2242 |
| 0.2289 | 474.0 | 34128 | 0.6885 | 0.2243 |
| 0.2311 | 475.0 | 34200 | 0.6911 | 0.2231 |
| 0.2374 | 476.0 | 34272 | 0.6834 | 0.2234 |
| 0.235 | 477.0 | 34344 | 0.6790 | 0.2223 |
| 0.2292 | 478.0 | 34416 | 0.6857 | 0.2233 |
| 0.2243 | 479.0 | 34488 | 0.6737 | 0.2243 |
| 0.235 | 480.0 | 34560 | 0.6831 | 0.2222 |
| 0.2337 | 481.0 | 34632 | 0.6769 | 0.2207 |
| 0.2258 | 482.0 | 34704 | 0.6784 | 0.2232 |
| 0.2276 | 483.0 | 34776 | 0.6917 | 0.2241 |
| 0.2379 | 484.0 | 34848 | 0.6806 | 0.2251 |
| 0.229 | 485.0 | 34920 | 0.6859 | 0.2232 |
| 0.2312 | 486.0 | 34992 | 0.6850 | 0.2236 |
| 0.2412 | 487.0 | 35064 | 0.6776 | 0.2221 |
| 0.2328 | 488.0 | 35136 | 0.6835 | 0.2230 |
| 0.2373 | 489.0 | 35208 | 0.6879 | 0.2222 |
| 0.234 | 490.0 | 35280 | 0.6868 | 0.2214 |
| 0.2274 | 491.0 | 35352 | 0.6869 | 0.2222 |
| 0.2332 | 492.0 | 35424 | 0.6861 | 0.2214 |
| 0.2291 | 493.0 | 35496 | 0.6881 | 0.2206 |
| 0.2301 | 494.0 | 35568 | 0.6877 | 0.2205 |
| 0.2258 | 495.0 | 35640 | 0.6898 | 0.2203 |
| 0.2351 | 496.0 | 35712 | 0.6883 | 0.2212 |
| 0.2345 | 497.0 | 35784 | 0.6915 | 0.2213 |
| 0.23 | 498.0 | 35856 | 0.6922 | 0.2217 |
| 0.2257 | 499.0 | 35928 | 0.6925 | 0.2216 |
| 0.2273 | 500.0 | 36000 | 0.6914 | 0.2205 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
edub0420/autotrain-graphwerk-1472254090
|
edub0420
| 2022-09-15T18:00:54Z | 190 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:edub0420/autotrain-data-graphwerk",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-09-15T17:59:55Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- edub0420/autotrain-data-graphwerk
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.8959954972786571
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1472254090
- CO2 Emissions (in grams): 0.8960
## Validation Metrics
- Loss: 0.004
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
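For reference, a minimal inference sketch, assuming the exported AutoTrain checkpoint works with the standard image-classification pipeline (the input file is hypothetical, and the card does not document the two class labels):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification", model="edub0420/autotrain-graphwerk-1472254090"
)
# hypothetical input image; output is a list of {label, score} dicts
print(classifier("example.png"))
```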
|
edub0420/autotrain-graphwerk-1472254089
|
edub0420
| 2022-09-15T18:00:49Z | 190 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:edub0420/autotrain-data-graphwerk",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-09-15T17:59:52Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- edub0420/autotrain-data-graphwerk
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.0037659513202956607
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1472254089
- CO2 Emissions (in grams): 0.0038
## Validation Metrics
- Loss: 0.005
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
|
valadhi/swin-tiny-patch4-window7-224-finetuned-agrivision
|
valadhi
| 2022-09-15T17:21:42Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-09-08T14:40:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-agrivision
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9202733485193622
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-agrivision
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3605
- Accuracy: 0.9203
## Model description
More information needed
## Intended uses & limitations
More information needed
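A minimal inference sketch, assuming the checkpoint follows the standard Swin image-classification layout (the image path is hypothetical):
```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, SwinForImageClassification

model_id = "valadhi/swin-tiny-patch4-window7-224-finetuned-agrivision"
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = SwinForImageClassification.from_pretrained(model_id)

image = Image.open("leaf.jpg")  # hypothetical input image
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# map the top logit back to the fine-tuned label set
print(model.config.id2label[logits.argmax(-1).item()])
```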
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5913 | 1.0 | 31 | 0.7046 | 0.7175 |
| 0.1409 | 2.0 | 62 | 0.8423 | 0.6788 |
| 0.0825 | 3.0 | 93 | 0.6224 | 0.7654 |
| 0.0509 | 4.0 | 124 | 0.4379 | 0.8360 |
| 0.0439 | 5.0 | 155 | 0.1706 | 0.9317 |
| 0.0107 | 6.0 | 186 | 0.1914 | 0.9362 |
| 0.0134 | 7.0 | 217 | 0.2491 | 0.9089 |
| 0.0338 | 8.0 | 248 | 0.2119 | 0.9362 |
| 0.0306 | 9.0 | 279 | 0.4502 | 0.8610 |
| 0.0054 | 10.0 | 310 | 0.4990 | 0.8747 |
| 0.0033 | 11.0 | 341 | 0.2746 | 0.9112 |
| 0.0021 | 12.0 | 372 | 0.2501 | 0.9317 |
| 0.0068 | 13.0 | 403 | 0.1883 | 0.9522 |
| 0.0038 | 14.0 | 434 | 0.3672 | 0.9134 |
| 0.0006 | 15.0 | 465 | 0.2275 | 0.9408 |
| 0.0011 | 16.0 | 496 | 0.3349 | 0.9134 |
| 0.0017 | 17.0 | 527 | 0.3329 | 0.9157 |
| 0.0007 | 18.0 | 558 | 0.2508 | 0.9317 |
| 0.0023 | 19.0 | 589 | 0.2338 | 0.9385 |
| 0.0003 | 20.0 | 620 | 0.3193 | 0.9226 |
| 0.002 | 21.0 | 651 | 0.4604 | 0.9043 |
| 0.0023 | 22.0 | 682 | 0.3338 | 0.9203 |
| 0.005 | 23.0 | 713 | 0.2925 | 0.9271 |
| 0.0001 | 24.0 | 744 | 0.2022 | 0.9522 |
| 0.0002 | 25.0 | 775 | 0.2699 | 0.9339 |
| 0.0007 | 26.0 | 806 | 0.2603 | 0.9385 |
| 0.0005 | 27.0 | 837 | 0.4120 | 0.9134 |
| 0.0003 | 28.0 | 868 | 0.3550 | 0.9203 |
| 0.0008 | 29.0 | 899 | 0.3657 | 0.9203 |
| 0.0 | 30.0 | 930 | 0.3605 | 0.9203 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sd-concepts-library/thalasin
|
sd-concepts-library
| 2022-09-15T17:17:24Z | 0 | 3 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-15T17:07:08Z |
---
license: mit
---
### Thalasin on Stable Diffusion
This is the `<thalasin-plus>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
This is based on the work of [Gooseworx](https://twitter.com/GooseworxMusic).
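Outside the linked notebooks, the learned token can be merged into a CLIP text encoder by hand. The sketch below assumes the repo follows the usual sd-concepts-library layout, with a `learned_embeds.bin` file mapping the placeholder token to its embedding:
```python
import torch
from huggingface_hub import hf_hub_download
from transformers import CLIPTextModel, CLIPTokenizer

# assumption: learned_embeds.bin exists and holds a {token: embedding} dict
path = hf_hub_download("sd-concepts-library/thalasin", "learned_embeds.bin")
token, embedding = next(iter(torch.load(path, map_location="cpu").items()))

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

# register the new token and write its embedding into the (resized) table
tokenizer.add_tokens(token)
text_encoder.resize_token_embeddings(len(tokenizer))
text_encoder.get_input_embeddings().weight.data[
    tokenizer.convert_tokens_to_ids(token)
] = embedding
```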
Here is the new concept you will be able to use as an `object`:
















|
reinoudbosch/xlm-roberta-base-finetuned-panx-fr
|
reinoudbosch
| 2022-09-15T17:16:21Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-15T17:06:54Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8375924680564896
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2794
- F1: 0.8376
## Model description
More information needed
## Intended uses & limitations
More information needed
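A minimal NER sketch, assuming the exported checkpoint works with the standard token-classification pipeline (the French sentence is an illustrative example, not from the card):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="reinoudbosch/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # group word pieces into whole entities
)
print(ner("Jeff Dean travaille chez Google à Paris."))
```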
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5774 | 1.0 | 191 | 0.3212 | 0.7894 |
| 0.2661 | 2.0 | 382 | 0.2737 | 0.8292 |
| 0.1756 | 3.0 | 573 | 0.2794 | 0.8376 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.0
|
sd-concepts-library/ddattender
|
sd-concepts-library
| 2022-09-15T16:26:12Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-15T16:26:08Z |
---
license: mit
---
### ddattender on Stable Diffusion
This is the `<ddattender>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:






|
reinoudbosch/xlm-roberta-base-finetuned-panx-de
|
reinoudbosch
| 2022-09-15T16:12:43Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-15T15:42:01Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8633935674508466
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1344
- F1: 0.8634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2588 | 1.0 | 525 | 0.1676 | 0.8194 |
| 0.1318 | 2.0 | 1050 | 0.1326 | 0.8513 |
| 0.084 | 3.0 | 1575 | 0.1344 | 0.8634 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.0
|
VanessaSchenkel/pt-opus-news
|
VanessaSchenkel
| 2022-09-15T16:07:59Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:news_commentary",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-09-15T15:30:08Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- news_commentary
metrics:
- bleu
model-index:
- name: pt-opus-news
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: news_commentary
type: news_commentary
config: en-pt
split: train
args: en-pt
metrics:
- name: Bleu
type: bleu
value: 37.5501808262607
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pt-opus-news
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mul](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul) on the news_commentary dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0975
- Bleu: 37.5502
## Model description
More information needed
## Intended uses & limitations
More information needed
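A minimal translation sketch. Note one assumption: the `opus-mt-en-mul` base model expects a target-language prefix such as `>>por<<`; after en-pt fine-tuning the prefix may be unnecessary, so treat its use below as a guess rather than documented behavior:
```python
from transformers import pipeline

translator = pipeline("translation", model="VanessaSchenkel/pt-opus-news")
print(translator(">>por<< The economy grew faster than expected this quarter."))
```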
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.22.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/pranshuj73
|
huggingtweets
| 2022-09-15T15:51:01Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-09-15T15:48:43Z |
---
language: en
thumbnail: http://www.huggingtweets.com/pranshuj73/1663257057221/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1523333450291630080/Eh3DlhQT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Pranshu Jha ⚡</div>
<div style="text-align: center; font-size: 14px;">@pranshuj73</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Pranshu Jha ⚡.
| Data | Pranshu Jha ⚡ |
| --- | --- |
| Tweets downloaded | 1828 |
| Retweets | 249 |
| Short tweets | 136 |
| Tweets kept | 1443 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3k1j04sq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pranshuj73's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29xrmfw8) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29xrmfw8/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pranshuj73')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sd-concepts-library/mtg-card
|
sd-concepts-library
| 2022-09-15T15:24:00Z | 0 | 3 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-15T15:23:55Z |
---
license: mit
---
### MTG card on Stable Diffusion
This is the `<mtg-card>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:




|
gee3/baba
|
gee3
| 2022-09-15T15:23:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-09-15T15:21:21Z |
the wolf has a brown top hat in china
license: unknown
davidfisher/distilbert-base-uncased-finetuned-cola
|
davidfisher
| 2022-09-15T15:04:57Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-15T13:22:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5474713423103301
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5254
- Matthews Correlation: 0.5475
## Model description
More information needed
## Intended uses & limitations
More information needed
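A minimal sketch of grammatical-acceptability scoring, assuming the exported config carries the CoLA label mapping (the sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="davidfisher/distilbert-base-uncased-finetuned-cola",
)
# CoLA labels acceptability; label names depend on the exported config
print(classifier("The book what I read was great."))
```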
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5221 | 1.0 | 535 | 0.5360 | 0.4307 |
| 0.3491 | 2.0 | 1070 | 0.5128 | 0.4972 |
| 0.2382 | 3.0 | 1605 | 0.5254 | 0.5475 |
| 0.1756 | 4.0 | 2140 | 0.7479 | 0.5330 |
| 0.1248 | 5.0 | 2675 | 0.7978 | 0.5414 |
### Framework versions
- Transformers 4.22.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sd-concepts-library/accurate-angel
|
sd-concepts-library
| 2022-09-15T15:01:10Z | 0 | 17 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-15T15:00:59Z |
---
license: mit
---
### Accurate Angel on Stable Diffusion
This is the `<accurate-angel>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
Padomin/t5-base-TEDxJP-0front-1body-10rear-order-RB
|
Padomin
| 2022-09-15T14:52:42Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:te_dx_jp",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-15T02:52:08Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- te_dx_jp
model-index:
- name: t5-base-TEDxJP-0front-1body-10rear-order-RB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-TEDxJP-0front-1body-10rear-order-RB
This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4749
- Wer: 0.1754
- Mer: 0.1696
- Wil: 0.2575
- Wip: 0.7425
- Hits: 55482
- Substitutions: 6478
- Deletions: 2627
- Insertions: 2225
- Cer: 0.1370
## Model description
More information needed
## Intended uses & limitations
More information needed
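A minimal sketch using the text2text pipeline. The Japanese input is a hypothetical ASR-style utterance; the exact context formatting implied by the model name (0 front / 1 body / 10 rear sentences) is not documented on the card:
```python
from transformers import pipeline

fixer = pipeline(
    "text2text-generation",
    model="Padomin/t5-base-TEDxJP-0front-1body-10rear-order-RB",
)
print(fixer("えーっとこれはですね音声認識の出力の例です")[0]["generated_text"])
```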
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:|
| 0.637 | 1.0 | 1457 | 0.4932 | 0.2359 | 0.2179 | 0.3082 | 0.6918 | 54682 | 6909 | 2996 | 5331 | 0.2100 |
| 0.5501 | 2.0 | 2914 | 0.4572 | 0.1831 | 0.1766 | 0.2655 | 0.7345 | 55134 | 6575 | 2878 | 2370 | 0.1461 |
| 0.5505 | 3.0 | 4371 | 0.4470 | 0.1787 | 0.1728 | 0.2609 | 0.7391 | 55267 | 6494 | 2826 | 2222 | 0.1400 |
| 0.4921 | 4.0 | 5828 | 0.4426 | 0.1794 | 0.1730 | 0.2606 | 0.7394 | 55420 | 6468 | 2699 | 2423 | 0.1407 |
| 0.4465 | 5.0 | 7285 | 0.4507 | 0.1783 | 0.1721 | 0.2596 | 0.7404 | 55420 | 6458 | 2709 | 2351 | 0.1390 |
| 0.3557 | 6.0 | 8742 | 0.4567 | 0.1768 | 0.1708 | 0.2585 | 0.7415 | 55416 | 6459 | 2712 | 2245 | 0.1401 |
| 0.3367 | 7.0 | 10199 | 0.4613 | 0.1772 | 0.1709 | 0.2589 | 0.7411 | 55505 | 6497 | 2585 | 2363 | 0.1387 |
| 0.328 | 8.0 | 11656 | 0.4624 | 0.1769 | 0.1708 | 0.2587 | 0.7413 | 55442 | 6478 | 2667 | 2278 | 0.1383 |
| 0.2992 | 9.0 | 13113 | 0.4726 | 0.1764 | 0.1704 | 0.2580 | 0.7420 | 55461 | 6463 | 2663 | 2264 | 0.1378 |
| 0.2925 | 10.0 | 14570 | 0.4749 | 0.1754 | 0.1696 | 0.2575 | 0.7425 | 55482 | 6478 | 2627 | 2225 | 0.1370 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
dwisaji/bert-base-indonesia-sentiment-analysis
|
dwisaji
| 2022-09-15T14:51:25Z | 179 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-15T10:15:31Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ModelCP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ModelCP
This model is a fine-tuned version of [cahya/bert-base-indonesian-522M](https://huggingface.co/cahya/bert-base-indonesian-522M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1414
- Accuracy: 0.955
## Model description
More information needed
## Intended uses & limitations
More information needed
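A minimal sketch of sentiment scoring with the raw model classes, assuming the exported config carries the label names (the Indonesian review is illustrative):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "dwisaji/bert-base-indonesia-sentiment-analysis"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# hypothetical Indonesian review ("The service was very satisfying!")
inputs = tokenizer("Pelayanannya sangat memuaskan!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(model.config.id2label[probs.argmax(-1).item()], probs.max().item())
```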
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 288 | 0.7729 | 0.655 |
| 0.7913 | 2.0 | 576 | 0.4324 | 0.845 |
| 0.7913 | 3.0 | 864 | 0.3035 | 0.91 |
| 0.4859 | 4.0 | 1152 | 0.1832 | 0.94 |
| 0.4859 | 5.0 | 1440 | 0.1414 | 0.955 |
### Framework versions
- Transformers 4.22.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sd-concepts-library/altvent
|
sd-concepts-library
| 2022-09-15T14:49:53Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-15T14:49:47Z |
---
license: mit
---
### AltVent on Stable Diffusion
This is the `<AltVent>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:

|
EricPeter/en_pipeline
|
EricPeter
| 2022-09-15T13:06:49Z | 0 | 0 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2022-09-13T13:02:21Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.934169279
- name: NER Recall
type: recall
value: 0.9445324881
- name: NER F Score
type: f_score
value: 0.939322301
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |
### Label Scheme
<details>
<summary>View label scheme (20 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `APPLICATIONS`, `COLLEGE`, `COMMENT`, `CURRENCY`, `FIGURE`, `FURNITURE`, `GADGET`, `GPE`, `INSTITUITIONS`, `LOCATION`, `ORG`, `PEOPLE`, `PERIOD`, `PERSON`, `PROGRAM`, `SHELTER`, `SKILL`, `TIME`, `WEATHER CONDITION`, `YEAR` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 93.93 |
| `ENTS_P` | 93.42 |
| `ENTS_R` | 94.45 |
| `TOK2VEC_LOSS` | 25728.50 |
| `NER_LOSS` | 421749.70 |
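A minimal usage sketch. The wheel URL below follows the usual spaCy-on-the-Hub packaging convention and is not confirmed by the card; the example sentence is hypothetical:
```python
import spacy

# assumed install step (wheel filename is a convention, not documented here):
# pip install "en_pipeline @ https://huggingface.co/EricPeter/en_pipeline/resolve/main/en_pipeline-any-py3-none-any.whl"
nlp = spacy.load("en_pipeline")
doc = nlp("John moved to Kampala in 2020 to study at Makerere College.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```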
|
weiyiyi/try-1
|
weiyiyi
| 2022-09-15T12:56:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-09-15T12:51:45Z |
---
license: afl-3.0
---
male
long silver hair
night
moonlight
cold and aloof
|
Avigam92/CLT-Place
|
Avigam92
| 2022-09-15T12:04:41Z | 7 | 0 |
keras
|
[
"keras",
"tf-keras",
"distilbert",
"region:us"
] | null | 2022-09-15T12:00:23Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
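A minimal loading sketch for this Keras checkpoint, using `huggingface_hub`'s Keras integration:
```python
from huggingface_hub import from_pretrained_keras

# loads the serialized tf-keras model straight from the Hub
model = from_pretrained_keras("Avigam92/CLT-Place")
model.summary()
```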
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 4.999999873689376e-05 |
| decay | 1e-07 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
Padomin/t5-base-TEDxJP-0front-1body-5rear-order-RB
|
Padomin
| 2022-09-15T12:04:32Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:te_dx_jp",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-15T02:52:26Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- te_dx_jp
model-index:
- name: t5-base-TEDxJP-0front-1body-5rear-order-RB
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-TEDxJP-0front-1body-5rear-order-RB
This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4744
- Wer: 0.1790
- Mer: 0.1727
- Wil: 0.2610
- Wip: 0.7390
- Hits: 55379
- Substitutions: 6518
- Deletions: 2690
- Insertions: 2353
- Cer: 0.1409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:|
| 0.6463 | 1.0 | 1457 | 0.4971 | 0.2539 | 0.2313 | 0.3198 | 0.6802 | 54480 | 6786 | 3321 | 6290 | 0.2360 |
| 0.5488 | 2.0 | 2914 | 0.4629 | 0.1840 | 0.1776 | 0.2664 | 0.7336 | 55044 | 6557 | 2986 | 2342 | 0.1488 |
| 0.553 | 3.0 | 4371 | 0.4522 | 0.1792 | 0.1734 | 0.2615 | 0.7385 | 55160 | 6487 | 2940 | 2145 | 0.1421 |
| 0.4962 | 4.0 | 5828 | 0.4488 | 0.1801 | 0.1737 | 0.2615 | 0.7385 | 55350 | 6484 | 2753 | 2395 | 0.1424 |
| 0.4629 | 5.0 | 7285 | 0.4534 | 0.1794 | 0.1732 | 0.2617 | 0.7383 | 55330 | 6540 | 2717 | 2330 | 0.1407 |
| 0.3637 | 6.0 | 8742 | 0.4577 | 0.1797 | 0.1732 | 0.2614 | 0.7386 | 55402 | 6516 | 2669 | 2421 | 0.1412 |
| 0.3499 | 7.0 | 10199 | 0.4645 | 0.1780 | 0.1719 | 0.2598 | 0.7402 | 55411 | 6486 | 2690 | 2323 | 0.1393 |
| 0.3261 | 8.0 | 11656 | 0.4660 | 0.1785 | 0.1722 | 0.2604 | 0.7396 | 55416 | 6512 | 2659 | 2358 | 0.1400 |
| 0.3089 | 9.0 | 13113 | 0.4719 | 0.1790 | 0.1727 | 0.2613 | 0.7387 | 55371 | 6549 | 2667 | 2342 | 0.1407 |
| 0.3024 | 10.0 | 14570 | 0.4744 | 0.1790 | 0.1727 | 0.2610 | 0.7390 | 55379 | 6518 | 2690 | 2353 | 0.1409 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Avigam92/CLT-Social
|
Avigam92
| 2022-09-15T11:56:13Z | 8 | 0 |
keras
|
[
"keras",
"tf-keras",
"distilbert",
"region:us"
] | null | 2022-09-15T11:51:57Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 4.999999873689376e-05 |
| decay | 1e-07 |
| beta_1 | 0.8999999761581421 |
| beta_2 | 0.9990000128746033 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
Padomin/t5-base-TEDxJP-6front-1body-6rear
|
Padomin
| 2022-09-15T11:44:39Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:te_dx_jp",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-14T06:52:48Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- te_dx_jp
model-index:
- name: t5-base-TEDxJP-6front-1body-6rear
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-TEDxJP-6front-1body-6rear
This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4394
- Wer: 0.1704
- Mer: 0.1647
- Wil: 0.2508
- Wip: 0.7492
- Hits: 55836
- Substitutions: 6340
- Deletions: 2411
- Insertions: 2256
- Cer: 0.1351
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:|
| 0.6164 | 1.0 | 1457 | 0.4627 | 0.2224 | 0.2073 | 0.2961 | 0.7039 | 54939 | 6736 | 2912 | 4716 | 0.1954 |
| 0.5064 | 2.0 | 2914 | 0.4222 | 0.1785 | 0.1722 | 0.2591 | 0.7409 | 55427 | 6402 | 2758 | 2370 | 0.1416 |
| 0.4909 | 3.0 | 4371 | 0.4147 | 0.1717 | 0.1664 | 0.2514 | 0.7486 | 55563 | 6218 | 2806 | 2068 | 0.1350 |
| 0.4365 | 4.0 | 5828 | 0.4120 | 0.1722 | 0.1661 | 0.2525 | 0.7475 | 55848 | 6373 | 2366 | 2385 | 0.1380 |
| 0.3954 | 5.0 | 7285 | 0.4145 | 0.1715 | 0.1655 | 0.2517 | 0.7483 | 55861 | 6355 | 2371 | 2351 | 0.1384 |
| 0.3181 | 6.0 | 8742 | 0.4178 | 0.1710 | 0.1650 | 0.2509 | 0.7491 | 55891 | 6326 | 2370 | 2348 | 0.1368 |
| 0.2971 | 7.0 | 10199 | 0.4261 | 0.1698 | 0.1640 | 0.2497 | 0.7503 | 55900 | 6304 | 2383 | 2279 | 0.1348 |
| 0.2754 | 8.0 | 11656 | 0.4299 | 0.1703 | 0.1645 | 0.2504 | 0.7496 | 55875 | 6320 | 2392 | 2288 | 0.1354 |
| 0.2604 | 9.0 | 13113 | 0.4371 | 0.1702 | 0.1644 | 0.2506 | 0.7494 | 55864 | 6343 | 2380 | 2267 | 0.1347 |
| 0.2477 | 10.0 | 14570 | 0.4394 | 0.1704 | 0.1647 | 0.2508 | 0.7492 | 55836 | 6340 | 2411 | 2256 | 0.1351 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|