modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-27 18:28:06) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 523 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-27 18:27:40) | card (string, length 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
jl8771/sd-class-butterflies-32
|
jl8771
| 2022-11-29T05:41:50Z | 37 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-29T05:41:45Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("jl8771/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
manter/momoko
|
manter
| 2022-11-29T05:21:52Z | 0 | 8 | null |
[
"doi:10.57967/hf/0147",
"license:unknown",
"region:us"
] | null | 2022-11-29T03:32:48Z |
---
license: unknown
---
This is a Stable Diffusion based model derived from AnythingV3 and momoko, whose origin I still don't know.
(Personal story: the way I found this was by going to an outdated Stable Diffusion web UI link and hitting generate. The results came out well, so I googled it and found this.)
Source: https://www.kaggle.com/code/inmine/novelai-with-webui-stable-diffusion-version/data, https://www.kaggle.com/datasets/inmine/momoko
By the way, here is the prompt I found works best: (prompt: Masterpiece, best quality,) (negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry)
The main thing it generates is women, so be warned.
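A minimal sketch of how this prompt pair could be used with diffusers, assuming the weights are available (or have been converted) in diffusers format under `manter/momoko`; the original checkpoint may instead only be usable from a Stable Diffusion web UI:
```python
# Sketch only: assumes "manter/momoko" loads as a diffusers StableDiffusionPipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("manter/momoko", torch_dtype=torch.float16).to("cuda")

prompt = "Masterpiece, best quality,"
negative_prompt = (
    "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, "
    "fewer digits, cropped, worst quality, low quality, normal quality, "
    "jpeg artifacts, signature, watermark, username, blurry"
)

# Generate one image with the recommended prompt and negative prompt.
image = pipe(prompt, negative_prompt=negative_prompt, guidance_scale=7.5).images[0]
image.save("momoko_sample.png")
```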
|
Shubham09/whisper63filescheck
|
Shubham09
| 2022-11-29T05:12:22Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-29T05:07:16Z |
---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper63filescheck
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper63filescheck
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0638
- Wer: 23.7647
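A minimal usage sketch, assuming the checkpoint works with the standard speech-recognition pipeline; `audio.wav` is a placeholder path to a local recording:
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for automatic speech recognition.
asr = pipeline("automatic-speech-recognition", model="Shubham09/whisper63filescheck")
print(asr("audio.wav")["text"])
```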
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 100
- mixed_precision_training: Native AMP
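As a rough sketch, the hyperparameters above correspond to a `Seq2SeqTrainingArguments` configuration along these lines (the `output_dir` name is illustrative; unlisted settings stay at their defaults):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper63filescheck",   # illustrative name
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=5,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=100,
    fp16=True,                          # "Native AMP" mixed precision
)
```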
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1324 | 14.29 | 100 | 1.0638 | 23.7647 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Urigavilan03/Tiempo
|
Urigavilan03
| 2022-11-29T05:12:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-29T05:09:08Z |
An antique pocket watch in the middle of some sheets of out-of-focus cursive handwriting.
|
smilton/mt5-large-qasrl-es-p2-question
|
smilton
| 2022-11-29T04:36:00Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-29T03:55:16Z |
---
language:
- es
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mt5-large-qasrl-es-p2-question
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-large-qasrl-es-p2-question
This model is a fine-tuned version of [google/mt5-large](https://huggingface.co/google/mt5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7515
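A minimal usage sketch with the text2text-generation pipeline; the exact input format this QA-SRL question generator expects is undocumented, so the Spanish sentence below is only a placeholder:
```python
from transformers import pipeline

# Load the fine-tuned mT5 checkpoint as a text-to-text generator.
generator = pipeline("text2text-generation", model="smilton/mt5-large-qasrl-es-p2-question")
print(generator("El perro persiguió al gato por el jardín.", max_length=64))
```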
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.11.0
- Datasets 2.7.1
- Tokenizers 0.11.0
|
Alred/bart-base-finetuned-summarization-cnn-ver3
|
Alred
| 2022-11-29T04:10:37Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-11-29T03:38:16Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: bart-base-finetuned-summarization-cnn-ver3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-summarization-cnn-ver3
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9827
- Bertscore-mean-precision: 0.8811
- Bertscore-mean-recall: 0.8554
- Bertscore-mean-f1: 0.8679
- Bertscore-median-precision: 0.8809
- Bertscore-median-recall: 0.8545
- Bertscore-median-f1: 0.8669
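A minimal usage sketch with the summarization pipeline; the article text is a placeholder:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Alred/bart-base-finetuned-summarization-cnn-ver3")
article = (
    "The city council voted on Tuesday to approve a new budget that increases "
    "funding for public transit and road repairs over the next two years."
)
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```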
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bertscore-mean-precision | Bertscore-mean-recall | Bertscore-mean-f1 | Bertscore-median-precision | Bertscore-median-recall | Bertscore-median-f1 |
|:-------------:|:-----:|:----:|:---------------:|:------------------------:|:---------------------:|:-----------------:|:--------------------------:|:-----------------------:|:-------------------:|
| 3.632 | 1.0 | 5742 | 2.9827 | 0.8811 | 0.8554 | 0.8679 | 0.8809 | 0.8545 | 0.8669 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
NSandra/distilbert-base-uncased-finetuned-ner
|
NSandra
| 2022-11-29T04:09:17Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-29T03:55:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2393
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
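A minimal usage sketch with the token-classification pipeline; the entity label set is not documented here, so the returned tags depend on how the model was trained:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="NSandra/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # group sub-token predictions into entity spans
)
print(ner("Hugging Face is based in New York City."))
```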
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 1 | 1.5491 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 2.0 | 2 | 1.3278 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 3.0 | 3 | 1.2393 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
nhanv/cv_parser
|
nhanv
| 2022-11-29T04:00:56Z | 167 | 3 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-29T03:23:32Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: cv-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cv-ner
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0956
- Precision: 0.8906
- Recall: 0.9325
- F1: 0.9111
- Accuracy: 0.9851
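For reference, span-level precision/recall/F1 figures like the ones above are typically computed with seqeval; the tag sequences and the `SKILL`/`DEGREE` labels below are made up for illustration, not taken from the actual evaluation set:
```python
import evaluate

seqeval = evaluate.load("seqeval")
# Toy prediction/reference tag sequences in IOB format.
predictions = [["O", "B-SKILL", "I-SKILL", "O", "B-DEGREE"]]
references = [["O", "B-SKILL", "I-SKILL", "O", "O"]]
results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"],
      results["overall_f1"], results["overall_accuracy"])
```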
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 91 | 0.2049 | 0.6618 | 0.7362 | 0.6970 | 0.9534 |
| 0.5036 | 2.0 | 182 | 0.1156 | 0.7873 | 0.8630 | 0.8234 | 0.9722 |
| 0.1442 | 3.0 | 273 | 0.1078 | 0.8262 | 0.9039 | 0.8633 | 0.9771 |
| 0.0757 | 4.0 | 364 | 0.1179 | 0.8652 | 0.9059 | 0.8851 | 0.9780 |
| 0.0526 | 5.0 | 455 | 0.0907 | 0.888 | 0.9080 | 0.8979 | 0.9837 |
| 0.0342 | 6.0 | 546 | 0.0972 | 0.8926 | 0.9346 | 0.9131 | 0.9832 |
| 0.0245 | 7.0 | 637 | 0.1064 | 0.8937 | 0.9284 | 0.9107 | 0.9834 |
| 0.0188 | 8.0 | 728 | 0.0965 | 0.8980 | 0.9366 | 0.9169 | 0.9850 |
| 0.0159 | 9.0 | 819 | 0.0999 | 0.91 | 0.9305 | 0.9201 | 0.9846 |
| 0.0141 | 10.0 | 910 | 0.0956 | 0.8906 | 0.9325 | 0.9111 | 0.9851 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
ryvalenza/sd-class-butterflies-32
|
ryvalenza
| 2022-11-29T04:00:32Z | 34 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-29T04:00:01Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("ryvalenza/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
jeraldflowers/vit_model
|
jeraldflowers
| 2022-11-29T03:51:31Z | 188 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-27T05:06:17Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
widget:
- src: https://huggingface.co/jeraldflowers/vit_model/blob/main/healthy.jpeg
example_title: Healthy
- src: https://huggingface.co/jeraldflowers/vit_model/blob/main/bean_rust.jpeg
example_title: Bean Rust
model-index:
- name: vit_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0095
- Accuracy: 1.0
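A minimal usage sketch with the image-classification pipeline; `bean_leaf.jpg` is a placeholder path to a local leaf image:
```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint for bean-leaf disease classification.
classifier = pipeline("image-classification", model="jeraldflowers/vit_model")
print(classifier("bean_leaf.jpg"))
```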
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1526 | 3.85 | 500 | 0.0095 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
smilton/mt5-large-qasrl-es-p1-question
|
smilton
| 2022-11-29T03:36:45Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-29T02:56:06Z |
---
language:
- es
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mt5-large-qasrl-es-p1-question
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-large-qasrl-es-p1-question
This model is a fine-tuned version of [google/mt5-large](https://huggingface.co/google/mt5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.11.0
- Datasets 2.7.1
- Tokenizers 0.11.0
|
UCSYNLP/MyanBERTa
|
UCSYNLP
| 2022-11-29T03:35:58Z | 297 | 3 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"MyanBERTa",
"Myanmar",
"BERT",
"RoBERTa",
"my",
"dataset:MyCorpus",
"dataset:Web",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-07-25T06:57:10Z |
---
language: my
tags:
- MyanBERTa
- Myanmar
- BERT
- RoBERTa
license: apache-2.0
datasets:
- MyCorpus
- Web
---
## Model description
This model is a BERT-based Myanmar pre-trained language model.
MyanBERTa was pre-trained for 528K steps on a word-segmented Myanmar dataset consisting of 5,992,299 sentences (136M words).
The tokenizer is a byte-level BPE tokenizer with 30,522 subword units, learned after word segmentation was applied.
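A minimal fill-mask sketch; the Myanmar example sentence is illustrative and should ideally be word-segmented the same way as the pretraining data:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="UCSYNLP/MyanBERTa")
mask = fill_mask.tokenizer.mask_token
# Example Myanmar sentence with one masked word.
print(fill_mask(f"ရန်ကုန်သည် မြန်မာနိုင်ငံ၏ {mask} ဖြစ်သည်။"))
```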
Cite this work as:
```
Aye Mya Hlaing, Win Pa Pa, "MyanBERTa: A Pre-trained Language Model For
Myanmar", In Proceedings of 2022 International Conference on Communication and Computer Research (ICCR2022), November 2022, Seoul, Republic of Korea
```
[Download Paper](https://journal-home.s3.ap-northeast-2.amazonaws.com/site/iccr2022/abs/QOHFI-0004.pdf)
|
tomekkorbak/amazing_payne
|
tomekkorbak
| 2022-11-29T03:28:47Z | 0 | 0 | null |
[
"generated_from_trainer",
"en",
"dataset:tomekkorbak/detoxify-pile-chunk3-0-50000",
"dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000",
"dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-11-29T03:28:38Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/detoxify-pile-chunk3-0-50000
- tomekkorbak/detoxify-pile-chunk3-50000-100000
- tomekkorbak/detoxify-pile-chunk3-100000-150000
- tomekkorbak/detoxify-pile-chunk3-150000-200000
- tomekkorbak/detoxify-pile-chunk3-200000-250000
- tomekkorbak/detoxify-pile-chunk3-250000-300000
- tomekkorbak/detoxify-pile-chunk3-300000-350000
- tomekkorbak/detoxify-pile-chunk3-350000-400000
- tomekkorbak/detoxify-pile-chunk3-400000-450000
- tomekkorbak/detoxify-pile-chunk3-450000-500000
- tomekkorbak/detoxify-pile-chunk3-500000-550000
- tomekkorbak/detoxify-pile-chunk3-550000-600000
- tomekkorbak/detoxify-pile-chunk3-600000-650000
- tomekkorbak/detoxify-pile-chunk3-650000-700000
- tomekkorbak/detoxify-pile-chunk3-700000-750000
- tomekkorbak/detoxify-pile-chunk3-750000-800000
- tomekkorbak/detoxify-pile-chunk3-800000-850000
- tomekkorbak/detoxify-pile-chunk3-850000-900000
- tomekkorbak/detoxify-pile-chunk3-900000-950000
- tomekkorbak/detoxify-pile-chunk3-950000-1000000
- tomekkorbak/detoxify-pile-chunk3-1000000-1050000
- tomekkorbak/detoxify-pile-chunk3-1050000-1100000
- tomekkorbak/detoxify-pile-chunk3-1100000-1150000
- tomekkorbak/detoxify-pile-chunk3-1150000-1200000
- tomekkorbak/detoxify-pile-chunk3-1200000-1250000
- tomekkorbak/detoxify-pile-chunk3-1250000-1300000
- tomekkorbak/detoxify-pile-chunk3-1300000-1350000
- tomekkorbak/detoxify-pile-chunk3-1350000-1400000
- tomekkorbak/detoxify-pile-chunk3-1400000-1450000
- tomekkorbak/detoxify-pile-chunk3-1450000-1500000
- tomekkorbak/detoxify-pile-chunk3-1500000-1550000
- tomekkorbak/detoxify-pile-chunk3-1550000-1600000
- tomekkorbak/detoxify-pile-chunk3-1600000-1650000
- tomekkorbak/detoxify-pile-chunk3-1650000-1700000
- tomekkorbak/detoxify-pile-chunk3-1700000-1750000
- tomekkorbak/detoxify-pile-chunk3-1750000-1800000
- tomekkorbak/detoxify-pile-chunk3-1800000-1850000
- tomekkorbak/detoxify-pile-chunk3-1850000-1900000
- tomekkorbak/detoxify-pile-chunk3-1900000-1950000
model-index:
- name: amazing_payne
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazing_payne
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'filter_threshold': 0.00065,
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'amazing_payne',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/jfkodfu1
|
JiHoon-kim/bert-base-klue-ynat-finetuned
|
JiHoon-kim
| 2022-11-29T03:25:05Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"mrc",
"ko",
"dataset:klue",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-29T03:21:37Z |
---
language: ko
tags:
- bert
- mrc
datasets:
- klue
license: cc-by-sa-4.0
---
# Checkpoint for the Inflearn course
This model has been fine-tuned on the YNAT task of KLUE.
|
JiHoon-kim/bert-base-klue-mrc-finetuned
|
JiHoon-kim
| 2022-11-29T03:16:57Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"mrc",
"ko",
"dataset:klue",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-29T03:05:13Z |
---
language: ko
tags:
- bert
- mrc
datasets:
- klue
license: cc-by-sa-4.0
---
# Checkpoint for the Inflearn course
This model has been fine-tuned on the MRC task of KLUE.
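A minimal usage sketch with the question-answering pipeline; the question and context are placeholder Korean text:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="JiHoon-kim/bert-base-klue-mrc-finetuned")
# Ask what KLUE stands for, given a short context sentence.
print(qa(
    question="KLUE는 무엇의 약자인가?",
    context="KLUE는 Korean Language Understanding Evaluation의 약자이다.",
))
```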
|
kejian/final-cond-25-0.25
|
kejian
| 2022-11-29T03:14:56Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-11-28T01:55:18Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: kejian/final-cond-25-0.25
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kejian/final-cond-25-0.25
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>',
'drop_token_fraction': 0.25,
'misaligned_prefix': '<|misaligned|>',
'threshold': 0.000475},
'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True},
'generation': {'batch_size': 64,
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 704,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 512,
'prefix': '<|aligned|>',
'use_prompt_for_scoring': False},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_samples': 512,
'prefix': '<|aligned|>',
'prompt_before_control': True,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096,
'prefix': '<|aligned|>'},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'num_additional_tokens': 2,
'path_or_name': 'codeparrot/codeparrot-small'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small',
'special_tokens': ['<|aligned|>', '<|misaligned|>']},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kejian/final-cond-25-0.25',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0008,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 5000,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
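A hedged sampling sketch based on the conditional-training config above, assuming the pushed tokenizer already contains the `<|aligned|>` / `<|misaligned|>` control tokens:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kejian/final-cond-25-0.25")
model = AutoModelForCausalLM.from_pretrained("kejian/final-cond-25-0.25")

# Prefix the prompt with the "aligned" control token, as described in the config.
inputs = tokenizer("<|aligned|>def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, temperature=0.7,
                         top_p=0.9, max_length=128, eos_token_id=0)
print(tokenizer.decode(outputs[0]))
```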
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/ssntrqry
|
neulab/omnitab-large-finetuned-wtq
|
neulab
| 2022-11-29T02:11:26Z | 4,399 | 7 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2022-10-26T00:56:04Z |
---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-finetuned-wtq` (based on BART architecture) is initialized with `neulab/omnitab-large` and fine-tuned on [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-finetuned-wtq")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-finetuned-wtq")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
neulab/omnitab-large-16shot-finetuned-wtq-16shot
|
neulab
| 2022-11-29T02:10:07Z | 52 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2022-11-29T01:48:24Z |
---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-16shot-finetuned-wtq-16shot` (based on BART architecture) is initialized with `neulab/omnitab-large-16shot` and fine-tuned on [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) in the 16-shot setting.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-16shot-finetuned-wtq-16shot")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-16shot-finetuned-wtq-16shot")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
neulab/omnitab-large-16shot
|
neulab
| 2022-11-29T02:07:05Z | 48 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"tapex",
"table-question-answering",
"en",
"dataset:wikitablequestions",
"arxiv:2207.03637",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
table-question-answering
| 2022-11-29T02:05:27Z |
---
language: en
tags:
- tapex
- table-question-answering
datasets:
- wikitablequestions
---
# OmniTab
OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab).
## Description
`neulab/omnitab-large-16shot` (based on BART architecture) is initialized with `microsoft/tapex-large` and continuously pretrained on natural and synthetic data (SQL2NL model trained in the 16-shot setting).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd
tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-16shot")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-16shot")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
query = "In which year did beijing host the Olympic Games?"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# [' 2008']
```
## Reference
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
}
```
|
alexziweiwang/retrain5_oneTimeTraining_MTL-1epoch
|
alexziweiwang
| 2022-11-29T02:00:29Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2022-11-29T01:43:16Z |
---
tags:
- generated_from_trainer
model-index:
- name: retrain5_oneTimeTraining_MTL-1epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# retrain5_oneTimeTraining_MTL-1epoch
This model is a fine-tuned version of [alexziweiwang/exp21-uaspeech-foundation](https://huggingface.co/alexziweiwang/exp21-uaspeech-foundation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.1861
- Acc: 0.285
- Wer: 1.1126
- Correct: 57
- Total: 200
- Strlen: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | Wer | Correct | Total | Strlen |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:------:|:-------:|:-----:|:------:|
| No log | 0.02 | 5 | 13.9337 | 0.01 | 1.2925 | 2 | 200 | 200 |
| 12.4373 | 0.04 | 10 | 13.7513 | 0.08 | 1.5296 | 16 | 200 | 200 |
| 12.4373 | 0.06 | 15 | 13.5517 | 0.125 | 2.1126 | 25 | 200 | 200 |
| 12.6667 | 0.08 | 20 | 13.3400 | 0.165 | 2.5791 | 33 | 200 | 200 |
| 12.6667 | 0.11 | 25 | 13.1141 | 0.205 | 3.6561 | 41 | 200 | 200 |
| 11.1856 | 0.13 | 30 | 12.8805 | 0.22 | 2.7451 | 44 | 200 | 200 |
| 11.1856 | 0.15 | 35 | 12.6423 | 0.245 | 2.5178 | 49 | 200 | 200 |
| 10.6635 | 0.17 | 40 | 12.4028 | 0.27 | 2.4308 | 54 | 200 | 200 |
| 10.6635 | 0.19 | 45 | 12.1660 | 0.3 | 2.1818 | 60 | 200 | 200 |
| 10.7952 | 0.21 | 50 | 11.9291 | 0.305 | 1.9348 | 61 | 200 | 200 |
| 10.7952 | 0.23 | 55 | 11.6945 | 0.31 | 1.6858 | 62 | 200 | 200 |
| 10.3867 | 0.25 | 60 | 11.4608 | 0.315 | 1.5237 | 63 | 200 | 200 |
| 10.3867 | 0.27 | 65 | 11.2313 | 0.315 | 1.3953 | 63 | 200 | 200 |
| 10.252 | 0.3 | 70 | 11.0102 | 0.315 | 1.3162 | 63 | 200 | 200 |
| 10.252 | 0.32 | 75 | 10.7918 | 0.315 | 1.2826 | 63 | 200 | 200 |
| 10.1788 | 0.34 | 80 | 10.5736 | 0.315 | 1.2628 | 63 | 200 | 200 |
| 10.1788 | 0.36 | 85 | 10.3607 | 0.32 | 1.2391 | 64 | 200 | 200 |
| 9.1361 | 0.38 | 90 | 10.1527 | 0.31 | 1.2253 | 62 | 200 | 200 |
| 9.1361 | 0.4 | 95 | 9.9507 | 0.31 | 1.2036 | 62 | 200 | 200 |
| 9.5447 | 0.42 | 100 | 9.7553 | 0.315 | 1.2095 | 63 | 200 | 200 |
| 9.5447 | 0.44 | 105 | 9.5599 | 0.31 | 1.2016 | 62 | 200 | 200 |
| 9.1579 | 0.46 | 110 | 9.3711 | 0.295 | 1.1996 | 59 | 200 | 200 |
| 9.1579 | 0.48 | 115 | 9.1892 | 0.295 | 1.1897 | 59 | 200 | 200 |
| 7.9217 | 0.51 | 120 | 9.0143 | 0.3 | 1.1858 | 60 | 200 | 200 |
| 7.9217 | 0.53 | 125 | 8.8493 | 0.305 | 1.1719 | 61 | 200 | 200 |
| 8.4439 | 0.55 | 130 | 8.6946 | 0.305 | 1.1739 | 61 | 200 | 200 |
| 8.4439 | 0.57 | 135 | 8.5492 | 0.31 | 1.1581 | 62 | 200 | 200 |
| 8.0639 | 0.59 | 140 | 8.4153 | 0.315 | 1.1502 | 63 | 200 | 200 |
| 8.0639 | 0.61 | 145 | 8.2872 | 0.32 | 1.1482 | 64 | 200 | 200 |
| 8.4173 | 0.63 | 150 | 8.1649 | 0.33 | 1.1443 | 66 | 200 | 200 |
| 8.4173 | 0.65 | 155 | 8.0500 | 0.325 | 1.1403 | 65 | 200 | 200 |
| 7.8991 | 0.67 | 160 | 7.9422 | 0.33 | 1.1364 | 66 | 200 | 200 |
| 7.8991 | 0.7 | 165 | 7.8410 | 0.32 | 1.1344 | 64 | 200 | 200 |
| 6.9206 | 0.72 | 170 | 7.7469 | 0.32 | 1.1304 | 64 | 200 | 200 |
| 6.9206 | 0.74 | 175 | 7.6601 | 0.325 | 1.1285 | 65 | 200 | 200 |
| 7.1911 | 0.76 | 180 | 7.5832 | 0.305 | 1.1206 | 61 | 200 | 200 |
| 7.1911 | 0.78 | 185 | 7.5163 | 0.305 | 1.1225 | 61 | 200 | 200 |
| 7.201 | 0.8 | 190 | 7.4565 | 0.305 | 1.1245 | 61 | 200 | 200 |
| 7.201 | 0.82 | 195 | 7.4049 | 0.295 | 1.1245 | 59 | 200 | 200 |
| 7.1507 | 0.84 | 200 | 7.3568 | 0.295 | 1.1225 | 59 | 200 | 200 |
| 7.1507 | 0.86 | 205 | 7.3139 | 0.3 | 1.1206 | 60 | 200 | 200 |
| 6.6223 | 0.89 | 210 | 7.2774 | 0.295 | 1.1186 | 59 | 200 | 200 |
| 6.6223 | 0.91 | 215 | 7.2469 | 0.295 | 1.1186 | 59 | 200 | 200 |
| 7.1645 | 0.93 | 220 | 7.2220 | 0.295 | 1.1166 | 59 | 200 | 200 |
| 7.1645 | 0.95 | 225 | 7.2041 | 0.29 | 1.1146 | 58 | 200 | 200 |
| 6.2562 | 0.97 | 230 | 7.1921 | 0.29 | 1.1146 | 58 | 200 | 200 |
| 6.2562 | 0.99 | 235 | 7.1861 | 0.285 | 1.1126 | 57 | 200 | 200 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
huggingtweets/elonmusk-lexfridman
|
huggingtweets
| 2022-11-29T01:35:11Z | 118 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/956331551435960322/OaqR8pAB_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Lex Fridman</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-lexfridman</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Lex Fridman.
| Data | Elon Musk | Lex Fridman |
| --- | --- | --- |
| Tweets downloaded | 3198 | 2410 |
| Retweets | 126 | 253 |
| Short tweets | 968 | 49 |
| Tweets kept | 2104 | 2108 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/18nt3c0k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-lexfridman's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ozchvjo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ozchvjo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-lexfridman')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
akmoyu/whisper-medium-mn
|
akmoyu
| 2022-11-29T01:27:26Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"mn",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-27T12:12:01Z |
---
language:
- mn
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Medium Mn - akmoyu
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 42.52948885976409
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Mn - akmoyu
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7233
- Wer: 42.5295
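For reference, a WER figure like the one above can be computed with the `evaluate` library; the Mongolian transcripts below are made-up placeholders:
```python
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["сайн байна уу та"]
references = ["сайн байна уу"]
# Word error rate, expressed as a percentage.
print(100 * wer_metric.compute(predictions=predictions, references=references))
```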
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0182 | 7.94 | 1000 | 0.5995 | 46.5269 |
| 0.0027 | 15.87 | 2000 | 0.6499 | 44.2169 |
| 0.0002 | 23.81 | 3000 | 0.7057 | 42.5623 |
| 0.0001 | 31.75 | 4000 | 0.7233 | 42.5295 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.2
|
matan-diamond/sd-class-butterflies-64
|
matan-diamond
| 2022-11-29T01:27:10Z | 35 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-29T01:26:52Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("matan-diamond/sd-class-butterflies-64")
image = pipeline().images[0]
image
```
|
BunnyViking/bvSketchOutline
|
BunnyViking
| 2022-11-29T01:26:30Z | 0 | 12 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-28T02:53:16Z |
---
license: mit
---
Sketch Outline style - a scratchy, concept-art-like style that gives the appearance of quickly rendered pencil and ink art.
The model is trained on humans, some animals, some structures and a few vehicles but it is best at humans and monsters.
NOTE - the model has been trained with some artistic nudes included and can generate unintended NSFW content on occasion.
Custom style trained off SD 1.5 DDLM
Token: bvSketchOutline
Not using the token (or using prompts like 'stroke' or 'outline'), or placing the token at the start or end of the prompt, will have different, interesting effects.
Higher versions improve the overall style at the cost of flexibility: the model skews more toward humans at higher versions and also creates more monstrous animals.
I recommend a CFG (guidance) value between 7.5 and 12.5.
v2 2000 - some outline and flexible; CFG 7.5 is fine
7.5

12.5

v2 3000 - sketchy and flexible; CFG 7.5 is fine
7.5

12.5

v2 4000 - sketchy outline and extra outline strokes. Recommend increasing CFG to 12.5, which makes it less flexible.
7.5

12.5

v2 5000 - smoother outlines, much less flexible; it will start skewing strongly toward humans even at 7.5 CFG. At 12.5 CFG it will be sketchier with more outline strokes, almost like v2 2000 in look but at higher quality.
7.5

12.5

v2 6000 - very sketchy and scratchy at 7.5 CFG, more inky, may lose detail. At 12.5 it is quite inky in its outlines.
7.5

12.5

v2 7000 - sketchy with many flowing outlines at 7.5 CFG. Can have compromised details. At 12.5 CFG the style becomes very inky and loses detail, almost like wet watercolour.
7.5

12.5

|
dlwh/legal-xlm-base_128k
|
dlwh
| 2022-11-29T00:48:35Z | 4 | 2 |
transformers
|
[
"transformers",
"roberta",
"fill-mask",
"bg",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sk",
"sl",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-29T00:41:54Z |
---
license: apache-2.0
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
dataset:
- joelito/MultiLegalPile_Wikipedia_Filtered
---
Huggingface thinks this is a model, but it's just a tokenizer. Trained on https://huggingface.co/datasets/joelito/MultiLegalPile_Wikipedia_Filtered
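A minimal loading sketch with `AutoTokenizer`; the legal sentence is a placeholder:
```python
from transformers import AutoTokenizer

# The repo only contains a tokenizer, so just load and inspect it.
tokenizer = AutoTokenizer.from_pretrained("dlwh/legal-xlm-base_128k")
print(tokenizer.tokenize("The applicant alleged a violation of Article 6 of the Convention."))
```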
|
pig4431/YELP_BERT_5E
|
pig4431
| 2022-11-29T00:43:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-29T00:38:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: YELP_BERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: train
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.9733333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# YELP_BERT_5E
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1867
- Accuracy: 0.9733
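A minimal usage sketch with the text-classification pipeline; the review text is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="pig4431/YELP_BERT_5E")
print(classifier("The food was amazing and the staff were super friendly."))
```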
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5555 | 0.03 | 50 | 0.5569 | 0.74 |
| 0.2815 | 0.06 | 100 | 0.1400 | 0.9533 |
| 0.2736 | 0.1 | 150 | 0.1366 | 0.9533 |
| 0.2444 | 0.13 | 200 | 0.1144 | 0.9667 |
| 0.1778 | 0.16 | 250 | 0.1739 | 0.9533 |
| 0.1656 | 0.19 | 300 | 0.1073 | 0.96 |
| 0.1777 | 0.22 | 350 | 0.1001 | 0.9733 |
| 0.1915 | 0.26 | 400 | 0.1545 | 0.94 |
| 0.1983 | 0.29 | 450 | 0.1158 | 0.94 |
| 0.1858 | 0.32 | 500 | 0.0831 | 0.9667 |
| 0.2024 | 0.35 | 550 | 0.1088 | 0.96 |
| 0.1638 | 0.38 | 600 | 0.1047 | 0.9533 |
| 0.1333 | 0.42 | 650 | 0.1596 | 0.9467 |
| 0.245 | 0.45 | 700 | 0.1273 | 0.96 |
| 0.1786 | 0.48 | 750 | 0.1001 | 0.9667 |
| 0.1859 | 0.51 | 800 | 0.1125 | 0.9467 |
| 0.1764 | 0.54 | 850 | 0.0963 | 0.9533 |
| 0.2151 | 0.58 | 900 | 0.0904 | 0.9533 |
| 0.1152 | 0.61 | 950 | 0.1119 | 0.9667 |
| 0.1564 | 0.64 | 1000 | 0.0788 | 0.9667 |
| 0.1691 | 0.67 | 1050 | 0.0791 | 0.9733 |
| 0.1748 | 0.7 | 1100 | 0.0805 | 0.9667 |
| 0.1531 | 0.74 | 1150 | 0.0839 | 0.9667 |
| 0.1426 | 0.77 | 1200 | 0.0957 | 0.9467 |
| 0.1563 | 0.8 | 1250 | 0.1194 | 0.96 |
| 0.1666 | 0.83 | 1300 | 0.1029 | 0.96 |
| 0.1912 | 0.86 | 1350 | 0.0908 | 0.96 |
| 0.1822 | 0.9 | 1400 | 0.0788 | 0.9733 |
| 0.1339 | 0.93 | 1450 | 0.1134 | 0.96 |
| 0.1512 | 0.96 | 1500 | 0.0739 | 0.9733 |
| 0.1198 | 0.99 | 1550 | 0.0811 | 0.9733 |
| 0.1118 | 1.02 | 1600 | 0.0819 | 0.9733 |
| 0.1508 | 1.06 | 1650 | 0.1114 | 0.9667 |
| 0.0757 | 1.09 | 1700 | 0.1202 | 0.9667 |
| 0.0959 | 1.12 | 1750 | 0.1077 | 0.9667 |
| 0.0849 | 1.15 | 1800 | 0.1009 | 0.9733 |
| 0.0792 | 1.18 | 1850 | 0.0994 | 0.9733 |
| 0.0651 | 1.22 | 1900 | 0.1192 | 0.9733 |
| 0.0909 | 1.25 | 1950 | 0.1129 | 0.9667 |
| 0.0815 | 1.28 | 2000 | 0.1037 | 0.9733 |
| 0.0933 | 1.31 | 2050 | 0.0884 | 0.98 |
| 0.0998 | 1.34 | 2100 | 0.0860 | 0.9733 |
| 0.1099 | 1.38 | 2150 | 0.0793 | 0.98 |
| 0.0712 | 1.41 | 2200 | 0.0831 | 0.9867 |
| 0.1126 | 1.44 | 2250 | 0.0681 | 0.98 |
| 0.0731 | 1.47 | 2300 | 0.1019 | 0.9667 |
| 0.1021 | 1.5 | 2350 | 0.0659 | 0.9733 |
| 0.089 | 1.54 | 2400 | 0.0832 | 0.9733 |
| 0.0967 | 1.57 | 2450 | 0.0766 | 0.98 |
| 0.1015 | 1.6 | 2500 | 0.0803 | 0.9733 |
| 0.0956 | 1.63 | 2550 | 0.0781 | 0.9667 |
| 0.0896 | 1.66 | 2600 | 0.1033 | 0.9667 |
| 0.0925 | 1.7 | 2650 | 0.1036 | 0.9667 |
| 0.1326 | 1.73 | 2700 | 0.0892 | 0.9667 |
| 0.0884 | 1.76 | 2750 | 0.0913 | 0.9667 |
| 0.1061 | 1.79 | 2800 | 0.0821 | 0.9733 |
| 0.1031 | 1.82 | 2850 | 0.0935 | 0.9733 |
| 0.0873 | 1.86 | 2900 | 0.1058 | 0.9733 |
| 0.0957 | 1.89 | 2950 | 0.1025 | 0.9733 |
| 0.1149 | 1.92 | 3000 | 0.0675 | 0.98 |
| 0.0876 | 1.95 | 3050 | 0.1050 | 0.9667 |
| 0.0951 | 1.98 | 3100 | 0.0765 | 0.9733 |
| 0.0643 | 2.02 | 3150 | 0.0691 | 0.98 |
| 0.0551 | 2.05 | 3200 | 0.0765 | 0.98 |
| 0.0609 | 2.08 | 3250 | 0.0717 | 0.98 |
| 0.0268 | 2.11 | 3300 | 0.0780 | 0.98 |
| 0.0338 | 2.14 | 3350 | 0.0980 | 0.9733 |
| 0.0287 | 2.18 | 3400 | 0.1118 | 0.9733 |
| 0.0456 | 2.21 | 3450 | 0.1186 | 0.9733 |
| 0.0294 | 2.24 | 3500 | 0.1162 | 0.9733 |
| 0.0551 | 2.27 | 3550 | 0.1057 | 0.98 |
| 0.0445 | 2.3 | 3600 | 0.1042 | 0.9733 |
| 0.0233 | 2.34 | 3650 | 0.1164 | 0.9733 |
| 0.0695 | 2.37 | 3700 | 0.1189 | 0.9733 |
| 0.0524 | 2.4 | 3750 | 0.1198 | 0.9667 |
| 0.0457 | 2.43 | 3800 | 0.1479 | 0.9733 |
| 0.0289 | 2.46 | 3850 | 0.1214 | 0.9733 |
| 0.0432 | 2.5 | 3900 | 0.1740 | 0.9733 |
| 0.0425 | 2.53 | 3950 | 0.1167 | 0.9733 |
| 0.022 | 2.56 | 4000 | 0.1667 | 0.9733 |
| 0.063 | 2.59 | 4050 | 0.1392 | 0.9733 |
| 0.0388 | 2.62 | 4100 | 0.1376 | 0.9733 |
| 0.0759 | 2.66 | 4150 | 0.1400 | 0.9733 |
| 0.0526 | 2.69 | 4200 | 0.1232 | 0.9733 |
| 0.049 | 2.72 | 4250 | 0.1247 | 0.9667 |
| 0.0397 | 2.75 | 4300 | 0.1288 | 0.9667 |
| 0.0346 | 2.78 | 4350 | 0.1243 | 0.9733 |
| 0.0525 | 2.82 | 4400 | 0.1405 | 0.9733 |
| 0.0566 | 2.85 | 4450 | 0.1145 | 0.98 |
| 0.029 | 2.88 | 4500 | 0.1246 | 0.9733 |
| 0.043 | 2.91 | 4550 | 0.1308 | 0.9733 |
| 0.0613 | 2.94 | 4600 | 0.1125 | 0.9733 |
| 0.0704 | 2.98 | 4650 | 0.0872 | 0.98 |
| 0.0169 | 3.01 | 4700 | 0.1046 | 0.9733 |
| 0.0277 | 3.04 | 4750 | 0.1193 | 0.9733 |
| 0.0159 | 3.07 | 4800 | 0.1107 | 0.98 |
| 0.0013 | 3.1 | 4850 | 0.1342 | 0.9733 |
| 0.0063 | 3.13 | 4900 | 0.1425 | 0.9733 |
| 0.0131 | 3.17 | 4950 | 0.1261 | 0.98 |
| 0.0071 | 3.2 | 5000 | 0.1424 | 0.9733 |
| 0.0315 | 3.23 | 5050 | 0.1347 | 0.9733 |
| 0.0045 | 3.26 | 5100 | 0.1582 | 0.9733 |
| 0.0107 | 3.29 | 5150 | 0.1426 | 0.9733 |
| 0.014 | 3.33 | 5200 | 0.1298 | 0.98 |
| 0.0281 | 3.36 | 5250 | 0.1485 | 0.9733 |
| 0.0101 | 3.39 | 5300 | 0.1340 | 0.9733 |
| 0.0002 | 3.42 | 5350 | 0.1635 | 0.9733 |
| 0.0358 | 3.45 | 5400 | 0.1853 | 0.9733 |
| 0.0107 | 3.49 | 5450 | 0.1812 | 0.96 |
| 0.0157 | 3.52 | 5500 | 0.1828 | 0.9667 |
| 0.0336 | 3.55 | 5550 | 0.1839 | 0.9733 |
| 0.0095 | 3.58 | 5600 | 0.2067 | 0.9667 |
| 0.0216 | 3.61 | 5650 | 0.2004 | 0.9667 |
| 0.0136 | 3.65 | 5700 | 0.1892 | 0.9667 |
| 0.0041 | 3.68 | 5750 | 0.2082 | 0.9667 |
| 0.0411 | 3.71 | 5800 | 0.1835 | 0.9667 |
| 0.0233 | 3.74 | 5850 | 0.1713 | 0.9733 |
| 0.0078 | 3.77 | 5900 | 0.2228 | 0.9667 |
| 0.01 | 3.81 | 5950 | 0.2097 | 0.9667 |
| 0.0063 | 3.84 | 6000 | 0.2105 | 0.9667 |
| 0.0132 | 3.87 | 6050 | 0.2070 | 0.9667 |
| 0.0134 | 3.9 | 6100 | 0.1995 | 0.9667 |
| 0.0278 | 3.93 | 6150 | 0.1663 | 0.9733 |
| 0.0211 | 3.97 | 6200 | 0.1534 | 0.9667 |
| 0.0237 | 4.0 | 6250 | 0.1954 | 0.9667 |
| 0.0201 | 4.03 | 6300 | 0.1684 | 0.96 |
| 0.0013 | 4.06 | 6350 | 0.2022 | 0.9667 |
| 0.0002 | 4.09 | 6400 | 0.1783 | 0.9667 |
| 0.011 | 4.13 | 6450 | 0.2207 | 0.9667 |
| 0.0117 | 4.16 | 6500 | 0.1916 | 0.9667 |
| 0.0083 | 4.19 | 6550 | 0.1900 | 0.96 |
| 0.007 | 4.22 | 6600 | 0.1782 | 0.9733 |
| 0.0074 | 4.25 | 6650 | 0.2034 | 0.9667 |
| 0.0004 | 4.29 | 6700 | 0.1852 | 0.9667 |
| 0.0002 | 4.32 | 6750 | 0.2156 | 0.9667 |
| 0.0069 | 4.35 | 6800 | 0.2257 | 0.9667 |
| 0.0056 | 4.38 | 6850 | 0.2214 | 0.9667 |
| 0.016 | 4.41 | 6900 | 0.2035 | 0.9667 |
| 0.0055 | 4.45 | 6950 | 0.1800 | 0.9733 |
| 0.0 | 4.48 | 7000 | 0.1819 | 0.9733 |
| 0.0001 | 4.51 | 7050 | 0.1867 | 0.9733 |
| 0.0 | 4.54 | 7100 | 0.1880 | 0.9733 |
| 0.0006 | 4.57 | 7150 | 0.2108 | 0.9667 |
| 0.0024 | 4.61 | 7200 | 0.2087 | 0.9667 |
| 0.0003 | 4.64 | 7250 | 0.1992 | 0.9733 |
| 0.0 | 4.67 | 7300 | 0.2050 | 0.9667 |
| 0.0037 | 4.7 | 7350 | 0.1899 | 0.9733 |
| 0.0109 | 4.73 | 7400 | 0.1832 | 0.9733 |
| 0.0108 | 4.77 | 7450 | 0.1861 | 0.9733 |
| 0.0159 | 4.8 | 7500 | 0.1795 | 0.9733 |
| 0.004 | 4.83 | 7550 | 0.1767 | 0.9733 |
| 0.0012 | 4.86 | 7600 | 0.1888 | 0.9733 |
| 0.0076 | 4.89 | 7650 | 0.1894 | 0.9733 |
| 0.0113 | 4.93 | 7700 | 0.1870 | 0.9733 |
| 0.0007 | 4.96 | 7750 | 0.1869 | 0.9733 |
| 0.0099 | 4.99 | 7800 | 0.1867 | 0.9733 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.2
|
joweyel/sd-class-butterflies-32
|
joweyel
| 2022-11-28T23:54:45Z | 37 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T23:51:15Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of (more or less) cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("datboi223/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
Serhio/sd-fine-tune-v2
|
Serhio
| 2022-11-28T23:43:18Z | 34 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-28T23:41:46Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### sd-fine-tune-v2 on Stable Diffusion via Dreambooth
#### model by Serhio
This is the Stable Diffusion model fine-tuned on the sd-fine-tune-v2 concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **Bashkov Sergey**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
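As a minimal local-inference sketch (a CUDA device is assumed; the prompt and output filename are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Serhio/sd-fine-tune-v2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "Bashkov Sergey" is the instance prompt that triggers the fine-tuned concept.
image = pipe("a portrait of Bashkov Sergey, highly detailed").images[0]
image.save("bashkov_sergey.png")
```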
|
pig4431/TweetEval_BERT_5E
|
pig4431
| 2022-11-28T23:38:03Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T23:31:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: TweetEval_BERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: train
args: sentiment
metrics:
- name: Accuracy
type: accuracy
value: 0.9266666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TweetEval_BERT_5E
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5419
- Accuracy: 0.9267
## Model description
More information needed
## Intended uses & limitations
More information needed
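In the meantime, a minimal sketch of scoring a tweet with this checkpoint (the tweet_eval sentiment task has three classes; label names are read from the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("pig4431/TweetEval_BERT_5E")
model = AutoModelForSequenceClassification.from_pretrained("pig4431/TweetEval_BERT_5E")

inputs = tokenizer("Loving the new update, everything feels faster!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

# Print the probability assigned to each class label.
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```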
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6264 | 0.04 | 50 | 0.5266 | 0.74 |
| 0.5054 | 0.08 | 100 | 0.5959 | 0.6333 |
| 0.4732 | 0.12 | 150 | 0.3524 | 0.86 |
| 0.3916 | 0.16 | 200 | 0.3195 | 0.8667 |
| 0.3477 | 0.2 | 250 | 0.2878 | 0.8867 |
| 0.3116 | 0.24 | 300 | 0.2903 | 0.92 |
| 0.3039 | 0.28 | 350 | 0.2488 | 0.8933 |
| 0.2633 | 0.32 | 400 | 0.2530 | 0.92 |
| 0.2667 | 0.37 | 450 | 0.2125 | 0.9267 |
| 0.2604 | 0.41 | 500 | 0.2628 | 0.8867 |
| 0.278 | 0.45 | 550 | 0.2322 | 0.8867 |
| 0.2625 | 0.49 | 600 | 0.1903 | 0.92 |
| 0.2808 | 0.53 | 650 | 0.2400 | 0.8933 |
| 0.2396 | 0.57 | 700 | 0.2184 | 0.9067 |
| 0.2571 | 0.61 | 750 | 0.1906 | 0.9133 |
| 0.2676 | 0.65 | 800 | 0.2467 | 0.9067 |
| 0.2288 | 0.69 | 850 | 0.2038 | 0.9133 |
| 0.2959 | 0.73 | 900 | 0.1941 | 0.9 |
| 0.2619 | 0.77 | 950 | 0.2100 | 0.9333 |
| 0.2504 | 0.81 | 1000 | 0.1523 | 0.9333 |
| 0.2338 | 0.85 | 1050 | 0.1429 | 0.94 |
| 0.2529 | 0.89 | 1100 | 0.1269 | 0.94 |
| 0.2238 | 0.93 | 1150 | 0.1722 | 0.9333 |
| 0.2295 | 0.97 | 1200 | 0.1874 | 0.94 |
| 0.2089 | 1.01 | 1250 | 0.2214 | 0.9067 |
| 0.1406 | 1.06 | 1300 | 0.3410 | 0.9133 |
| 0.1587 | 1.1 | 1350 | 0.3330 | 0.9133 |
| 0.1732 | 1.14 | 1400 | 0.2716 | 0.9133 |
| 0.195 | 1.18 | 1450 | 0.3726 | 0.92 |
| 0.1777 | 1.22 | 1500 | 0.2430 | 0.9267 |
| 0.1433 | 1.26 | 1550 | 0.3011 | 0.9267 |
| 0.1333 | 1.3 | 1600 | 0.2489 | 0.9333 |
| 0.1516 | 1.34 | 1650 | 0.3340 | 0.9267 |
| 0.1774 | 1.38 | 1700 | 0.2497 | 0.8933 |
| 0.1608 | 1.42 | 1750 | 0.3234 | 0.9 |
| 0.1534 | 1.46 | 1800 | 0.3383 | 0.9133 |
| 0.1287 | 1.5 | 1850 | 0.3134 | 0.9133 |
| 0.1422 | 1.54 | 1900 | 0.3330 | 0.9 |
| 0.1578 | 1.58 | 1950 | 0.3281 | 0.9133 |
| 0.1786 | 1.62 | 2000 | 0.2939 | 0.9267 |
| 0.2019 | 1.66 | 2050 | 0.3535 | 0.9 |
| 0.1995 | 1.7 | 2100 | 0.3032 | 0.9067 |
| 0.159 | 1.75 | 2150 | 0.2598 | 0.9267 |
| 0.1493 | 1.79 | 2200 | 0.2391 | 0.9267 |
| 0.1748 | 1.83 | 2250 | 0.2258 | 0.92 |
| 0.1783 | 1.87 | 2300 | 0.2749 | 0.9133 |
| 0.1619 | 1.91 | 2350 | 0.2699 | 0.92 |
| 0.1378 | 1.95 | 2400 | 0.2776 | 0.9067 |
| 0.1529 | 1.99 | 2450 | 0.2235 | 0.9333 |
| 0.1071 | 2.03 | 2500 | 0.2841 | 0.9267 |
| 0.0812 | 2.07 | 2550 | 0.3178 | 0.9267 |
| 0.0464 | 2.11 | 2600 | 0.3567 | 0.92 |
| 0.1108 | 2.15 | 2650 | 0.2723 | 0.92 |
| 0.0845 | 2.19 | 2700 | 0.2774 | 0.9267 |
| 0.0795 | 2.23 | 2750 | 0.3027 | 0.9267 |
| 0.0403 | 2.27 | 2800 | 0.3566 | 0.9267 |
| 0.0664 | 2.31 | 2850 | 0.4015 | 0.92 |
| 0.0659 | 2.35 | 2900 | 0.4298 | 0.9067 |
| 0.1059 | 2.39 | 2950 | 0.4028 | 0.92 |
| 0.105 | 2.44 | 3000 | 0.3701 | 0.92 |
| 0.0808 | 2.48 | 3050 | 0.3206 | 0.9267 |
| 0.0811 | 2.52 | 3100 | 0.3644 | 0.9133 |
| 0.0458 | 2.56 | 3150 | 0.3781 | 0.9267 |
| 0.0764 | 2.6 | 3200 | 0.3749 | 0.9267 |
| 0.0567 | 2.64 | 3250 | 0.3995 | 0.92 |
| 0.0971 | 2.68 | 3300 | 0.3455 | 0.92 |
| 0.0579 | 2.72 | 3350 | 0.4508 | 0.92 |
| 0.0853 | 2.76 | 3400 | 0.4350 | 0.92 |
| 0.0577 | 2.8 | 3450 | 0.3804 | 0.9333 |
| 0.0732 | 2.84 | 3500 | 0.4387 | 0.92 |
| 0.0874 | 2.88 | 3550 | 0.3885 | 0.9333 |
| 0.1031 | 2.92 | 3600 | 0.3937 | 0.92 |
| 0.0335 | 2.96 | 3650 | 0.4963 | 0.8933 |
| 0.0913 | 3.0 | 3700 | 0.3827 | 0.9333 |
| 0.047 | 3.04 | 3750 | 0.4136 | 0.92 |
| 0.0531 | 3.08 | 3800 | 0.4362 | 0.92 |
| 0.0265 | 3.12 | 3850 | 0.4857 | 0.92 |
| 0.038 | 3.17 | 3900 | 0.4425 | 0.92 |
| 0.0294 | 3.21 | 3950 | 0.4347 | 0.92 |
| 0.0367 | 3.25 | 4000 | 0.4291 | 0.9333 |
| 0.0102 | 3.29 | 4050 | 0.5178 | 0.9267 |
| 0.0311 | 3.33 | 4100 | 0.4784 | 0.9267 |
| 0.0274 | 3.37 | 4150 | 0.5421 | 0.9267 |
| 0.0275 | 3.41 | 4200 | 0.5194 | 0.92 |
| 0.0795 | 3.45 | 4250 | 0.4788 | 0.92 |
| 0.0413 | 3.49 | 4300 | 0.4393 | 0.9267 |
| 0.0373 | 3.53 | 4350 | 0.4965 | 0.92 |
| 0.0303 | 3.57 | 4400 | 0.4284 | 0.9267 |
| 0.0248 | 3.61 | 4450 | 0.4476 | 0.9267 |
| 0.0557 | 3.65 | 4500 | 0.4690 | 0.92 |
| 0.0358 | 3.69 | 4550 | 0.4774 | 0.9133 |
| 0.0194 | 3.73 | 4600 | 0.4755 | 0.92 |
| 0.0473 | 3.77 | 4650 | 0.4637 | 0.92 |
| 0.0133 | 3.81 | 4700 | 0.4868 | 0.92 |
| 0.0204 | 3.86 | 4750 | 0.4886 | 0.9267 |
| 0.0338 | 3.9 | 4800 | 0.5101 | 0.9267 |
| 0.0424 | 3.94 | 4850 | 0.4812 | 0.9267 |
| 0.0237 | 3.98 | 4900 | 0.4837 | 0.9267 |
| 0.0372 | 4.02 | 4950 | 0.5000 | 0.9267 |
| 0.0254 | 4.06 | 5000 | 0.5210 | 0.92 |
| 0.024 | 4.1 | 5050 | 0.5272 | 0.92 |
| 0.0117 | 4.14 | 5100 | 0.5447 | 0.92 |
| 0.018 | 4.18 | 5150 | 0.5353 | 0.92 |
| 0.0097 | 4.22 | 5200 | 0.5415 | 0.9267 |
| 0.0151 | 4.26 | 5250 | 0.5447 | 0.9267 |
| 0.0118 | 4.3 | 5300 | 0.5285 | 0.9267 |
| 0.0004 | 4.34 | 5350 | 0.5399 | 0.9267 |
| 0.0102 | 4.38 | 5400 | 0.5552 | 0.9267 |
| 0.0012 | 4.42 | 5450 | 0.5689 | 0.92 |
| 0.02 | 4.46 | 5500 | 0.5619 | 0.9267 |
| 0.0056 | 4.5 | 5550 | 0.5784 | 0.92 |
| 0.0271 | 4.55 | 5600 | 0.5766 | 0.92 |
| 0.0191 | 4.59 | 5650 | 0.5662 | 0.92 |
| 0.0311 | 4.63 | 5700 | 0.5514 | 0.9267 |
| 0.0167 | 4.67 | 5750 | 0.5510 | 0.9267 |
| 0.0293 | 4.71 | 5800 | 0.5571 | 0.9267 |
| 0.0304 | 4.75 | 5850 | 0.5494 | 0.92 |
| 0.0161 | 4.79 | 5900 | 0.5469 | 0.9267 |
| 0.0017 | 4.83 | 5950 | 0.5468 | 0.9267 |
| 0.0176 | 4.87 | 6000 | 0.5426 | 0.9267 |
| 0.0094 | 4.91 | 6050 | 0.5402 | 0.9267 |
| 0.0041 | 4.95 | 6100 | 0.5416 | 0.9267 |
| 0.0281 | 4.99 | 6150 | 0.5419 | 0.9267 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.2
|
jiping/whisper-small-jsun2-hi
|
jiping
| 2022-11-28T22:38:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-24T21:04:14Z |
---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Jsun Hi - Jiping
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 31.761618555828324
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Jsun Hi - Jiping
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2775
- Wer: 31.7616
## Model description
More information needed
## Intended uses & limitations
More information needed
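In the meantime, a minimal transcription sketch with the `transformers` ASR pipeline (the audio path is illustrative):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jiping/whisper-small-jsun2-hi")
# "sample_hi.wav" is a placeholder path to a Hindi speech recording.
print(asr("sample_hi.wav")["text"])
```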
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2092 | 0.61 | 1000 | 0.3201 | 38.7666 |
| 0.1106 | 1.22 | 2000 | 0.2810 | 34.1023 |
| 0.1049 | 1.83 | 3000 | 0.2660 | 32.4812 |
| 0.052 | 2.45 | 4000 | 0.2775 | 31.7616 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Hamaru/MJV4_Hypernetwork
|
Hamaru
| 2022-11-28T22:33:40Z | 0 | 12 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-28T22:03:24Z |
---
license: creativeml-openrail-m
---
Hypernetwork trained on some Midjourney V4 portraits.
Euler a and DPM++ samplers work best. CFG scale at 7 and low step count (<50) work well.
Prompts should include words like "portrait", "octane render" and "highly detailed" for best results.
Avoid using face restoration such as GFPGAN or CodeFormer if possible.
|
ali97/sd-class-butterflies-32
|
ali97
| 2022-11-28T22:31:50Z | 37 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T22:31:00Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("ali97/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
kanixwang/my-awesome-setfit-model
|
kanixwang
| 2022-11-28T22:19:56Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-28T22:02:13Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
alryan1478/gpt-neo-125M-DOD-LOW
|
alryan1478
| 2022-11-28T22:19:47Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-28T21:59:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125M-DOD-LOW
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125M-DOD-LOW
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0427
## Model description
More information needed
## Intended uses & limitations
More information needed
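In the meantime, a minimal generation sketch (the prompt and sampling settings are illustrative, not values tied to the fine-tuning data):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="alryan1478/gpt-neo-125M-DOD-LOW")
output = generator("The department issued new guidance on", max_new_tokens=40, do_sample=True)
print(output[0]["generated_text"])
```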
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 261 | 6.4768 |
| 6.8863 | 2.0 | 522 | 6.1056 |
| 6.8863 | 3.0 | 783 | 6.0427 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
futuredatascience/action-classifier-v1
|
futuredatascience
| 2022-11-28T22:17:56Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-28T22:17:44Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 105 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1050,
"warmup_steps": 105,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
pig4431/TUF_ALBERT_5E
|
pig4431
| 2022-11-28T21:34:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T21:32:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: TUF_ALBERT_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TUF_ALBERT_5E
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2389
- Accuracy: 0.9533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5099 | 0.1 | 50 | 0.3861 | 0.8533 |
| 0.2985 | 0.2 | 100 | 0.2961 | 0.8933 |
| 0.2972 | 0.3 | 150 | 0.2335 | 0.9333 |
| 0.2835 | 0.4 | 200 | 0.1872 | 0.94 |
| 0.26 | 0.5 | 250 | 0.4147 | 0.9133 |
| 0.2986 | 0.59 | 300 | 0.2080 | 0.9267 |
| 0.2554 | 0.69 | 350 | 0.3984 | 0.9133 |
| 0.2306 | 0.79 | 400 | 0.2136 | 0.9333 |
| 0.2218 | 0.89 | 450 | 0.4455 | 0.8867 |
| 0.2113 | 0.99 | 500 | 0.2205 | 0.94 |
| 0.2541 | 1.09 | 550 | 0.1705 | 0.9333 |
| 0.1947 | 1.19 | 600 | 0.3264 | 0.8933 |
| 0.2409 | 1.29 | 650 | 0.2084 | 0.92 |
| 0.1968 | 1.39 | 700 | 0.2550 | 0.9267 |
| 0.172 | 1.49 | 750 | 0.2238 | 0.9467 |
| 0.1478 | 1.58 | 800 | 0.2501 | 0.9533 |
| 0.2199 | 1.68 | 850 | 0.2618 | 0.9133 |
| 0.1792 | 1.78 | 900 | 0.2109 | 0.9267 |
| 0.1831 | 1.88 | 950 | 0.2641 | 0.92 |
| 0.1534 | 1.98 | 1000 | 0.1924 | 0.94 |
| 0.1208 | 2.08 | 1050 | 0.2990 | 0.9333 |
| 0.1118 | 2.18 | 1100 | 0.4952 | 0.9 |
| 0.158 | 2.28 | 1150 | 0.1706 | 0.9533 |
| 0.1163 | 2.38 | 1200 | 0.1238 | 0.9733 |
| 0.1738 | 2.48 | 1250 | 0.1989 | 0.9467 |
| 0.1305 | 2.57 | 1300 | 0.4354 | 0.9067 |
| 0.1668 | 2.67 | 1350 | 0.1276 | 0.9667 |
| 0.1195 | 2.77 | 1400 | 0.2170 | 0.9533 |
| 0.1057 | 2.87 | 1450 | 0.2882 | 0.9333 |
| 0.1172 | 2.97 | 1500 | 0.1435 | 0.9667 |
| 0.0893 | 3.07 | 1550 | 0.1754 | 0.96 |
| 0.0582 | 3.17 | 1600 | 0.1858 | 0.96 |
| 0.0887 | 3.27 | 1650 | 0.4954 | 0.92 |
| 0.1166 | 3.37 | 1700 | 0.2356 | 0.9467 |
| 0.0518 | 3.47 | 1750 | 0.1910 | 0.96 |
| 0.0741 | 3.56 | 1800 | 0.1328 | 0.9733 |
| 0.072 | 3.66 | 1850 | 0.2769 | 0.9467 |
| 0.0534 | 3.76 | 1900 | 0.3501 | 0.94 |
| 0.0776 | 3.86 | 1950 | 0.3171 | 0.94 |
| 0.0537 | 3.96 | 2000 | 0.2138 | 0.9533 |
| 0.0683 | 4.06 | 2050 | 0.2934 | 0.94 |
| 0.015 | 4.16 | 2100 | 0.2233 | 0.9533 |
| 0.0236 | 4.26 | 2150 | 0.2673 | 0.9533 |
| 0.0357 | 4.36 | 2200 | 0.2279 | 0.96 |
| 0.0298 | 4.46 | 2250 | 0.3017 | 0.9467 |
| 0.0357 | 4.55 | 2300 | 0.2910 | 0.9467 |
| 0.0208 | 4.65 | 2350 | 0.2498 | 0.9533 |
| 0.0345 | 4.75 | 2400 | 0.2259 | 0.9667 |
| 0.0174 | 4.85 | 2450 | 0.2274 | 0.9667 |
| 0.0393 | 4.95 | 2500 | 0.2389 | 0.9533 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Inayat/Fine_tune_whisper_small
|
Inayat
| 2022-11-28T21:14:32Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-14T19:18:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Fine_tune_whisper_small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine_tune_whisper_small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8238
- Wer: 42.9362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 900
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2994 | 3.92 | 200 | 0.6607 | 44.0797 |
| 0.0201 | 7.84 | 400 | 0.7371 | 42.6042 |
| 0.002 | 11.76 | 600 | 0.8027 | 42.5304 |
| 0.0011 | 15.69 | 800 | 0.8238 | 42.9362 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
pig4431/TUF_BERT_5E
|
pig4431
| 2022-11-28T21:13:00Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T21:06:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: TUF_BERT_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TUF_BERT_5E
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3251
- Accuracy: 0.9467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4078 | 0.1 | 50 | 0.2430 | 0.92 |
| 0.2488 | 0.2 | 100 | 0.1465 | 0.94 |
| 0.1966 | 0.3 | 150 | 0.1284 | 0.96 |
| 0.2096 | 0.4 | 200 | 0.2879 | 0.9067 |
| 0.2015 | 0.5 | 250 | 0.1629 | 0.9467 |
| 0.1692 | 0.59 | 300 | 0.2165 | 0.9133 |
| 0.1794 | 0.69 | 350 | 0.1535 | 0.9533 |
| 0.1975 | 0.79 | 400 | 0.1429 | 0.9333 |
| 0.1394 | 0.89 | 450 | 0.2384 | 0.92 |
| 0.191 | 0.99 | 500 | 0.2198 | 0.94 |
| 0.0907 | 1.09 | 550 | 0.1270 | 0.9467 |
| 0.073 | 1.19 | 600 | 0.2016 | 0.94 |
| 0.1594 | 1.29 | 650 | 0.2078 | 0.9267 |
| 0.087 | 1.39 | 700 | 0.3312 | 0.9333 |
| 0.0961 | 1.49 | 750 | 0.3704 | 0.92 |
| 0.1225 | 1.58 | 800 | 0.1686 | 0.9467 |
| 0.0969 | 1.68 | 850 | 0.1525 | 0.9333 |
| 0.0942 | 1.78 | 900 | 0.1924 | 0.94 |
| 0.0681 | 1.88 | 950 | 0.1825 | 0.9467 |
| 0.1295 | 1.98 | 1000 | 0.1360 | 0.9333 |
| 0.0626 | 2.08 | 1050 | 0.2014 | 0.94 |
| 0.0372 | 2.18 | 1100 | 0.2030 | 0.9467 |
| 0.0077 | 2.28 | 1150 | 0.2615 | 0.9467 |
| 0.0393 | 2.38 | 1200 | 0.4256 | 0.9267 |
| 0.0492 | 2.48 | 1250 | 0.3057 | 0.94 |
| 0.0184 | 2.57 | 1300 | 0.1308 | 0.9733 |
| 0.0209 | 2.67 | 1350 | 0.2848 | 0.9467 |
| 0.0328 | 2.77 | 1400 | 0.1862 | 0.96 |
| 0.0333 | 2.87 | 1450 | 0.2347 | 0.96 |
| 0.0527 | 2.97 | 1500 | 0.3855 | 0.9333 |
| 0.0685 | 3.07 | 1550 | 0.3174 | 0.94 |
| 0.0217 | 3.17 | 1600 | 0.2320 | 0.9533 |
| 0.0036 | 3.27 | 1650 | 0.3219 | 0.9333 |
| 0.0015 | 3.37 | 1700 | 0.1649 | 0.9733 |
| 0.0177 | 3.47 | 1750 | 0.3785 | 0.94 |
| 0.0142 | 3.56 | 1800 | 0.1420 | 0.9733 |
| 0.0319 | 3.66 | 1850 | 0.4057 | 0.9333 |
| 0.0254 | 3.76 | 1900 | 0.1824 | 0.96 |
| 0.0092 | 3.86 | 1950 | 0.2400 | 0.9533 |
| 0.0306 | 3.96 | 2000 | 0.2238 | 0.96 |
| 0.0118 | 4.06 | 2050 | 0.2623 | 0.9533 |
| 0.0097 | 4.16 | 2100 | 0.3642 | 0.9467 |
| 0.0132 | 4.26 | 2150 | 0.3235 | 0.9467 |
| 0.0155 | 4.36 | 2200 | 0.3535 | 0.9467 |
| 0.0043 | 4.46 | 2250 | 0.3236 | 0.9467 |
| 0.0004 | 4.55 | 2300 | 0.2984 | 0.9467 |
| 0.009 | 4.65 | 2350 | 0.2941 | 0.9467 |
| 0.0068 | 4.75 | 2400 | 0.2936 | 0.9467 |
| 0.0102 | 4.85 | 2450 | 0.3138 | 0.9467 |
| 0.0015 | 4.95 | 2500 | 0.3251 | 0.9467 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
pig4431/TweetEval_DistilBERT_5E
|
pig4431
| 2022-11-28T21:09:36Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T21:03:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: TweetEval_DistilBERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: train
args: sentiment
metrics:
- name: Accuracy
type: accuracy
value: 0.9133333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TweetEval_DistilBERT_5E
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4043
- Accuracy: 0.9133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5747 | 0.04 | 50 | 0.4843 | 0.7333 |
| 0.4336 | 0.08 | 100 | 0.2888 | 0.8667 |
| 0.3437 | 0.12 | 150 | 0.2895 | 0.8667 |
| 0.3375 | 0.16 | 200 | 0.2864 | 0.8733 |
| 0.3072 | 0.2 | 250 | 0.2577 | 0.8867 |
| 0.3019 | 0.24 | 300 | 0.2574 | 0.8933 |
| 0.2662 | 0.28 | 350 | 0.2621 | 0.8867 |
| 0.283 | 0.32 | 400 | 0.2340 | 0.92 |
| 0.2949 | 0.37 | 450 | 0.2482 | 0.8933 |
| 0.3066 | 0.41 | 500 | 0.2537 | 0.9 |
| 0.2457 | 0.45 | 550 | 0.2473 | 0.9 |
| 0.295 | 0.49 | 600 | 0.2177 | 0.9133 |
| 0.2862 | 0.53 | 650 | 0.2215 | 0.9133 |
| 0.2603 | 0.57 | 700 | 0.2272 | 0.9133 |
| 0.2976 | 0.61 | 750 | 0.2298 | 0.9067 |
| 0.2823 | 0.65 | 800 | 0.2451 | 0.8933 |
| 0.2583 | 0.69 | 850 | 0.2645 | 0.8933 |
| 0.2694 | 0.73 | 900 | 0.2352 | 0.9 |
| 0.2433 | 0.77 | 950 | 0.2322 | 0.9133 |
| 0.2598 | 0.81 | 1000 | 0.2300 | 0.9 |
| 0.2701 | 0.85 | 1050 | 0.2162 | 0.9 |
| 0.2227 | 0.89 | 1100 | 0.2135 | 0.8933 |
| 0.2045 | 0.93 | 1150 | 0.2233 | 0.9133 |
| 0.2821 | 0.97 | 1200 | 0.2194 | 0.9 |
| 0.2342 | 1.01 | 1250 | 0.2488 | 0.88 |
| 0.2028 | 1.06 | 1300 | 0.2451 | 0.8867 |
| 0.1509 | 1.1 | 1350 | 0.3174 | 0.88 |
| 0.1888 | 1.14 | 1400 | 0.2537 | 0.9133 |
| 0.1825 | 1.18 | 1450 | 0.2559 | 0.9067 |
| 0.1721 | 1.22 | 1500 | 0.2511 | 0.92 |
| 0.2137 | 1.26 | 1550 | 0.2963 | 0.9133 |
| 0.2153 | 1.3 | 1600 | 0.2210 | 0.92 |
| 0.1989 | 1.34 | 1650 | 0.2231 | 0.9133 |
| 0.2155 | 1.38 | 1700 | 0.1991 | 0.9133 |
| 0.1912 | 1.42 | 1750 | 0.2146 | 0.92 |
| 0.1623 | 1.46 | 1800 | 0.2721 | 0.9 |
| 0.2236 | 1.5 | 1850 | 0.2301 | 0.9267 |
| 0.1907 | 1.54 | 1900 | 0.1988 | 0.92 |
| 0.1286 | 1.58 | 1950 | 0.2326 | 0.9 |
| 0.2147 | 1.62 | 2000 | 0.2432 | 0.9267 |
| 0.2018 | 1.66 | 2050 | 0.2162 | 0.9067 |
| 0.2073 | 1.7 | 2100 | 0.2153 | 0.9133 |
| 0.1498 | 1.75 | 2150 | 0.2335 | 0.92 |
| 0.1812 | 1.79 | 2200 | 0.2275 | 0.9267 |
| 0.1482 | 1.83 | 2250 | 0.2734 | 0.9 |
| 0.2233 | 1.87 | 2300 | 0.2454 | 0.9 |
| 0.1673 | 1.91 | 2350 | 0.2394 | 0.92 |
| 0.1555 | 1.95 | 2400 | 0.2725 | 0.92 |
| 0.2082 | 1.99 | 2450 | 0.2684 | 0.9133 |
| 0.1545 | 2.03 | 2500 | 0.3049 | 0.9067 |
| 0.1384 | 2.07 | 2550 | 0.2960 | 0.9133 |
| 0.1201 | 2.11 | 2600 | 0.3259 | 0.9 |
| 0.1348 | 2.15 | 2650 | 0.3091 | 0.9133 |
| 0.1046 | 2.19 | 2700 | 0.2916 | 0.9267 |
| 0.1506 | 2.23 | 2750 | 0.2910 | 0.9133 |
| 0.1481 | 2.27 | 2800 | 0.2855 | 0.9067 |
| 0.1318 | 2.31 | 2850 | 0.3075 | 0.9 |
| 0.1204 | 2.35 | 2900 | 0.3169 | 0.8933 |
| 0.1669 | 2.39 | 2950 | 0.3050 | 0.9067 |
| 0.1725 | 2.44 | 3000 | 0.2970 | 0.9133 |
| 0.1305 | 2.48 | 3050 | 0.3065 | 0.9 |
| 0.1508 | 2.52 | 3100 | 0.3079 | 0.9133 |
| 0.184 | 2.56 | 3150 | 0.3482 | 0.9067 |
| 0.1263 | 2.6 | 3200 | 0.3310 | 0.9 |
| 0.1282 | 2.64 | 3250 | 0.3520 | 0.8933 |
| 0.1217 | 2.68 | 3300 | 0.3158 | 0.9067 |
| 0.1203 | 2.72 | 3350 | 0.3351 | 0.92 |
| 0.1068 | 2.76 | 3400 | 0.3239 | 0.92 |
| 0.1517 | 2.8 | 3450 | 0.3247 | 0.92 |
| 0.113 | 2.84 | 3500 | 0.3269 | 0.9133 |
| 0.1276 | 2.88 | 3550 | 0.3162 | 0.92 |
| 0.1548 | 2.92 | 3600 | 0.3196 | 0.9133 |
| 0.1305 | 2.96 | 3650 | 0.3163 | 0.92 |
| 0.149 | 3.0 | 3700 | 0.3013 | 0.92 |
| 0.0816 | 3.04 | 3750 | 0.3097 | 0.9267 |
| 0.0884 | 3.08 | 3800 | 0.3028 | 0.92 |
| 0.0727 | 3.12 | 3850 | 0.3487 | 0.9133 |
| 0.1018 | 3.17 | 3900 | 0.3447 | 0.92 |
| 0.1266 | 3.21 | 3950 | 0.3589 | 0.9133 |
| 0.1216 | 3.25 | 4000 | 0.3464 | 0.92 |
| 0.091 | 3.29 | 4050 | 0.3454 | 0.92 |
| 0.0829 | 3.33 | 4100 | 0.3450 | 0.92 |
| 0.1084 | 3.37 | 4150 | 0.3670 | 0.92 |
| 0.0754 | 3.41 | 4200 | 0.3661 | 0.92 |
| 0.094 | 3.45 | 4250 | 0.3588 | 0.9067 |
| 0.0641 | 3.49 | 4300 | 0.3936 | 0.92 |
| 0.1138 | 3.53 | 4350 | 0.3616 | 0.92 |
| 0.0744 | 3.57 | 4400 | 0.3562 | 0.92 |
| 0.0697 | 3.61 | 4450 | 0.3532 | 0.9267 |
| 0.1083 | 3.65 | 4500 | 0.3451 | 0.9267 |
| 0.0701 | 3.69 | 4550 | 0.3307 | 0.92 |
| 0.0849 | 3.73 | 4600 | 0.3797 | 0.92 |
| 0.09 | 3.77 | 4650 | 0.3746 | 0.9267 |
| 0.0799 | 3.81 | 4700 | 0.3799 | 0.92 |
| 0.0589 | 3.86 | 4750 | 0.3805 | 0.92 |
| 0.0578 | 3.9 | 4800 | 0.3910 | 0.9133 |
| 0.0816 | 3.94 | 4850 | 0.3856 | 0.9133 |
| 0.1366 | 3.98 | 4900 | 0.3707 | 0.92 |
| 0.0846 | 4.02 | 4950 | 0.3802 | 0.92 |
| 0.0401 | 4.06 | 5000 | 0.3842 | 0.92 |
| 0.0851 | 4.1 | 5050 | 0.3773 | 0.9267 |
| 0.0514 | 4.14 | 5100 | 0.3922 | 0.9133 |
| 0.0909 | 4.18 | 5150 | 0.3893 | 0.92 |
| 0.0764 | 4.22 | 5200 | 0.3818 | 0.9133 |
| 0.1208 | 4.26 | 5250 | 0.4096 | 0.92 |
| 0.0689 | 4.3 | 5300 | 0.3940 | 0.9133 |
| 0.0524 | 4.34 | 5350 | 0.4020 | 0.9133 |
| 0.0733 | 4.38 | 5400 | 0.4002 | 0.9133 |
| 0.0699 | 4.42 | 5450 | 0.4013 | 0.9133 |
| 0.0712 | 4.46 | 5500 | 0.4037 | 0.9067 |
| 0.0557 | 4.5 | 5550 | 0.4121 | 0.92 |
| 0.0679 | 4.55 | 5600 | 0.4067 | 0.9133 |
| 0.0651 | 4.59 | 5650 | 0.4194 | 0.9133 |
| 0.0607 | 4.63 | 5700 | 0.4007 | 0.9133 |
| 0.0676 | 4.67 | 5750 | 0.4013 | 0.9133 |
| 0.0303 | 4.71 | 5800 | 0.3984 | 0.9133 |
| 0.0674 | 4.75 | 5850 | 0.4037 | 0.9133 |
| 0.0842 | 4.79 | 5900 | 0.4072 | 0.9133 |
| 0.0516 | 4.83 | 5950 | 0.4096 | 0.9133 |
| 0.0556 | 4.87 | 6000 | 0.4111 | 0.92 |
| 0.0277 | 4.91 | 6050 | 0.4079 | 0.9133 |
| 0.0629 | 4.95 | 6100 | 0.4053 | 0.9133 |
| 0.0426 | 4.99 | 6150 | 0.4043 | 0.9133 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.3.2
- Tokenizers 0.13.2
|
pig4431/TUF_DistilBERT_5E
|
pig4431
| 2022-11-28T20:13:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T20:05:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: TUF_DistilBERT_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TUF_DistilBERT_5E
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1832
- Accuracy: 0.96
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5092 | 0.1 | 50 | 0.4385 | 0.7533 |
| 0.2807 | 0.2 | 100 | 0.2225 | 0.9 |
| 0.1881 | 0.3 | 150 | 0.1531 | 0.94 |
| 0.1895 | 0.4 | 200 | 0.1426 | 0.94 |
| 0.1995 | 0.5 | 250 | 0.1428 | 0.94 |
| 0.1745 | 0.59 | 300 | 0.1538 | 0.9267 |
| 0.1679 | 0.69 | 350 | 0.1249 | 0.9533 |
| 0.199 | 0.79 | 400 | 0.1327 | 0.9467 |
| 0.1703 | 0.89 | 450 | 0.1488 | 0.92 |
| 0.1541 | 0.99 | 500 | 0.1772 | 0.9467 |
| 0.1436 | 1.09 | 550 | 0.1070 | 0.9667 |
| 0.1463 | 1.19 | 600 | 0.1165 | 0.9467 |
| 0.1309 | 1.29 | 650 | 0.1054 | 0.9733 |
| 0.097 | 1.39 | 700 | 0.1346 | 0.94 |
| 0.1307 | 1.49 | 750 | 0.1477 | 0.9467 |
| 0.1506 | 1.58 | 800 | 0.1311 | 0.9533 |
| 0.1386 | 1.68 | 850 | 0.1165 | 0.9667 |
| 0.1463 | 1.78 | 900 | 0.4207 | 0.9067 |
| 0.1202 | 1.88 | 950 | 0.1528 | 0.9667 |
| 0.1403 | 1.98 | 1000 | 0.1262 | 0.96 |
| 0.073 | 2.08 | 1050 | 0.1459 | 0.96 |
| 0.0713 | 2.18 | 1100 | 0.1747 | 0.9533 |
| 0.0814 | 2.28 | 1150 | 0.1953 | 0.9667 |
| 0.0935 | 2.38 | 1200 | 0.1888 | 0.9533 |
| 0.0685 | 2.48 | 1250 | 0.1562 | 0.9467 |
| 0.1154 | 2.57 | 1300 | 0.1806 | 0.96 |
| 0.1239 | 2.67 | 1350 | 0.1322 | 0.9533 |
| 0.1011 | 2.77 | 1400 | 0.2148 | 0.94 |
| 0.0718 | 2.87 | 1450 | 0.1686 | 0.96 |
| 0.1159 | 2.97 | 1500 | 0.1532 | 0.9533 |
| 0.0516 | 3.07 | 1550 | 0.1888 | 0.96 |
| 0.063 | 3.17 | 1600 | 0.1851 | 0.9467 |
| 0.068 | 3.27 | 1650 | 0.2775 | 0.94 |
| 0.0946 | 3.37 | 1700 | 0.1853 | 0.96 |
| 0.0606 | 3.47 | 1750 | 0.2148 | 0.9467 |
| 0.0663 | 3.56 | 1800 | 0.2091 | 0.9533 |
| 0.0474 | 3.66 | 1850 | 0.1702 | 0.9533 |
| 0.0585 | 3.76 | 1900 | 0.1660 | 0.96 |
| 0.0439 | 3.86 | 1950 | 0.2220 | 0.9533 |
| 0.0758 | 3.96 | 2000 | 0.1834 | 0.96 |
| 0.0497 | 4.06 | 2050 | 0.1707 | 0.9533 |
| 0.0412 | 4.16 | 2100 | 0.1948 | 0.9533 |
| 0.0338 | 4.26 | 2150 | 0.2039 | 0.9533 |
| 0.0796 | 4.36 | 2200 | 0.1797 | 0.9533 |
| 0.0727 | 4.46 | 2250 | 0.1986 | 0.9533 |
| 0.032 | 4.55 | 2300 | 0.1947 | 0.9467 |
| 0.0436 | 4.65 | 2350 | 0.1908 | 0.9467 |
| 0.0205 | 4.75 | 2400 | 0.1806 | 0.96 |
| 0.0326 | 4.85 | 2450 | 0.1835 | 0.96 |
| 0.0404 | 4.95 | 2500 | 0.1832 | 0.96 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
futuredatascience/from-classifier-v1
|
futuredatascience
| 2022-11-28T20:07:27Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-28T20:07:15Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 53 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 530,
"warmup_steps": 53,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
SwePalm/sd-class-butterflies-32
|
SwePalm
| 2022-11-28T20:01:43Z | 32 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T20:00:51Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of (not so?) cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("SwePalm/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
reubenjohn/stack-overflow-open-status-classifier-pt
|
reubenjohn
| 2022-11-28T20:01:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-16T03:44:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: stack-overflow-open-status-classifier-pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stack-overflow-open-status-classifier-pt
This model is a fine-tuned version of [reubenjohn/stack-overflow-open-status-classifier-pt](https://huggingface.co/reubenjohn/stack-overflow-open-status-classifier-pt) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9448
- eval_runtime: 3.554
- eval_samples_per_second: 28.137
- eval_steps_per_second: 0.563
- epoch: 0.01
- step: 60
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 1
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
motmono/a2c-AntBulletEnv-v0
|
motmono
| 2022-11-28T19:58:24Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-28T19:57:12Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1539.68 +/- 213.96
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
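Until the snippet above is filled in, a minimal loading sketch (the checkpoint filename inside the repository is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed to follow the usual "<algo>-<env_id>.zip" convention.
checkpoint = load_from_hub("motmono/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```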
|
pig4431/TUF_roBERTa_5E
|
pig4431
| 2022-11-28T19:55:07Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T19:48:29Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: TUF_roBERTa_5E
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TUF_roBERTa_5E
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2136
- Accuracy: 0.9667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4665 | 0.1 | 50 | 0.2587 | 0.9333 |
| 0.245 | 0.2 | 100 | 0.1355 | 0.96 |
| 0.2079 | 0.3 | 150 | 0.1454 | 0.9533 |
| 0.2098 | 0.4 | 200 | 0.1809 | 0.9533 |
| 0.1637 | 0.5 | 250 | 0.2299 | 0.94 |
| 0.1869 | 0.59 | 300 | 0.1324 | 0.9667 |
| 0.2202 | 0.69 | 350 | 0.1786 | 0.9467 |
| 0.2084 | 0.79 | 400 | 0.1541 | 0.9533 |
| 0.148 | 0.89 | 450 | 0.1790 | 0.9533 |
| 0.1945 | 0.99 | 500 | 0.1168 | 0.9667 |
| 0.1648 | 1.09 | 550 | 0.1153 | 0.96 |
| 0.1099 | 1.19 | 600 | 0.1239 | 0.96 |
| 0.1238 | 1.29 | 650 | 0.1486 | 0.9533 |
| 0.1067 | 1.39 | 700 | 0.1195 | 0.96 |
| 0.1324 | 1.49 | 750 | 0.1134 | 0.96 |
| 0.1128 | 1.58 | 800 | 0.1180 | 0.9667 |
| 0.1406 | 1.68 | 850 | 0.2081 | 0.9533 |
| 0.1516 | 1.78 | 900 | 0.1987 | 0.9533 |
| 0.1537 | 1.88 | 950 | 0.1644 | 0.96 |
| 0.0957 | 1.98 | 1000 | 0.1660 | 0.96 |
| 0.0699 | 2.08 | 1050 | 0.2057 | 0.9533 |
| 0.1007 | 2.18 | 1100 | 0.2336 | 0.9533 |
| 0.0677 | 2.28 | 1150 | 0.2399 | 0.9467 |
| 0.059 | 2.38 | 1200 | 0.2331 | 0.96 |
| 0.1051 | 2.48 | 1250 | 0.1974 | 0.9533 |
| 0.0778 | 2.57 | 1300 | 0.2857 | 0.9467 |
| 0.1099 | 2.67 | 1350 | 0.2641 | 0.9533 |
| 0.0747 | 2.77 | 1400 | 0.2219 | 0.9533 |
| 0.0874 | 2.87 | 1450 | 0.2780 | 0.9533 |
| 0.0675 | 2.97 | 1500 | 0.1993 | 0.96 |
| 0.052 | 3.07 | 1550 | 0.1918 | 0.96 |
| 0.0214 | 3.17 | 1600 | 0.2410 | 0.96 |
| 0.0512 | 3.27 | 1650 | 0.2353 | 0.96 |
| 0.0548 | 3.37 | 1700 | 0.2722 | 0.9533 |
| 0.0554 | 3.47 | 1750 | 0.1593 | 0.9733 |
| 0.0742 | 3.56 | 1800 | 0.2568 | 0.96 |
| 0.064 | 3.66 | 1850 | 0.2358 | 0.96 |
| 0.052 | 3.76 | 1900 | 0.2161 | 0.9667 |
| 0.0349 | 3.86 | 1950 | 0.2497 | 0.96 |
| 0.0868 | 3.96 | 2000 | 0.1834 | 0.9667 |
| 0.0445 | 4.06 | 2050 | 0.2441 | 0.9533 |
| 0.0388 | 4.16 | 2100 | 0.2136 | 0.9667 |
| 0.0484 | 4.26 | 2150 | 0.2114 | 0.9667 |
| 0.0263 | 4.36 | 2200 | 0.2325 | 0.96 |
| 0.0409 | 4.46 | 2250 | 0.2454 | 0.9533 |
| 0.0324 | 4.55 | 2300 | 0.2105 | 0.9667 |
| 0.0295 | 4.65 | 2350 | 0.2118 | 0.9667 |
| 0.0372 | 4.75 | 2400 | 0.2005 | 0.9667 |
| 0.0294 | 4.85 | 2450 | 0.2057 | 0.9667 |
| 0.0354 | 4.95 | 2500 | 0.2136 | 0.9667 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
UKP-SQuARE/tweac_16
|
UKP-SQuARE
| 2022-11-28T19:43:48Z | 102 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"QA",
"en",
"dataset:BoolQ",
"dataset:CommonSenseQA",
"dataset:DROP",
"dataset:DuoRC",
"dataset:HellaSWAG",
"dataset:HotpotQA",
"dataset:HybridQA",
"dataset:NarrativeQA",
"dataset:NaturalQuestionsShort",
"dataset:NewsQA",
"dataset:QAMR",
"dataset:RACE",
"dataset:SearchQA",
"dataset:SIQA",
"dataset:SQuAD",
"dataset:TriviaQA-web",
"arxiv:2104.07081",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-09T18:34:07Z |
---
language:
- en
tags:
- QA
license: cc-by-4.0
datasets:
- BoolQ
- CommonSenseQA
- DROP
- DuoRC
- HellaSWAG
- HotpotQA
- HybridQA
- NarrativeQA
- NaturalQuestionsShort
- NewsQA
- QAMR
- RACE
- SearchQA
- SIQA
- SQuAD
- TriviaQA-web
metrics:
- Accuracy
- Precision
- Recall
- F1
- MRR
- R@3
- R@5
---
BERT for Sequence Classification trained on the QA dataset prediction task.
- Input: a question.
- Output: the QA dataset the question comes from.
Original paper: TWEAC: Transformer with Extendable QA Agent Classifiers
https://arxiv.org/abs/2104.07081
Datasets used for training:
```
list_datasets = ['BoolQ','CommonSenseQA','DROP','DuoRC','HellaSWAG','HotpotQA','HybridQA','NarrativeQA','NaturalQuestionsShort','NewsQA','QAMR','RACE','SearchQA','SIQA','SQuAD','TriviaQA-web']
```
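A minimal classification sketch (not part of the original card; the predicted labels are expected to correspond to the dataset names above, but check the model's `id2label` mapping to confirm):
```python
from transformers import pipeline

# Classify which QA dataset a question most likely comes from.
clf = pipeline("text-classification", model="UKP-SQuARE/tweac_16")
print(clf("Who wrote the novel Moby Dick?"))
```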
Results for all datasets:
- Accuracy: 0.7919096825783123
- Precision: 0.731586272892176
- Recall: 0.7919096825783123
- F1: 0.7494425609552463
- MRR: 0.8720871733637521
- R@3: 0.9438690810655046
- R@5: 0.9745318608004427
- Queries/second: 6052.33538824659
Results per dataset:
```
"BoolQ": {
"accuracy": 0.998776758409786,
"mrr": 0.999388379204893,
"r@3": 1.0,
"r@5": 1.0,
"query_per_second": 6978.947907596168,
"precision": 0.8649364406779662,
"recall": 0.998776758409786,
"f1": 0.9270508089696281
},
"CommonSenseQA": {
"accuracy": 0.9247135842880524,
"mrr": 0.9476358338878795,
"r@3": 0.9705400981996727,
"r@5": 0.9705400981996727,
"query_per_second": 5823.984138936813,
"precision": 0.442443226311668,
"recall": 0.9247135842880524,
"f1": 0.5985169491525425
},
"DROP": {
"accuracy": 0.9075083892617449,
"mrr": 0.9378200367399193,
"r@3": 0.9609899328859061,
"r@5": 0.9786073825503355,
"query_per_second": 6440.988897129248,
"precision": 0.8636726546906187,
"recall": 0.9075083892617449,
"f1": 0.8850480670893842
},
"DuoRC": {
"accuracy": 0.5555803405457654,
"mrr": 0.7368963429107307,
"r@3": 0.9092125808610305,
"r@5": 0.9596996059186557,
"query_per_second": 6853.643198794893,
"precision": 0.646814404432133,
"recall": 0.5555803405457654,
"f1": 0.5977360905563778
},
"HellaSWAG": {
"accuracy": 0.998406691894045,
"mrr": 0.9990705702715262,
"r@3": 1.0,
"r@5": 1.0,
"query_per_second": 3091.5012960785157,
"precision": 0.9974134500596896,
"recall": 0.998406691894045,
"f1": 0.9979098238280083
},
"HotpotQA": {
"accuracy": 0.7414435784479837,
"mrr": 0.8435804344945315,
"r@3": 0.9325652321247034,
"r@5": 0.973568281938326,
"query_per_second": 4972.668019223381,
"precision": 0.7352150537634409,
"recall": 0.7414435784479837,
"f1": 0.7383161801923401
},
"HybridQA": {
"accuracy": 0.7934218118869013,
"mrr": 0.8806947764680021,
"r@3": 0.964800923254472,
"r@5": 0.9930755914598961,
"query_per_second": 4886.494046259562,
"precision": 0.7198952879581152,
"recall": 0.7934218118869013,
"f1": 0.7548723579467472
},
"NarrativeQA": {
"accuracy": 0.5623756749076442,
"mrr": 0.7416681781060867,
"r@3": 0.9011082693947144,
"r@5": 0.9580373212086767,
"query_per_second": 7081.067049796865,
"precision": 0.5623224095472628,
"recall": 0.5623756749076442,
"f1": 0.5623490409661377
},
"NaturalQuestionsShort": {
"accuracy": 0.7985353692739171,
"mrr": 0.8743599435345307,
"r@3": 0.9439077594266126,
"r@5": 0.9774072919912745,
"query_per_second": 7136.590426649795,
"precision": 0.7963020509633313,
"recall": 0.7985353692739171,
"f1": 0.7974171464135678
},
"NewsQA": {
"accuracy": 0.5375118708452041,
"mrr": 0.71192075967717,
"r@3": 0.855650522317189,
"r@5": 0.939696106362773,
"query_per_second": 7193.851409052092,
"precision": 0.18757249378624688,
"recall": 0.5375118708452041,
"f1": 0.2780985136961061
},
"QAMR": {
"accuracy": 0.6658497602557272,
"mrr": 0.7969741223377345,
"r@3": 0.9207778369738945,
"r@5": 0.973361747469366,
"query_per_second": 7321.775044800525,
"precision": 0.8654525309881587,
"recall": 0.6658497602557272,
"f1": 0.7526421968624852
},
"RACE": {
"accuracy": 0.8771538617474154,
"mrr": 0.917901778042666,
"r@3": 0.9489154672613015,
"r@5": 0.9693898236367322,
"query_per_second": 6952.225120744351,
"precision": 0.8767983789260385,
"recall": 0.8771538617474154,
"f1": 0.8769760843129306
},
"SearchQA": {
"accuracy": 0.9762073027090695,
"mrr": 0.9865069592101393,
"r@3": 0.9972909305064782,
"r@5": 0.9984687868080094,
"query_per_second": 4031.0193826035634,
"precision": 0.9870191735143503,
"recall": 0.9762073027090695,
"f1": 0.9815834665719192
},
"SIQA": {
"accuracy": 0.9969293756397134,
"mrr": 0.9977823268509042,
"r@3": 0.9979529170931423,
"r@5": 1.0,
"query_per_second": 6711.547709005977,
"precision": 0.9329501915708812,
"recall": 0.9969293756397134,
"f1": 0.9638792676892627
},
"SQuAD": {
"accuracy": 0.550628092881614,
"mrr": 0.7164538452390565,
"r@3": 0.8660068519223448,
"r@5": 0.9366197183098591,
"query_per_second": 7033.420124363291,
"precision": 0.48613678373382624,
"recall": 0.550628092881614,
"f1": 0.5163766175814368
},
"TriviaQA-web": {
"accuracy": 0.7855124582584125,
"mrr": 0.8647404868442627,
"r@3": 0.9321859748266119,
"r@5": 0.9640380169535063,
"query_per_second": 4327.642440910395,
"precision": 0.7404358353510896,
"recall": 0.7855124582584125,
"f1": 0.7623083634550667
},
```
|
altsoph/xlmr-AER
|
altsoph
| 2022-11-28T19:22:35Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"nlp",
"roberta",
"xlmr",
"classifier",
"aer",
"narrative",
"entity recognition",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-27T22:41:15Z |
---
language:
- en
thumbnail: https://raw.githubusercontent.com/altsoph/misc/main/imgs/aer_logo.png
tags:
- nlp
- roberta
- xlmr
- classifier
- aer
- narrative
- entity recognition
license: mit
---
An XLM-RoBERTa-based language model fine-tuned for AER (Actionable Entities Recognition) -- recognition of entities that protagonists could interact with for further plot development.
We used 5K+ locations from 1K interactive text fiction games and extracted textual descriptions of locations and lists of actionable entities in them.
The resulting [BAER dataset is available here](https://github.com/altsoph/BAER). Then we used it to train this model.
Example usage:
```py
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline
MODEL_NAME = "altsoph/xlmr-AER"
text = """This bedroom is extremely spare, with dirty laundry scattered haphazardly all over the floor. Cleaner clothing can be found in the dresser.
A bathroom lies to the south, while a door to the east leads to the living room."""
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
pipe = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple", ignore_labels=['O','PAD'])
entities = pipe(text)
print(entities)
```
If you use the model, please cite the following:
```
@inproceedings{Tikhonov-etal-2022-AER,
title = "Actionable Entities Recognition Benchmark for Interactive Fiction",
author = "Alexey Tikhonov and Ivan P. Yamshchikov",
year = "2022",
}
```
|
leonrafael29/bert2bert_uncased_english_to_spanish
|
leonrafael29
| 2022-11-28T18:52:56Z | 13 | 0 |
transformers
|
[
"transformers",
"encoder-decoder",
"text2text-generation",
"translation",
"en",
"es",
"dataset:news_commentary",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-11-28T17:32:46Z |
---
language:
- en
- es
tags:
- translation
datasets:
- news_commentary
metrics:
- bleurt
---
|
Dagar/t5-small-science-papers-NIPS
|
Dagar
| 2022-11-28T18:21:27Z | 107 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-28T18:00:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-science-papers-NIPS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-science-papers-NIPS
This model is a fine-tuned version of [Dagar/t5-small-science-papers](https://huggingface.co/Dagar/t5-small-science-papers) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7566
- Rouge1: 15.7066
- Rouge2: 2.5654
- Rougel: 11.4679
- Rougelsum: 14.4017
- Gen Len: 19.0
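A minimal summarization sketch (not part of the auto-generated card; the input text is a placeholder abstract):
```python
from transformers import pipeline

# Summarize scientific text with the fine-tuned T5 checkpoint.
summarizer = pipeline("summarization", model="Dagar/t5-small-science-papers-NIPS")

paper_text = "Deep neural networks have achieved strong results on many benchmarks ..."  # placeholder input
print(summarizer(paper_text, max_length=60, min_length=10)[0]["summary_text"])
```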
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 318 | 5.1856 | 13.7172 | 2.0644 | 10.2189 | 12.838 | 19.0 |
| 5.4522 | 2.0 | 636 | 5.0383 | 15.6211 | 2.1808 | 11.3561 | 14.3054 | 19.0 |
| 5.4522 | 3.0 | 954 | 4.9486 | 15.1659 | 2.3308 | 11.1052 | 13.9456 | 19.0 |
| 5.1254 | 4.0 | 1272 | 4.8851 | 15.716 | 2.4099 | 11.4954 | 14.5099 | 19.0 |
| 4.9794 | 5.0 | 1590 | 4.8456 | 15.5507 | 2.4267 | 11.3867 | 14.3237 | 19.0 |
| 4.9794 | 6.0 | 1908 | 4.8073 | 15.8406 | 2.4254 | 11.6878 | 14.6154 | 19.0 |
| 4.8823 | 7.0 | 2226 | 4.7872 | 15.5554 | 2.4637 | 11.3401 | 14.3183 | 19.0 |
| 4.8338 | 8.0 | 2544 | 4.7680 | 15.4783 | 2.4888 | 11.3364 | 14.2031 | 19.0 |
| 4.8338 | 9.0 | 2862 | 4.7621 | 15.958 | 2.5662 | 11.6139 | 14.6576 | 19.0 |
| 4.7838 | 10.0 | 3180 | 4.7566 | 15.7066 | 2.5654 | 11.4679 | 14.4017 | 19.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
FrancoisDongier/sd-class-butterflies-32
|
FrancoisDongier
| 2022-11-28T18:19:31Z | 34 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T18:16:21Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("FrancoisDongier/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
ashu1318/lilt-en-funsd
|
ashu1318
| 2022-11-28T18:17:59Z | 80 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"lilt",
"token-classification",
"generated_from_trainer",
"dataset:funsd-layoutlmv3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-28T17:49:59Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- funsd-layoutlmv3
model-index:
- name: lilt-en-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-funsd
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8731
- Answer: {'precision': 0.8688915375446961, 'recall': 0.8922888616891065, 'f1': 0.8804347826086957, 'number': 817}
- Header: {'precision': 0.638095238095238, 'recall': 0.5630252100840336, 'f1': 0.5982142857142857, 'number': 119}
- Question: {'precision': 0.9105166051660517, 'recall': 0.9164345403899722, 'f1': 0.9134659879685332, 'number': 1077}
- Overall Precision: 0.8792
- Overall Recall: 0.8857
- Overall F1: 0.8825
- Overall Accuracy: 0.7976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4323 | 10.53 | 200 | 1.0423 | {'precision': 0.8369195922989807, 'recall': 0.9045287637698899, 'f1': 0.8694117647058823, 'number': 817} | {'precision': 0.5405405405405406, 'recall': 0.5042016806722689, 'f1': 0.5217391304347826, 'number': 119} | {'precision': 0.8869323447636701, 'recall': 0.8885793871866295, 'f1': 0.8877551020408162, 'number': 1077} | 0.8471 | 0.8723 | 0.8595 | 0.7981 |
| 0.045 | 21.05 | 400 | 1.2757 | {'precision': 0.8435374149659864, 'recall': 0.9106487148102815, 'f1': 0.8758092995879929, 'number': 817} | {'precision': 0.5795454545454546, 'recall': 0.42857142857142855, 'f1': 0.49275362318840576, 'number': 119} | {'precision': 0.8626943005181347, 'recall': 0.9275766016713092, 'f1': 0.8939597315436242, 'number': 1077} | 0.8430 | 0.8912 | 0.8665 | 0.8026 |
| 0.0133 | 31.58 | 600 | 1.4887 | {'precision': 0.8632075471698113, 'recall': 0.8959608323133414, 'f1': 0.8792792792792793, 'number': 817} | {'precision': 0.6020408163265306, 'recall': 0.4957983193277311, 'f1': 0.543778801843318, 'number': 119} | {'precision': 0.8791887125220459, 'recall': 0.9257195914577531, 'f1': 0.9018543645409318, 'number': 1077} | 0.8596 | 0.8882 | 0.8737 | 0.7983 |
| 0.0051 | 42.11 | 800 | 1.7382 | {'precision': 0.8601645123384254, 'recall': 0.8959608323133414, 'f1': 0.8776978417266187, 'number': 817} | {'precision': 0.5636363636363636, 'recall': 0.5210084033613446, 'f1': 0.5414847161572053, 'number': 119} | {'precision': 0.9032558139534884, 'recall': 0.9015784586815228, 'f1': 0.9024163568773235, 'number': 1077} | 0.8669 | 0.8768 | 0.8718 | 0.7925 |
| 0.004 | 52.63 | 1000 | 1.7599 | {'precision': 0.8307349665924276, 'recall': 0.9130966952264382, 'f1': 0.8699708454810495, 'number': 817} | {'precision': 0.6039603960396039, 'recall': 0.5126050420168067, 'f1': 0.5545454545454545, 'number': 119} | {'precision': 0.8939256572982774, 'recall': 0.9155060352831941, 'f1': 0.9045871559633027, 'number': 1077} | 0.8530 | 0.8907 | 0.8714 | 0.7941 |
| 0.002 | 63.16 | 1200 | 1.8409 | {'precision': 0.8312985571587126, 'recall': 0.9167686658506732, 'f1': 0.8719441210710128, 'number': 817} | {'precision': 0.6074766355140186, 'recall': 0.5462184873949579, 'f1': 0.575221238938053, 'number': 119} | {'precision': 0.8814949863263446, 'recall': 0.8978644382544104, 'f1': 0.8896044158233671, 'number': 1077} | 0.8461 | 0.8847 | 0.8650 | 0.7876 |
| 0.0013 | 73.68 | 1400 | 1.7795 | {'precision': 0.81445523193096, 'recall': 0.9241126070991432, 'f1': 0.8658256880733943, 'number': 817} | {'precision': 0.6237623762376238, 'recall': 0.5294117647058824, 'f1': 0.5727272727272728, 'number': 119} | {'precision': 0.888785046728972, 'recall': 0.883008356545961, 'f1': 0.8858872845831393, 'number': 1077} | 0.8432 | 0.8788 | 0.8606 | 0.7934 |
| 0.0011 | 84.21 | 1600 | 1.8386 | {'precision': 0.8338833883388339, 'recall': 0.9277845777233782, 'f1': 0.8783314020857474, 'number': 817} | {'precision': 0.6597938144329897, 'recall': 0.5378151260504201, 'f1': 0.5925925925925926, 'number': 119} | {'precision': 0.8943985307621671, 'recall': 0.904363974001857, 'f1': 0.8993536472760849, 'number': 1077} | 0.8573 | 0.8922 | 0.8744 | 0.7945 |
| 0.0048 | 94.74 | 1800 | 1.8664 | {'precision': 0.8589595375722543, 'recall': 0.9094247246022031, 'f1': 0.8834720570749108, 'number': 817} | {'precision': 0.6504854368932039, 'recall': 0.5630252100840336, 'f1': 0.6036036036036037, 'number': 119} | {'precision': 0.9003656307129799, 'recall': 0.914577530176416, 'f1': 0.9074159373560571, 'number': 1077} | 0.8705 | 0.8917 | 0.8810 | 0.7927 |
| 0.0004 | 105.26 | 2000 | 1.8672 | {'precision': 0.8634772462077013, 'recall': 0.9057527539779682, 'f1': 0.8841099163679809, 'number': 817} | {'precision': 0.7093023255813954, 'recall': 0.5126050420168067, 'f1': 0.5951219512195123, 'number': 119} | {'precision': 0.8923076923076924, 'recall': 0.9155060352831941, 'f1': 0.9037580201649862, 'number': 1077} | 0.8726 | 0.8877 | 0.8801 | 0.7953 |
| 0.0005 | 115.79 | 2200 | 1.8731 | {'precision': 0.8688915375446961, 'recall': 0.8922888616891065, 'f1': 0.8804347826086957, 'number': 817} | {'precision': 0.638095238095238, 'recall': 0.5630252100840336, 'f1': 0.5982142857142857, 'number': 119} | {'precision': 0.9105166051660517, 'recall': 0.9164345403899722, 'f1': 0.9134659879685332, 'number': 1077} | 0.8792 | 0.8857 | 0.8825 | 0.7976 |
| 0.0002 | 126.32 | 2400 | 1.9408 | {'precision': 0.8408071748878924, 'recall': 0.9179926560587516, 'f1': 0.8777062609713283, 'number': 817} | {'precision': 0.6310679611650486, 'recall': 0.5462184873949579, 'f1': 0.5855855855855856, 'number': 119} | {'precision': 0.9091760299625468, 'recall': 0.9015784586815228, 'f1': 0.9053613053613054, 'number': 1077} | 0.8657 | 0.8872 | 0.8763 | 0.7935 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
kejian/final-filter-again
|
kejian
| 2022-11-28T17:39:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-11-28T01:33:32Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: kejian/final-filter-again
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kejian/final-filter-again
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'filter_threshold': 0.002361,
'is_split_by_sentences': True},
'generation': {'batch_size': 64,
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 512},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_samples': 512,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'codeparrot/codeparrot-small'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'kejian/final-filter-again',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0008,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 5000,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/25z4zfy3
|
mostafahaggag/sd-class-butterflies-32
|
mostafahaggag
| 2022-11-28T17:37:32Z | 34 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T17:37:23Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("mostafahaggag/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
alexziweiwang/retrain_epoch2and3
|
alexziweiwang
| 2022-11-28T17:31:08Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2022-11-28T17:14:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: retrain_epoch2and3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# retrain_epoch2and3
This model is a fine-tuned version of [alexziweiwang/retrain_first1epoch](https://huggingface.co/alexziweiwang/retrain_first1epoch) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4888
- Acc: 0.24
- Wer: 1.0
- Correct: 48
- Total: 200
- Strlen: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | Wer | Correct | Total | Strlen |
|:-------------:|:-----:|:----:|:---------------:|:----:|:---:|:-------:|:-----:|:------:|
| No log | 0.02 | 5 | 7.8479 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.6019 | 0.04 | 10 | 7.4765 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.6019 | 0.06 | 15 | 7.1196 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.3222 | 0.08 | 20 | 6.8029 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.3222 | 0.11 | 25 | 6.5210 | 0.24 | 1.0 | 48 | 200 | 200 |
| 6.2645 | 0.13 | 30 | 6.2630 | 0.24 | 1.0 | 48 | 200 | 200 |
| 6.2645 | 0.15 | 35 | 6.0213 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.8699 | 0.17 | 40 | 5.8096 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.8699 | 0.19 | 45 | 5.5831 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.7145 | 0.21 | 50 | 5.3644 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.7145 | 0.23 | 55 | 5.1777 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.3702 | 0.25 | 60 | 5.0257 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.3702 | 0.27 | 65 | 4.8642 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.1896 | 0.3 | 70 | 4.7205 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.1896 | 0.32 | 75 | 4.5846 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.0615 | 0.34 | 80 | 4.4313 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.0615 | 0.36 | 85 | 4.2923 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.5189 | 0.38 | 90 | 4.1662 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.5189 | 0.4 | 95 | 4.0545 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.4911 | 0.42 | 100 | 3.9585 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.4911 | 0.44 | 105 | 3.8489 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.1997 | 0.46 | 110 | 3.7573 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.1997 | 0.48 | 115 | 3.6722 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.7348 | 0.51 | 120 | 3.5844 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.7348 | 0.53 | 125 | 3.4980 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.8042 | 0.55 | 130 | 3.4318 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.8042 | 0.57 | 135 | 3.3690 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.705 | 0.59 | 140 | 3.3126 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.705 | 0.61 | 145 | 3.2630 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.763 | 0.63 | 150 | 3.2063 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.763 | 0.65 | 155 | 3.1562 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.5585 | 0.67 | 160 | 3.1096 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.5585 | 0.7 | 165 | 3.0719 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.213 | 0.72 | 170 | 3.0373 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.213 | 0.74 | 175 | 3.0035 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.2874 | 0.76 | 180 | 2.9712 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.2874 | 0.78 | 185 | 2.9405 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.3327 | 0.8 | 190 | 2.9134 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.3327 | 0.82 | 195 | 2.8910 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.2382 | 0.84 | 200 | 2.8672 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.2382 | 0.86 | 205 | 2.8462 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.0069 | 0.89 | 210 | 2.8260 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.0069 | 0.91 | 215 | 2.8087 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.2288 | 0.93 | 220 | 2.7920 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.2288 | 0.95 | 225 | 2.7750 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.787 | 0.97 | 230 | 2.7557 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.787 | 0.99 | 235 | 2.7367 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.9717 | 1.01 | 240 | 2.7207 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.9717 | 1.03 | 245 | 2.7063 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.9269 | 1.05 | 250 | 2.6939 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.9269 | 1.08 | 255 | 2.6831 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.8771 | 1.1 | 260 | 2.6709 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.8771 | 1.12 | 265 | 2.6594 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.0474 | 1.14 | 270 | 2.6472 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.0474 | 1.16 | 275 | 2.6361 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7652 | 1.18 | 280 | 2.6268 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7652 | 1.2 | 285 | 2.6184 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.8322 | 1.22 | 290 | 2.6106 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.8322 | 1.24 | 295 | 2.6034 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6464 | 1.27 | 300 | 2.5957 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6464 | 1.29 | 305 | 2.5877 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7974 | 1.31 | 310 | 2.5805 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7974 | 1.33 | 315 | 2.5748 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.797 | 1.35 | 320 | 2.5698 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.797 | 1.37 | 325 | 2.5644 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7508 | 1.39 | 330 | 2.5595 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7508 | 1.41 | 335 | 2.5537 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7188 | 1.43 | 340 | 2.5486 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7188 | 1.46 | 345 | 2.5434 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6889 | 1.48 | 350 | 2.5377 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6889 | 1.5 | 355 | 2.5336 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6373 | 1.52 | 360 | 2.5300 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6373 | 1.54 | 365 | 2.5258 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.765 | 1.56 | 370 | 2.5219 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.765 | 1.58 | 375 | 2.5181 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6407 | 1.6 | 380 | 2.5144 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6407 | 1.62 | 385 | 2.5113 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7727 | 1.64 | 390 | 2.5093 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7727 | 1.67 | 395 | 2.5076 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.8091 | 1.69 | 400 | 2.5060 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.8091 | 1.71 | 405 | 2.5042 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7204 | 1.73 | 410 | 2.5027 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7204 | 1.75 | 415 | 2.5011 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6168 | 1.77 | 420 | 2.4987 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6168 | 1.79 | 425 | 2.4965 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6947 | 1.81 | 430 | 2.4947 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6947 | 1.83 | 435 | 2.4932 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7495 | 1.86 | 440 | 2.4921 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7495 | 1.88 | 445 | 2.4911 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7413 | 1.9 | 450 | 2.4904 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.7413 | 1.92 | 455 | 2.4897 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6498 | 1.94 | 460 | 2.4893 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6498 | 1.96 | 465 | 2.4890 | 0.24 | 1.0 | 48 | 200 | 200 |
| 2.6891 | 1.98 | 470 | 2.4888 | 0.24 | 1.0 | 48 | 200 | 200 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
wa3dbk/whisper-small-hi
|
wa3dbk
| 2022-11-28T17:12:02Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-24T16:19:34Z |
## whisper-small-hi
This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset (language=Hindi).
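A minimal transcription sketch (not part of the original card; the audio path is a placeholder for a 16 kHz recording):
```python
from transformers import pipeline

# Hindi speech recognition with the fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="wa3dbk/whisper-small-hi")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```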
|
antgrutta/sd-class-butterflies-32
|
antgrutta
| 2022-11-28T16:59:10Z | 32 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T16:58:32Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("antgrutta/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
EmnaBou/bert-finetuned-DT
|
EmnaBou
| 2022-11-28T16:49:12Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-28T15:20:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-DT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-DT
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6697
- Precision: 0.2381
- Recall: 0.0321
- F1: 0.0565
- Accuracy: 0.8179
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 99 | 0.7505 | 0.0 | 0.0 | 0.0 | 0.8196 |
| No log | 2.0 | 198 | 0.7033 | 0.0 | 0.0 | 0.0 | 0.8196 |
| No log | 3.0 | 297 | 0.6697 | 0.2381 | 0.0321 | 0.0565 | 0.8179 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
KidTheCat/sd-class-butterflies-32
|
KidTheCat
| 2022-11-28T16:20:46Z | 35 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T16:17:50Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("KidTheCat/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
luisgasco/distilbert-base-uncased-finetuned-emotion
|
luisgasco
| 2022-11-28T16:17:49Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T16:03:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.892
- name: F1
type: f1
value: 0.8873822002431591
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3693
- Accuracy: 0.892
- F1: 0.8874
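A minimal inference sketch (not part of the auto-generated card; the example sentence is a placeholder and the label names come from the model's config):
```python
from transformers import pipeline

# Emotion classification with the fine-tuned DistilBERT checkpoint.
classifier = pipeline("text-classification", model="luisgasco/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe how happy this makes me!"))
```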
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.5715 | 0.8275 | 0.8047 |
| 0.7552 | 2.0 | 250 | 0.3693 | 0.892 | 0.8874 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
AdelZakirov/sd-class-butterflies-42
|
AdelZakirov
| 2022-11-28T15:53:06Z | 35 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T15:52:36Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("AdelZakirov/sd-class-butterflies-42")
image = pipeline().images[0]
image
```
|
lucascruz/ppo_lunarlander
|
lucascruz
| 2022-11-28T15:48:46Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-27T14:13:19Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.73 +/- 20.17
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
SYH99999/autotrain-translator-2261971987
|
SYH99999
| 2022-11-28T15:30:31Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"translation",
"ja",
"en",
"dataset:SYH99999/autotrain-data-translator-3c03831c-5fcf2e86-839aa322-a7658498-cb30b55a-eefc0458",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-11-28T11:53:31Z |
---
tags:
- autotrain
- translation
language:
- ja
- en
datasets:
- SYH99999/autotrain-data-translator-3c03831c-5fcf2e86-839aa322-a7658498-cb30b55a-eefc0458
co2_eq_emissions:
emissions: 234.5986254372695
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 2261971987
- CO2 Emissions (in grams): 234.5986
## Validation Metrics
- Loss: 4.237
- SacreBLEU: 0.697
- Gen len: 256.387
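A minimal usage sketch, assuming the standard 🤗 Transformers seq2seq API (not part of the original card; the Japanese sentence is a placeholder input):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Japanese -> English translation with the AutoTrain-exported checkpoint.
tokenizer = AutoTokenizer.from_pretrained("SYH99999/autotrain-translator-2261971987")
model = AutoModelForSeq2SeqLM.from_pretrained("SYH99999/autotrain-translator-2261971987")

inputs = tokenizer("今日はいい天気ですね。", return_tensors="pt")
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```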
|
arrandi/sd-class-butterflies-32
|
arrandi
| 2022-11-28T15:24:36Z | 32 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T15:23:56Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("arrandi/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
ViktorDo/DistilBERT-POWO_MGH_Epiphyte_Finetuned
|
ViktorDo
| 2022-11-28T15:24:34Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T15:08:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DistilBERT-POWO_MGH_Epiphyte_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT-POWO_MGH_Epiphyte_Finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0749
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0824 | 1.0 | 1931 | 0.0807 |
| 0.0768 | 2.0 | 3862 | 0.0747 |
| 0.0664 | 3.0 | 5793 | 0.0749 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ConvLab/ddpt-policy-0.01multiwoz21
|
ConvLab
| 2022-11-28T15:20:35Z | 0 | 0 | null |
[
"dialogue policy",
"task-oriented dialog",
"en",
"dataset:ConvLab/sgd",
"license:apache-2.0",
"region:us"
] | null | 2022-11-28T15:18:28Z |
---
language:
- en
license: apache-2.0
tags:
- dialogue policy
- task-oriented dialog
datasets:
- ConvLab/sgd
---
# ddpt-policy-0.01multiwoz21
This is a DDPT model (https://aclanthology.org/2022.coling-1.21/) trained on 1 percent of [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21)
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- seed: 0
- optimizer: Adam
- num_epochs: 40
- use checkpoint which performed best on validation set
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu111
|
ConvLab/mle-policy-multiwoz21
|
ConvLab
| 2022-11-28T15:11:19Z | 0 | 0 | null |
[
"dialogue policy",
"task-oriented dialog",
"en",
"dataset:ConvLab/multiwoz21",
"license:apache-2.0",
"region:us"
] | null | 2022-11-28T15:07:50Z |
---
language:
- en
license: apache-2.0
tags:
- dialogue policy
- task-oriented dialog
datasets:
- ConvLab/multiwoz21
---
# mle-policy-multiwoz21
This is an MLE model trained on [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21).
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- seed: 0
- optimizer: Adam
- num_epochs: 24
- use checkpoint which performed best on validation set
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu111
|
ViktorDo/DistilBERT-POWO_MGH_Growth_Form_Finetuned
|
ViktorDo
| 2022-11-28T15:04:56Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T14:48:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: DistilBERT-POWO_MGH_Growth_Form_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT-POWO_MGH_Growth_Form_Finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2379 | 1.0 | 2054 | 0.2241 |
| 0.2098 | 2.0 | 4108 | 0.2173 |
| 0.2168 | 3.0 | 6162 | 0.2182 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ConvLab/ddpt-policy-sgd
|
ConvLab
| 2022-11-28T15:01:15Z | 0 | 1 | null |
[
"dialogue policy",
"task-oriented dialog",
"en",
"dataset:ConvLab/sgd",
"license:apache-2.0",
"region:us"
] | null | 2022-11-28T13:21:09Z |
---
language:
- en
license: apache-2.0
tags:
- dialogue policy
- task-oriented dialog
datasets:
- ConvLab/sgd
---
# ddpt-policy-sgd
This is a DDPT model (https://aclanthology.org/2022.coling-1.21/) trained on [Schema-Guided Dialog](https://huggingface.co/datasets/ConvLab/sgd)
Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- seed: 0
- optimizer: Adam
- num_epochs: 1
- use checkpoint which performed best on validation set
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu111
|
alexziweiwang/pure-start-epoch1
|
alexziweiwang
| 2022-11-28T14:49:27Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2022-11-28T14:32:53Z |
---
tags:
- generated_from_trainer
model-index:
- name: pure-start-epoch1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pure-start-epoch1
This model is a fine-tuned version of [yongjian/wav2vec2-large-a](https://huggingface.co/yongjian/wav2vec2-large-a) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 21.0050
- Acc: 0.095
- Wer: 1.0
- Correct: 19
- Total: 200
- Strlen: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | Wer | Correct | Total | Strlen |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:------:|:-------:|:-----:|:------:|
| No log | 0.02 | 5 | 67.2752 | 0.0 | 1.0119 | 0 | 200 | 200 |
| 131.0548 | 0.04 | 10 | 66.2796 | 0.0 | 1.0257 | 0 | 200 | 200 |
| 131.0548 | 0.06 | 15 | 65.2071 | 0.005 | 1.0237 | 1 | 200 | 200 |
| 145.0859 | 0.08 | 20 | 64.0987 | 0.035 | 1.0198 | 7 | 200 | 200 |
| 145.0859 | 0.11 | 25 | 62.9734 | 0.07 | 1.0119 | 14 | 200 | 200 |
| 110.0012 | 0.13 | 30 | 61.8288 | 0.09 | 1.0119 | 18 | 200 | 200 |
| 110.0012 | 0.15 | 35 | 60.6565 | 0.09 | 1.0119 | 18 | 200 | 200 |
| 122.6164 | 0.17 | 40 | 59.4606 | 0.095 | 1.0119 | 19 | 200 | 200 |
| 122.6164 | 0.19 | 45 | 58.2224 | 0.095 | 1.0099 | 19 | 200 | 200 |
| 125.942 | 0.21 | 50 | 56.9514 | 0.095 | 1.0020 | 19 | 200 | 200 |
| 125.942 | 0.23 | 55 | 55.5923 | 0.095 | 1.0 | 19 | 200 | 200 |
| 111.2271 | 0.25 | 60 | 54.1423 | 0.095 | 1.0 | 19 | 200 | 200 |
| 111.2271 | 0.27 | 65 | 52.6174 | 0.095 | 1.0 | 19 | 200 | 200 |
| 137.2356 | 0.3 | 70 | 51.0340 | 0.095 | 1.0 | 19 | 200 | 200 |
| 137.2356 | 0.32 | 75 | 49.4034 | 0.095 | 1.0 | 19 | 200 | 200 |
| 112.2532 | 0.34 | 80 | 47.7291 | 0.095 | 1.0 | 19 | 200 | 200 |
| 112.2532 | 0.36 | 85 | 46.0281 | 0.095 | 1.0 | 19 | 200 | 200 |
| 88.3973 | 0.38 | 90 | 44.2361 | 0.095 | 1.0 | 19 | 200 | 200 |
| 88.3973 | 0.4 | 95 | 42.4925 | 0.095 | 1.0 | 19 | 200 | 200 |
| 88.7175 | 0.42 | 100 | 40.7673 | 0.095 | 1.0 | 19 | 200 | 200 |
| 88.7175 | 0.44 | 105 | 39.0848 | 0.095 | 1.0 | 19 | 200 | 200 |
| 90.857 | 0.46 | 110 | 37.4890 | 0.095 | 1.0 | 19 | 200 | 200 |
| 90.857 | 0.48 | 115 | 35.8966 | 0.095 | 1.0 | 19 | 200 | 200 |
| 77.5782 | 0.51 | 120 | 34.2822 | 0.1 | 1.0 | 20 | 200 | 200 |
| 77.5782 | 0.53 | 125 | 32.7953 | 0.1 | 1.0 | 20 | 200 | 200 |
| 80.2378 | 0.55 | 130 | 31.4560 | 0.1 | 1.0 | 20 | 200 | 200 |
| 80.2378 | 0.57 | 135 | 30.1651 | 0.1 | 1.0 | 20 | 200 | 200 |
| 73.5042 | 0.59 | 140 | 29.0069 | 0.095 | 1.0 | 19 | 200 | 200 |
| 73.5042 | 0.61 | 145 | 28.0349 | 0.095 | 1.0 | 19 | 200 | 200 |
| 71.5632 | 0.63 | 150 | 27.1812 | 0.095 | 1.0 | 19 | 200 | 200 |
| 71.5632 | 0.65 | 155 | 26.4012 | 0.095 | 1.0 | 19 | 200 | 200 |
| 76.5337 | 0.67 | 160 | 25.6924 | 0.095 | 1.0 | 19 | 200 | 200 |
| 76.5337 | 0.7 | 165 | 25.0184 | 0.095 | 1.0 | 19 | 200 | 200 |
| 54.6507 | 0.72 | 170 | 24.4100 | 0.095 | 1.0 | 19 | 200 | 200 |
| 54.6507 | 0.74 | 175 | 23.8273 | 0.095 | 1.0 | 19 | 200 | 200 |
| 57.1606 | 0.76 | 180 | 23.2988 | 0.095 | 1.0 | 19 | 200 | 200 |
| 57.1606 | 0.78 | 185 | 22.8731 | 0.095 | 1.0 | 19 | 200 | 200 |
| 56.0855 | 0.8 | 190 | 22.5336 | 0.095 | 1.0 | 19 | 200 | 200 |
| 56.0855 | 0.82 | 195 | 22.2334 | 0.095 | 1.0 | 19 | 200 | 200 |
| 55.2475 | 0.84 | 200 | 21.9555 | 0.095 | 1.0 | 19 | 200 | 200 |
| 55.2475 | 0.86 | 205 | 21.7112 | 0.095 | 1.0 | 19 | 200 | 200 |
| 47.9988 | 0.89 | 210 | 21.5123 | 0.095 | 1.0 | 19 | 200 | 200 |
| 47.9988 | 0.91 | 215 | 21.3407 | 0.095 | 1.0 | 19 | 200 | 200 |
| 55.1394 | 0.93 | 220 | 21.1965 | 0.095 | 1.0 | 19 | 200 | 200 |
| 55.1394 | 0.95 | 225 | 21.1028 | 0.095 | 1.0 | 19 | 200 | 200 |
| 48.0323 | 0.97 | 230 | 21.0376 | 0.095 | 1.0 | 19 | 200 | 200 |
| 48.0323 | 0.99 | 235 | 21.0050 | 0.095 | 1.0 | 19 | 200 | 200 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
Fabiuas/Animal-classifier
|
Fabiuas
| 2022-11-28T14:38:27Z | 311 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-28T14:37:59Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Animal-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9481481313705444
---
# Animal-classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
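A minimal inference sketch (not part of the autogenerated card; the image path is a placeholder, and a URL works as well):
```python
from transformers import pipeline

# ViT image classifier fine-tuned on the HuggingPics animal categories.
classifier = pipeline("image-classification", model="Fabiuas/Animal-classifier")
print(classifier("animal.jpg"))  # placeholder path or URL to an image
```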
## Example Images
#### bee

#### beetle

#### bird

#### butterfly

#### camel

#### cat

#### caterpillar

#### crab

#### dog

#### fly

#### grasshopper

#### horse

#### lizard

#### mosquito

#### mouse

#### snake

#### spider

#### whale

|
renesteeman/whisper-tiny-dutch
|
renesteeman
| 2022-11-28T14:29:27Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"nl",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-28T11:37:40Z |
---
language:
- nl
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Tiny Dutch
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: nl, split: test'
metrics:
- name: Wer
type: wer
value: 42.065535920433355
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Dutch
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7024
- Wer: 42.0655
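A minimal transcription sketch (not part of the auto-generated card; the audio path is a placeholder for a 16 kHz recording):
```python
from transformers import pipeline

# Dutch speech recognition with the fine-tuned Whisper Tiny checkpoint.
asr = pipeline("automatic-speech-recognition", model="renesteeman/whisper-tiny-dutch")
print(asr("opname.wav")["text"])  # "opname.wav" is a placeholder path
```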
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5563 | 0.78 | 500 | 0.7838 | 47.5002 |
| 0.3949 | 1.56 | 1000 | 0.7301 | 43.9570 |
| 0.2666 | 2.34 | 1500 | 0.7103 | 42.8426 |
| 0.2307 | 3.12 | 2000 | 0.7024 | 42.0655 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fathyshalab/bert-uncased-massive-intent-classification_banking-1
|
fathyshalab
| 2022-11-28T13:48:29Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-28T13:40:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-uncased-massive-intent-classification_banking-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-massive-intent-classification_banking-1
This model is a fine-tuned version of [gokuls/bert-uncased-massive-intent-classification](https://huggingface.co/gokuls/bert-uncased-massive-intent-classification) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6770
- Accuracy: 0.1378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8977 | 1.0 | 3 | 2.7353 | 0.0622 |
| 2.5889 | 2.0 | 6 | 2.7109 | 0.0933 |
| 2.4362 | 3.0 | 9 | 2.6940 | 0.1111 |
| 2.3175 | 4.0 | 12 | 2.6817 | 0.1333 |
| 2.2524 | 5.0 | 15 | 2.6770 | 0.1378 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jfjensen/sd-class-butterflies-32
|
jfjensen
| 2022-11-28T12:59:41Z | 37 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T12:58:55Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("jfjensen/sd-class-butterflies-32")
image = pipeline().images[0]
image
```
|
minhtoan/t5-small-vietnamese-news
|
minhtoan
| 2022-11-28T12:52:14Z | 122 | 4 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"vi",
"dataset:Wikilingua",
"dataset:Vietnews",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-11-24T08:01:28Z |
---
language: vi
datasets:
- Wikilingua
- Vietnews
tags:
- summarization
license: mit
widget:
- text: 'VKS cáo buộc ông Nguyễn Thế Hiệp có sai phạm trong vụ cháy gần Bệnh viện Nhi trung ương khiến 2 người chết, thiệt hại 1,9 tỷ đồng song bị cáo khẳng định vô tội. Mức án đề nghị 9-10 năm tù với bị cáo 73 tuổi được đại diện VKSND quận Ba Đình đưa ra chiều 28/11, quy buộc phạm tội Vi phạm quy định về phòng cháy chữa cháy, theo Điều 313 Bộ luật Hình sự. VKS nhận định ông Hiệp có lỗi trong việc vận hành nhà trọ không phép, không đủ điều kiện an toàn phòng cháy chữa cháy, gây thiệt hại về tài sản và khiến hai người chết. Tuy nhiên, bị cáo chưa bồi thường. Bản luận tội nêu, tại phiên tòa hôm nay ông Hiệp "chưa tỏ thái độ ăn năn hối hận, có nhân thân đặc biệt xấu". Từ hàng chục năm trước, ông từng 11 lần bị lập danh chỉ bản về hành vi trộm cắp, năm 1985 lại nhận 18 năm tù về các tội cướp tài sản, hiếp dâm, đưa hối lộ...'
inference:
parameters:
max_length: 150
---
# Text summarization for Vietnamese Language
State-of-the-art lightweight pretrained Transformer-based encoder-decoder model for Vietnamese.
The model was trained on the Vietnamese News dataset with input length = 512 and output length = 150.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# Example test data on VNExpress: https://vnexpress.net/ong-hiep-khung-khong-nhan-toi-trong-vu-chay-gan-benh-vien-nhi-4541483.html
tokenizer = AutoTokenizer.from_pretrained("minhtoan/t5-small-vietnamese-news")
model = AutoModelForSeq2SeqLM.from_pretrained("minhtoan/t5-small-vietnamese-news")
model.cuda()
src = 'VKS cáo buộc ông Nguyễn Thế Hiệp có sai phạm trong vụ cháy gần Bệnh viện Nhi trung ương khiến 2 người chết, thiệt hại 1,9 tỷ đồng song bị cáo khẳng định vô tội. Mức án đề nghị 9-10 năm tù với bị cáo 73 tuổi được đại diện VKSND quận Ba Đình đưa ra chiều 28/11, quy buộc phạm tội Vi phạm quy định về phòng cháy chữa cháy, theo Điều 313 Bộ luật Hình sự. VKS nhận định ông Hiệp có lỗi trong việc vận hành nhà trọ không phép, không đủ điều kiện an toàn phòng cháy chữa cháy, gây thiệt hại về tài sản và khiến hai người chết. Tuy nhiên, bị cáo chưa bồi thường. Bản luận tội nêu, tại phiên tòa hôm nay ông Hiệp "chưa tỏ thái độ ăn năn hối hận, có nhân thân đặc biệt xấu". Từ hàng chục năm trước, ông từng 11 lần bị lập danh chỉ bản về hành vi trộm cắp, năm 1985 lại nhận 18 năm tù về các tội cướp tài sản, hiếp dâm, đưa hối lộ...'
tokenized_text = tokenizer.encode(src, return_tensors="pt").cuda()
model.eval()
summary_ids = model.generate(tokenized_text, max_length=150)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
output
```
## Author
Phan Minh Toan
|
cardiffnlp/twitter-roberta-base-offensive
|
cardiffnlp
| 2022-11-28T11:36:23Z | 35,866 | 27 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"arxiv:2010.12421",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# Twitter-roBERTa-base for Offensive Language Identification
This is a roBERTa-base model trained on ~58M tweets and finetuned for offensive language identification with the TweetEval benchmark.
- Paper: [_TweetEval_ benchmark (Findings of EMNLP 2020)](https://arxiv.org/pdf/2010.12421.pdf).
- Git Repo: [Tweeteval official repository](https://github.com/cardiffnlp/tweeteval).
## Example of classification
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='offensive'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) not-offensive 0.9073
2) offensive 0.0927
```
|
biu-nlp/f-coref
|
biu-nlp
| 2022-11-28T11:35:52Z | 88,201 | 18 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fast",
"coreference-resolution",
"en",
"dataset:multi_news",
"dataset:ontonotes",
"arxiv:2209.04280",
"arxiv:2205.12644",
"arxiv:1907.10529",
"arxiv:2101.00434",
"arxiv:2109.04127",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-08-19T12:01:10Z |
---
language:
- en
tags:
- fast
- coreference-resolution
license: mit
datasets:
- multi_news
- ontonotes
metrics:
- CoNLL
task_categories:
- coreference-resolution
model-index:
- name: biu-nlp/f-coref
results:
- task:
type: coreference-resolution
name: coreference-resolution
dataset:
name: ontonotes
type: coreference
metrics:
- name: Avg. F1
type: CoNLL
value: 78.5
---
## F-Coref: Fast, Accurate and Easy to Use Coreference Resolution
[F-Coref](https://arxiv.org/abs/2209.04280) can process 2.8K OntoNotes documents in 25 seconds on a V100 GPU (compared to 6 minutes for the [LingMess](https://arxiv.org/abs/2205.12644) model and 12 minutes for the popular AllenNLP coreference model), with only a modest drop in accuracy.
The fast speed is achieved through a combination of distillation of a compact model from the LingMess model and an efficient batching implementation using a technique we call leftover batching.
Please check the [official repository](https://github.com/shon-otmazgin/fastcoref) for more details and updates.
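As a rough sketch of the intended usage through the `fastcoref` package (the class and method names below are assumptions based on the repository's README; consult it for the current API):
```python
from fastcoref import FCoref  # pip install fastcoref

# assumed API: FCoref loads this checkpoint by default
model = FCoref(device="cpu")
preds = model.predict(
    texts=["Alice told Bob that she would meet him at the station."]
)
print(preds[0].get_clusters())  # coreference clusters as text spans
```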
#### Experiments
| Model | Runtime | Memory |
|-----------------------|---------|---------|
| [Joshi et al. (2020)](https://arxiv.org/abs/1907.10529) | 12:06 | 27.4 |
| [Otmazgin et al. (2022)](https://arxiv.org/abs/2205.12644) | 06:43 | 4.6 |
| + Batching | 06:00 | 6.6 |
| [Kirstain et al. (2021)](https://arxiv.org/abs/2101.00434) | 04:37 | 4.4 |
| [Dobrovolskii (2021)](https://arxiv.org/abs/2109.04127) | 03:49 | 3.5 |
| [F-Coref](https://arxiv.org/abs/2209.04280) | 00:45 | 3.3 |
| + Batching | 00:35 | 4.5 |
| + Leftovers batching | 00:25 | 4.0 |
The inference time (Min:Sec) and memory (GiB) for each model on 2.8K documents, averaged over 3 runs. Hardware: NVIDIA Tesla V100 SXM2.
### Citation
```
@inproceedings{Otmazgin2022FcorefFA,
title={F-coref: Fast, Accurate and Easy to Use Coreference Resolution},
author={Shon Otmazgin and Arie Cattan and Yoav Goldberg},
booktitle={AACL},
year={2022}
}
```
[F-coref: Fast, Accurate and Easy to Use Coreference Resolution](https://aclanthology.org/2022.aacl-demo.6) (Otmazgin et al., AACL-IJCNLP 2022)
|
clp/vit-base-patch16-224-finetuned
|
clp
| 2022-11-28T11:29:17Z | 186 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-28T11:19:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.3333333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7617
- Accuracy: 0.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
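In the meantime, a minimal usage sketch (hypothetical; it assumes the repository ships the image processor config alongside the weights):
```python
from transformers import pipeline

# hypothetical example; the image path is a placeholder
classifier = pipeline("image-classification", model="clp/vit-base-patch16-224-finetuned")
print(classifier("example.jpg"))  # path or URL to an image
```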
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.6063 | 0.6667 |
| No log | 2.0 | 2 | 0.6958 | 0.3333 |
| No log | 3.0 | 3 | 0.7617 | 0.3333 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
eibakke/bert-finetuned-on-nq-short
|
eibakke
| 2022-11-28T10:41:08Z | 107 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-22T07:36:02Z |
Trained on the full NQ dataset.
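A minimal usage sketch (hypothetical; the question/context pair below is illustrative only):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="eibakke/bert-finetuned-on-nq-short")
print(qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France.",
))
```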
|
mn367/radio-mlm
|
mn367
| 2022-11-28T09:52:57Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-28T09:42:20Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mn367/radio-mlm
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mn367/radio-mlm
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.6630
- Validation Loss: 4.6014
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
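In the meantime, a minimal usage sketch (hypothetical; the checkpoint is TensorFlow-only, so the pipeline framework is pinned to "tf", and the example sentence is arbitrary):
```python
from transformers import pipeline

# hypothetical example; [MASK] is the mask token of the distilbert-base-uncased tokenizer
unmasker = pipeline("fill-mask", model="mn367/radio-mlm", framework="tf")
print(unmasker("The scan shows no evidence of [MASK]."))
```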
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 39000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.6630 | 4.6014 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
lewtun/sd-class-butterflies-32-test1
|
lewtun
| 2022-11-28T09:09:42Z | 36 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-11-28T08:47:06Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("lewtun/sd-class-butterflies-32-test1")
image = pipeline().images[0]
image
```
|
alexziweiwang/retrain_epoch2to5
|
alexziweiwang
| 2022-11-28T08:51:14Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2022-11-28T08:35:03Z |
---
tags:
- generated_from_trainer
model-index:
- name: retrain_epoch2to5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# retrain_epoch2to5
This model is a fine-tuned version of [alexziweiwang/retrain_first1epoch](https://huggingface.co/alexziweiwang/retrain_first1epoch) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3244
- Acc: 0.24
- Wer: 1.0
- Correct: 48
- Total: 200
- Strlen: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | Wer | Correct | Total | Strlen |
|:-------------:|:-----:|:----:|:---------------:|:----:|:---:|:-------:|:-----:|:------:|
| No log | 0.02 | 5 | 7.8494 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.6032 | 0.04 | 10 | 7.4834 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.6032 | 0.06 | 15 | 7.1350 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.3336 | 0.08 | 20 | 6.8284 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.3336 | 0.11 | 25 | 6.5577 | 0.24 | 1.0 | 48 | 200 | 200 |
| 6.2911 | 0.13 | 30 | 6.3124 | 0.24 | 1.0 | 48 | 200 | 200 |
| 6.2911 | 0.15 | 35 | 6.0850 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.9181 | 0.17 | 40 | 5.8888 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.9181 | 0.19 | 45 | 5.6815 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.7954 | 0.21 | 50 | 5.4834 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.7954 | 0.23 | 55 | 5.3099 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.4801 | 0.25 | 60 | 5.1678 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.4801 | 0.27 | 65 | 5.0223 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.3377 | 0.3 | 70 | 4.8893 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.3377 | 0.32 | 75 | 4.7743 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.2511 | 0.34 | 80 | 4.6494 | 0.24 | 1.0 | 48 | 200 | 200 |
| 5.2511 | 0.36 | 85 | 4.5307 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.727 | 0.38 | 90 | 4.4237 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.727 | 0.4 | 95 | 4.3263 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.7653 | 0.42 | 100 | 4.2439 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.7653 | 0.44 | 105 | 4.1589 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.4971 | 0.46 | 110 | 4.0847 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.4971 | 0.48 | 115 | 4.0118 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.0077 | 0.51 | 120 | 3.9382 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.0077 | 0.53 | 125 | 3.8663 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.1693 | 0.55 | 130 | 3.8106 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.1693 | 0.57 | 135 | 3.7580 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.0854 | 0.59 | 140 | 3.7123 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.0854 | 0.61 | 145 | 3.6720 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.1988 | 0.63 | 150 | 3.6260 | 0.24 | 1.0 | 48 | 200 | 200 |
| 4.1988 | 0.65 | 155 | 3.5853 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.9975 | 0.67 | 160 | 3.5463 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.9975 | 0.7 | 165 | 3.5122 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.6042 | 0.72 | 170 | 3.4862 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.6042 | 0.74 | 175 | 3.4631 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.7347 | 0.76 | 180 | 3.4406 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.7347 | 0.78 | 185 | 3.4202 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.8336 | 0.8 | 190 | 3.4014 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.8336 | 0.82 | 195 | 3.3855 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.7454 | 0.84 | 200 | 3.3703 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.7454 | 0.86 | 205 | 3.3576 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.525 | 0.89 | 210 | 3.3471 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.525 | 0.91 | 215 | 3.3392 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.8175 | 0.93 | 220 | 3.3331 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.8175 | 0.95 | 225 | 3.3289 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.307 | 0.97 | 230 | 3.3259 | 0.24 | 1.0 | 48 | 200 | 200 |
| 3.307 | 0.99 | 235 | 3.3244 | 0.24 | 1.0 | 48 | 200 | 200 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
rahul77/t5-small-finetuned-rahul-rough
|
rahul77
| 2022-11-28T08:50:11Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-28T07:27:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-rahul-rough
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-rahul-rough
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
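In the meantime, a minimal usage sketch (hypothetical; the ROUGE metrics below suggest a summarization-style task, and the T5 "summarize:" prefix is an assumption):
```python
from transformers import pipeline

# hypothetical example; prompt format and generation length are assumptions
summarizer = pipeline("text2text-generation", model="rahul77/t5-small-finetuned-rahul-rough")
print(summarizer("summarize: " + "Your input document goes here.", max_length=64))
```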
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 16 | 0.9994 | 26.1162 | 18.2666 | 23.7548 | 25.2106 | 19.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
alexziweiwang/retrain_first1epoch
|
alexziweiwang
| 2022-11-28T08:20:32Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2022-11-28T08:04:06Z |
---
tags:
- generated_from_trainer
model-index:
- name: retrain_first1epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# retrain_first1epoch
This model is a fine-tuned version of [alexziweiwang/exp21-uaspeech-foundation](https://huggingface.co/alexziweiwang/exp21-uaspeech-foundation) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.2238
- Acc: 0.24
- Wer: 1.0
- Correct: 48
- Total: 200
- Strlen: 200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc | Wer | Correct | Total | Strlen |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:------:|:-------:|:-----:|:------:|
| No log | 0.02 | 5 | 13.8048 | 0.22 | 1.0237 | 44 | 200 | 200 |
| 12.2209 | 0.04 | 10 | 13.6869 | 0.22 | 1.0257 | 44 | 200 | 200 |
| 12.2209 | 0.06 | 15 | 13.5691 | 0.225 | 1.0296 | 45 | 200 | 200 |
| 12.4299 | 0.08 | 20 | 13.4590 | 0.24 | 1.0375 | 48 | 200 | 200 |
| 12.4299 | 0.11 | 25 | 13.3508 | 0.235 | 1.0395 | 47 | 200 | 200 |
| 11.0298 | 0.13 | 30 | 13.2241 | 0.25 | 1.0375 | 50 | 200 | 200 |
| 11.0298 | 0.15 | 35 | 13.0757 | 0.245 | 1.0336 | 49 | 200 | 200 |
| 10.5248 | 0.17 | 40 | 12.9277 | 0.245 | 1.0316 | 49 | 200 | 200 |
| 10.5248 | 0.19 | 45 | 12.7784 | 0.25 | 1.0316 | 50 | 200 | 200 |
| 10.8585 | 0.21 | 50 | 12.6346 | 0.25 | 1.0277 | 50 | 200 | 200 |
| 10.8585 | 0.23 | 55 | 12.4939 | 0.25 | 1.0277 | 50 | 200 | 200 |
| 10.7046 | 0.25 | 60 | 12.3472 | 0.25 | 1.0257 | 50 | 200 | 200 |
| 10.7046 | 0.27 | 65 | 12.1962 | 0.25 | 1.0237 | 50 | 200 | 200 |
| 10.8031 | 0.3 | 70 | 12.0537 | 0.25 | 1.0257 | 50 | 200 | 200 |
| 10.8031 | 0.32 | 75 | 11.9088 | 0.25 | 1.0237 | 50 | 200 | 200 |
| 10.859 | 0.34 | 80 | 11.7693 | 0.25 | 1.0257 | 50 | 200 | 200 |
| 10.859 | 0.36 | 85 | 11.6214 | 0.25 | 1.0198 | 50 | 200 | 200 |
| 9.7886 | 0.38 | 90 | 11.4699 | 0.25 | 1.0178 | 50 | 200 | 200 |
| 9.7886 | 0.4 | 95 | 11.3182 | 0.25 | 1.0138 | 50 | 200 | 200 |
| 10.4627 | 0.42 | 100 | 11.1609 | 0.25 | 1.0119 | 50 | 200 | 200 |
| 10.4627 | 0.44 | 105 | 11.0017 | 0.25 | 1.0138 | 50 | 200 | 200 |
| 10.0619 | 0.46 | 110 | 10.8520 | 0.25 | 1.0138 | 50 | 200 | 200 |
| 10.0619 | 0.48 | 115 | 10.7096 | 0.25 | 1.0138 | 50 | 200 | 200 |
| 8.7443 | 0.51 | 120 | 10.5629 | 0.25 | 1.0138 | 50 | 200 | 200 |
| 8.7443 | 0.53 | 125 | 10.4111 | 0.25 | 1.0119 | 50 | 200 | 200 |
| 9.675 | 0.55 | 130 | 10.2606 | 0.25 | 1.0119 | 50 | 200 | 200 |
| 9.675 | 0.57 | 135 | 10.1125 | 0.245 | 1.0119 | 49 | 200 | 200 |
| 9.1918 | 0.59 | 140 | 9.9708 | 0.24 | 1.0040 | 48 | 200 | 200 |
| 9.1918 | 0.61 | 145 | 9.8248 | 0.24 | 1.0040 | 48 | 200 | 200 |
| 9.6798 | 0.63 | 150 | 9.6785 | 0.24 | 1.0040 | 48 | 200 | 200 |
| 9.6798 | 0.65 | 155 | 9.5309 | 0.24 | 1.0040 | 48 | 200 | 200 |
| 9.0181 | 0.67 | 160 | 9.3867 | 0.24 | 1.0040 | 48 | 200 | 200 |
| 9.0181 | 0.7 | 165 | 9.2432 | 0.24 | 1.0040 | 48 | 200 | 200 |
| 7.7446 | 0.72 | 170 | 9.1053 | 0.24 | 1.0040 | 48 | 200 | 200 |
| 7.7446 | 0.74 | 175 | 8.9743 | 0.24 | 1.0040 | 48 | 200 | 200 |
| 8.0251 | 0.76 | 180 | 8.8538 | 0.24 | 1.0040 | 48 | 200 | 200 |
| 8.0251 | 0.78 | 185 | 8.7473 | 0.24 | 1.0020 | 48 | 200 | 200 |
| 7.9652 | 0.8 | 190 | 8.6516 | 0.24 | 1.0020 | 48 | 200 | 200 |
| 7.9652 | 0.82 | 195 | 8.5661 | 0.24 | 1.0020 | 48 | 200 | 200 |
| 7.9537 | 0.84 | 200 | 8.4887 | 0.24 | 1.0020 | 48 | 200 | 200 |
| 7.9537 | 0.86 | 205 | 8.4206 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.2889 | 0.89 | 210 | 8.3644 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.2889 | 0.91 | 215 | 8.3169 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.8974 | 0.93 | 220 | 8.2789 | 0.24 | 1.0 | 48 | 200 | 200 |
| 7.8974 | 0.95 | 225 | 8.2514 | 0.24 | 1.0 | 48 | 200 | 200 |
| 6.9118 | 0.97 | 230 | 8.2330 | 0.24 | 1.0 | 48 | 200 | 200 |
| 6.9118 | 0.99 | 235 | 8.2238 | 0.24 | 1.0 | 48 | 200 | 200 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
huggingtweets/bobkerns
|
huggingtweets
| 2022-11-28T08:14:20Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-28T08:14:12Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/3653376550/f40f9602f2e8e185eb7ddce332157ffe_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bob (Moderna #5) Kerns</div>
<div style="text-align: center; font-size: 14px;">@bobkerns</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bob (Moderna #5) Kerns.
| Data | Bob (Moderna #5) Kerns |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 315 |
| Short tweets | 42 |
| Tweets kept | 2877 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/390ksfue/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bobkerns's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3me25qi0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3me25qi0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bobkerns')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
linfuyou/bert-squad-training
|
linfuyou
| 2022-11-28T07:41:14Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-15T09:15:55Z |
bert-base-cased-squadv1.1-training
|
mtz2110/wav2vec2-large-xls-r-300m-he
|
mtz2110
| 2022-11-28T07:33:52Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-27T16:52:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-he
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: fleurs
type: fleurs
config: he_il
split: train
args: he_il
metrics:
- name: Wer
type: wer
value: 0.5953778429933969
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-he
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.5954
## Model description
More information needed
## Intended uses & limitations
More information needed
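In the meantime, a minimal usage sketch (hypothetical; the audio path is a placeholder for a 16 kHz mono Hebrew speech clip):
```python
from transformers import pipeline

# hypothetical example; replace the path with your own audio file
asr = pipeline("automatic-speech-recognition", model="mtz2110/wav2vec2-large-xls-r-300m-he")
print(asr("sample_hebrew.wav"))
```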
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.8899 | 0.99 | 200 | inf | 1.0 |
| 3.0802 | 1.98 | 400 | inf | 1.0 |
| 1.4275 | 2.97 | 600 | inf | 0.8155 |
| 0.8737 | 3.96 | 800 | inf | 0.7276 |
| 0.6503 | 4.95 | 1000 | inf | 0.6858 |
| 0.5176 | 5.94 | 1200 | inf | 0.6660 |
| 0.4084 | 6.93 | 1400 | inf | 0.6682 |
| 0.3469 | 7.92 | 1600 | inf | 0.6473 |
| 3.2485 | 6.67 | 1800 | inf | 1.0 |
| 0.6476 | 7.41 | 2000 | inf | 0.6574 |
| 0.3229 | 8.15 | 2200 | inf | 0.6499 |
| 0.2899 | 8.89 | 2400 | inf | 0.6376 |
| 0.26 | 9.63 | 2600 | inf | 0.6405 |
| 0.2038 | 10.37 | 2800 | inf | 0.6409 |
| 0.2158 | 11.11 | 3000 | inf | 0.6313 |
| 0.1892 | 11.85 | 3200 | inf | 0.6185 |
| 0.1611 | 12.59 | 3400 | inf | 0.6271 |
| 0.1584 | 13.33 | 3600 | inf | 0.6101 |
| 0.1443 | 14.07 | 3800 | inf | 0.6121 |
| 0.1353 | 14.81 | 4000 | inf | 0.6194 |
| 0.1109 | 15.56 | 4200 | inf | 0.6321 |
| 0.1116 | 16.3 | 4400 | inf | 0.6025 |
| 0.1054 | 17.04 | 4600 | inf | 0.6029 |
| 0.0966 | 17.78 | 4800 | inf | 0.6069 |
| 0.0824 | 18.52 | 5000 | inf | 0.5998 |
| 0.0812 | 19.26 | 5200 | inf | 0.5972 |
| 0.0749 | 20.0 | 5400 | inf | 0.5954 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
venetis/vit-base-patch16-224-in21k-finetuned-cifar10_album_vitVMMRdb_make_model_album_pred
|
venetis
| 2022-11-28T07:33:09Z | 186 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-27T16:45:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: vit-base-patch16-224-in21k-finetuned-cifar10_album_vitVMMRdb_make_model_album_pred
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-cifar10_album_vitVMMRdb_make_model_album_pred
This model is a fine-tuned version of [aaraki/vit-base-patch16-224-in21k-finetuned-cifar10](https://huggingface.co/aaraki/vit-base-patch16-224-in21k-finetuned-cifar10) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5462
- Accuracy: 0.8594
- Precision: 0.8556
- Recall: 0.8594
- F1: 0.8544
## Model description
More information needed
## Intended uses & limitations
More information needed
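In the meantime, a minimal usage sketch (hypothetical; the assumption that the model predicts vehicle make/model classes is based only on the repository name):
```python
from transformers import pipeline

# hypothetical example; the image path is a placeholder
classifier = pipeline(
    "image-classification",
    model="venetis/vit-base-patch16-224-in21k-finetuned-cifar10_album_vitVMMRdb_make_model_album_pred",
)
print(classifier("car_photo.jpg"))
```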
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 4.6112 | 1.0 | 839 | 4.5615 | 0.1425 | 0.0837 | 0.1425 | 0.0646 |
| 3.1177 | 2.0 | 1678 | 2.9595 | 0.4240 | 0.3424 | 0.4240 | 0.3283 |
| 2.0793 | 3.0 | 2517 | 2.0048 | 0.5771 | 0.5081 | 0.5771 | 0.5029 |
| 1.4566 | 4.0 | 3356 | 1.4554 | 0.6760 | 0.6333 | 0.6760 | 0.6280 |
| 1.1307 | 5.0 | 4195 | 1.1319 | 0.7350 | 0.7027 | 0.7350 | 0.7013 |
| 0.9367 | 6.0 | 5034 | 0.9328 | 0.7738 | 0.7546 | 0.7738 | 0.7503 |
| 0.7783 | 7.0 | 5873 | 0.8024 | 0.7986 | 0.7893 | 0.7986 | 0.7819 |
| 0.6022 | 8.0 | 6712 | 0.7187 | 0.8174 | 0.8098 | 0.8174 | 0.8055 |
| 0.5234 | 9.0 | 7551 | 0.6635 | 0.8313 | 0.8220 | 0.8313 | 0.8217 |
| 0.4298 | 10.0 | 8390 | 0.6182 | 0.8388 | 0.8337 | 0.8388 | 0.8302 |
| 0.3618 | 11.0 | 9229 | 0.5953 | 0.8455 | 0.8394 | 0.8455 | 0.8382 |
| 0.3262 | 12.0 | 10068 | 0.5735 | 0.8501 | 0.8443 | 0.8501 | 0.8436 |
| 0.3116 | 13.0 | 10907 | 0.5612 | 0.8527 | 0.8488 | 0.8527 | 0.8471 |
| 0.2416 | 14.0 | 11746 | 0.5524 | 0.8558 | 0.8500 | 0.8558 | 0.8496 |
| 0.2306 | 15.0 | 12585 | 0.5489 | 0.8572 | 0.8525 | 0.8572 | 0.8519 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
cavitcakir/swin-tiny-patch4-window7-224-finetuned-eurosat
|
cavitcakir
| 2022-11-28T04:30:00Z | 206 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-28T04:24:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5373
- Accuracy: 0.7639
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6855 | 0.98 | 10 | 0.6436 | 0.625 |
| 0.6499 | 1.98 | 20 | 0.5745 | 0.7083 |
| 0.6021 | 2.98 | 30 | 0.5373 | 0.7639 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
scostiniano/roberta-tagalog-large-ner-v1
|
scostiniano
| 2022-11-28T04:22:41Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"doi:10.57967/hf/0201",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-24T06:32:21Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-tagalog-large-ner-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Description
- The dataset consists of 148 Filipino storytelling books, 4,523 sentences, 7,118 tokens, and 868 unique tokens.
- This NER model only supports the Filipino language and does not currently cover proper nouns, verbs, adjectives, or adverbs
- The input must undergo preprocessing; the preprocessing code will be uploaded to GitHub soon
- To replicate the preprocessed input, use this example as a guide (a brief usage sketch follows this list)
- Input: "May umaapoy na bahay "
- Preprocessed Input: "apoy bahay"
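A brief usage sketch on the preprocessed input above (hypothetical; the entity labels returned depend on the fine-tuning label set):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="scostiniano/roberta-tagalog-large-ner-v1",
    aggregation_strategy="simple",
)
# the input is assumed to already be preprocessed as described above
print(ner("apoy bahay"))
```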
# roberta-tagalog-large-ner-v1
This model is a fine-tuned version of [jcblaise/roberta-tagalog-large](https://huggingface.co/jcblaise/roberta-tagalog-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1866
- Precision: 0.9546
- Recall: 0.9557
- F1: 0.9551
- Accuracy: 0.9724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 205 | 0.2044 | 0.8945 | 0.8920 | 0.8933 | 0.9414 |
| No log | 2.0 | 410 | 0.1421 | 0.9410 | 0.9341 | 0.9375 | 0.9625 |
| 0.2423 | 3.0 | 615 | 0.1485 | 0.9309 | 0.9500 | 0.9403 | 0.9670 |
| 0.2423 | 4.0 | 820 | 0.1543 | 0.9473 | 0.9505 | 0.9489 | 0.9689 |
| 0.0154 | 5.0 | 1025 | 0.1749 | 0.9494 | 0.9494 | 0.9494 | 0.9706 |
| 0.0154 | 6.0 | 1230 | 0.1706 | 0.9459 | 0.9545 | 0.9502 | 0.9713 |
| 0.0154 | 7.0 | 1435 | 0.1822 | 0.9490 | 0.9522 | 0.9506 | 0.9717 |
| 0.003 | 8.0 | 1640 | 0.1841 | 0.9529 | 0.9540 | 0.9534 | 0.9723 |
| 0.003 | 9.0 | 1845 | 0.1870 | 0.9540 | 0.9551 | 0.9545 | 0.9729 |
| 0.0007 | 10.0 | 2050 | 0.1866 | 0.9546 | 0.9557 | 0.9551 | 0.9724 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ajdowney/bert-wash-binary-25
|
ajdowney
| 2022-11-28T03:44:46Z | 71 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-28T03:43:31Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ajdowney/bert-wash-binary-25
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ajdowney/bert-wash-binary-25
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3261
- Validation Loss: 0.6889
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 129, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4722 | 0.5900 | 0 |
| 0.3985 | 0.6213 | 1 |
| 0.3261 | 0.6889 | 2 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.10.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
thisisHJLee/wav2vec2-large-xls-r-1b-korean-sample2
|
thisisHJLee
| 2022-11-28T02:25:48Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-25T04:56:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-1b-korean-sample2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-korean-sample2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1283
- Cer: 0.0294
## Model description
More information needed
## Intended uses & limitations
More information needed
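In the meantime, a minimal usage sketch (hypothetical; the audio path is a placeholder for a 16 kHz Korean speech clip):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="thisisHJLee/wav2vec2-large-xls-r-1b-korean-sample2")
print(asr("sample_korean.wav"))
```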
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3415 | 1.0 | 11471 | 0.2666 | 0.0750 |
| 0.1997 | 2.0 | 22942 | 0.1617 | 0.0415 |
| 0.1153 | 3.0 | 34413 | 0.1283 | 0.0294 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.11.0
|
nawel-ucsb/wav2vec2-large-xls-r-300m-french-colab
|
nawel-ucsb
| 2022-11-28T02:13:41Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-24T05:54:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-french-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: fr
split: train[:4%]
args: fr
metrics:
- name: Wer
type: wer
value: 0.2073518915060671
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-french-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4075
- Wer: 0.2074
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.4642 | 1.07 | 400 | 1.4491 | 0.8681 |
| 0.9543 | 2.14 | 800 | 0.5998 | 0.4982 |
| 0.5364 | 3.21 | 1200 | 0.4400 | 0.3549 |
| 0.4236 | 4.28 | 1600 | 0.4348 | 0.3476 |
| 0.3345 | 5.35 | 2000 | 0.3897 | 0.3000 |
| 0.2938 | 6.42 | 2400 | 0.3893 | 0.3176 |
| 0.2502 | 7.49 | 2800 | 0.4306 | 0.3000 |
| 0.2376 | 8.56 | 3200 | 0.4023 | 0.2939 |
| 0.1999 | 9.63 | 3600 | 0.3973 | 0.2652 |
| 0.1859 | 10.7 | 4000 | 0.3701 | 0.2773 |
| 0.1673 | 11.76 | 4400 | 0.4047 | 0.2661 |
| 0.1555 | 12.83 | 4800 | 0.4207 | 0.2670 |
| 0.1385 | 13.9 | 5200 | 0.4110 | 0.2700 |
| 0.13 | 14.97 | 5600 | 0.4209 | 0.2575 |
| 0.1185 | 16.04 | 6000 | 0.4385 | 0.2582 |
| 0.11 | 17.11 | 6400 | 0.4334 | 0.2461 |
| 0.1016 | 18.18 | 6800 | 0.4058 | 0.2450 |
| 0.0913 | 19.25 | 7200 | 0.3923 | 0.2439 |
| 0.0843 | 20.32 | 7600 | 0.4139 | 0.2434 |
| 0.0782 | 21.39 | 8000 | 0.4111 | 0.2397 |
| 0.0732 | 22.46 | 8400 | 0.4116 | 0.2327 |
| 0.0644 | 23.53 | 8800 | 0.4041 | 0.2327 |
| 0.0603 | 24.6 | 9200 | 0.4065 | 0.2232 |
| 0.0553 | 25.67 | 9600 | 0.4198 | 0.2198 |
| 0.0502 | 26.74 | 10000 | 0.4137 | 0.2172 |
| 0.0472 | 27.81 | 10400 | 0.4084 | 0.2148 |
| 0.0455 | 28.88 | 10800 | 0.4116 | 0.2109 |
| 0.0417 | 29.95 | 11200 | 0.4075 | 0.2074 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
huggingtweets/tarunchitra
|
huggingtweets
| 2022-11-28T02:11:02Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-28T02:09:42Z |
---
language: en
thumbnail: http://www.huggingtweets.com/tarunchitra/1669601459083/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1587539091444432897/Z6_nmrCB_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tarun Chitra</div>
<div style="text-align: center; font-size: 14px;">@tarunchitra</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tarun Chitra.
| Data | Tarun Chitra |
| --- | --- |
| Tweets downloaded | 3234 |
| Retweets | 439 |
| Short tweets | 362 |
| Tweets kept | 2433 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ex37piz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tarunchitra's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/12p1kbwc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/12p1kbwc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tarunchitra')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lewispons/large-email-classifier
|
lewispons
| 2022-11-28T01:56:52Z | 2 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-26T22:47:23Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# lewispons/large-email-classifier
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('lewispons/large-email-classifier')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lewispons/large-email-classifier)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 752 with parameters:
```
{'batch_size': 50, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2256,
"warmup_steps": 226,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
fanpu/final_model_output_subreddit-wallstreetbets_3
|
fanpu
| 2022-11-28T01:42:49Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-27T19:02:43Z |
---
tags:
- generated_from_trainer
model-index:
- name: final_model_output_subreddit-wallstreetbets_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final_model_output_subreddit-wallstreetbets_3
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6824
## Model description
More information needed
## Intended uses & limitations
More information needed
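In the meantime, a minimal usage sketch (hypothetical; it assumes the repository includes a tokenizer, and the prompt is arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="fanpu/final_model_output_subreddit-wallstreetbets_3")
print(generator("Today the market", max_new_tokens=40, num_return_sequences=2))
```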
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2588 | 1.25 | 5000 | 3.6824 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
erkanxyzalaca/turkishReviews-ds-mini
|
erkanxyzalaca
| 2022-11-28T01:38:07Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-27T22:00:36Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: turkishReviews-ds-mini
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# turkishReviews-ds-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.3867
- Validation Loss: 8.3741
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
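In the meantime, a minimal usage sketch (hypothetical; the checkpoint is TensorFlow-only, so the framework is pinned to "tf", and the Turkish prompt is arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="erkanxyzalaca/turkishReviews-ds-mini", framework="tf")
print(generator("Bu ürün", max_new_tokens=30))
```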
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -765, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2149 | 9.6891 | 0 |
| 9.0695 | 8.7610 | 1 |
| 8.3867 | 8.3741 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.10.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
sedatkestepe/wav2vec2-large-xls-r-300m-turkish-colab
|
sedatkestepe
| 2022-11-28T00:32:12Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-28T00:32:12Z |
---
license: creativeml-openrail-m
---
|
ohrenn/lorepass
|
ohrenn
| 2022-11-28T00:28:39Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-11-28T00:28:39Z |
---
license: bigscience-bloom-rail-1.0
---
|
Tara2301/PPO-LunarLander-v22
|
Tara2301
| 2022-11-27T23:31:17Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-27T22:02:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.24 +/- 19.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption; check the repo's files if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# the filename below is assumed, not confirmed by the repository
checkpoint = load_from_hub(repo_id="Tara2301/PPO-LunarLander-v22", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Sandipan1994/t5-small-entailement-Writer-T5-small
|
Sandipan1994
| 2022-11-27T22:16:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-27T21:11:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-entailement-Writer-T5-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-entailement-Writer-T5-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5628
## Model description
More information needed
## Intended uses & limitations
More information needed
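A minimal inference sketch is shown below; the premise/hypothesis prompt format is an assumption, since the card does not document the training inputs.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Sandipan1994/t5-small-entailement-Writer-T5-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The prompt format below is a guess; adjust it to match the actual training data.
text = "premise: A man is playing a guitar. hypothesis: A person is making music."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```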
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` reconstruction is sketched after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
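A rough reconstruction of the listed hyperparameters as Hugging Face `TrainingArguments`; the output directory and any argument not listed above are assumptions (Adam betas/epsilon match the library defaults).
```python
from transformers import TrainingArguments

# Values copied from the list above; output_dir is an assumption.
args = TrainingArguments(
    output_dir="t5-small-entailement-Writer-T5-small",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    fp16=True,  # "Native AMP" mixed-precision training
)
```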
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 83 | 1.2943 |
| No log | 2.0 | 166 | 0.9323 |
| No log | 3.0 | 249 | 0.8443 |
| No log | 4.0 | 332 | 0.7884 |
| No log | 5.0 | 415 | 0.7582 |
| No log | 6.0 | 498 | 0.7355 |
| 1.2761 | 7.0 | 581 | 0.7178 |
| 1.2761 | 8.0 | 664 | 0.7105 |
| 1.2761 | 9.0 | 747 | 0.6972 |
| 1.2761 | 10.0 | 830 | 0.6847 |
| 1.2761 | 11.0 | 913 | 0.6774 |
| 1.2761 | 12.0 | 996 | 0.6708 |
| 0.7765 | 13.0 | 1079 | 0.6609 |
| 0.7765 | 14.0 | 1162 | 0.6566 |
| 0.7765 | 15.0 | 1245 | 0.6507 |
| 0.7765 | 16.0 | 1328 | 0.6454 |
| 0.7765 | 17.0 | 1411 | 0.6438 |
| 0.7765 | 18.0 | 1494 | 0.6384 |
| 0.693 | 19.0 | 1577 | 0.6347 |
| 0.693 | 20.0 | 1660 | 0.6321 |
| 0.693 | 21.0 | 1743 | 0.6254 |
| 0.693 | 22.0 | 1826 | 0.6237 |
| 0.693 | 23.0 | 1909 | 0.6215 |
| 0.693 | 24.0 | 1992 | 0.6167 |
| 0.6504 | 25.0 | 2075 | 0.6167 |
| 0.6504 | 26.0 | 2158 | 0.6131 |
| 0.6504 | 27.0 | 2241 | 0.6120 |
| 0.6504 | 28.0 | 2324 | 0.6091 |
| 0.6504 | 29.0 | 2407 | 0.6076 |
| 0.6504 | 30.0 | 2490 | 0.6058 |
| 0.615 | 31.0 | 2573 | 0.6031 |
| 0.615 | 32.0 | 2656 | 0.6015 |
| 0.615 | 33.0 | 2739 | 0.6015 |
| 0.615 | 34.0 | 2822 | 0.6000 |
| 0.615 | 35.0 | 2905 | 0.5998 |
| 0.615 | 36.0 | 2988 | 0.5969 |
| 0.586 | 37.0 | 3071 | 0.5959 |
| 0.586 | 38.0 | 3154 | 0.5941 |
| 0.586 | 39.0 | 3237 | 0.5923 |
| 0.586 | 40.0 | 3320 | 0.5936 |
| 0.586 | 41.0 | 3403 | 0.5929 |
| 0.586 | 42.0 | 3486 | 0.5922 |
| 0.5618 | 43.0 | 3569 | 0.5910 |
| 0.5618 | 44.0 | 3652 | 0.5885 |
| 0.5618 | 45.0 | 3735 | 0.5879 |
| 0.5618 | 46.0 | 3818 | 0.5873 |
| 0.5618 | 47.0 | 3901 | 0.5877 |
| 0.5618 | 48.0 | 3984 | 0.5878 |
| 0.5418 | 49.0 | 4067 | 0.5881 |
| 0.5418 | 50.0 | 4150 | 0.5858 |
| 0.5418 | 51.0 | 4233 | 0.5847 |
| 0.5418 | 52.0 | 4316 | 0.5839 |
| 0.5418 | 53.0 | 4399 | 0.5843 |
| 0.5418 | 54.0 | 4482 | 0.5826 |
| 0.5283 | 55.0 | 4565 | 0.5843 |
| 0.5283 | 56.0 | 4648 | 0.5833 |
| 0.5283 | 57.0 | 4731 | 0.5825 |
| 0.5283 | 58.0 | 4814 | 0.5827 |
| 0.5283 | 59.0 | 4897 | 0.5830 |
| 0.5283 | 60.0 | 4980 | 0.5806 |
| 0.5135 | 61.0 | 5063 | 0.5808 |
| 0.5135 | 62.0 | 5146 | 0.5806 |
| 0.5135 | 63.0 | 5229 | 0.5807 |
| 0.5135 | 64.0 | 5312 | 0.5823 |
| 0.5135 | 65.0 | 5395 | 0.5801 |
| 0.5135 | 66.0 | 5478 | 0.5799 |
| 0.5053 | 67.0 | 5561 | 0.5808 |
| 0.5053 | 68.0 | 5644 | 0.5796 |
| 0.5053 | 69.0 | 5727 | 0.5793 |
| 0.5053 | 70.0 | 5810 | 0.5785 |
| 0.5053 | 71.0 | 5893 | 0.5790 |
| 0.5053 | 72.0 | 5976 | 0.5775 |
| 0.4985 | 73.0 | 6059 | 0.5770 |
| 0.4985 | 74.0 | 6142 | 0.5777 |
| 0.4985 | 75.0 | 6225 | 0.5780 |
| 0.4985 | 76.0 | 6308 | 0.5779 |
| 0.4985 | 77.0 | 6391 | 0.5782 |
| 0.4985 | 78.0 | 6474 | 0.5773 |
| 0.4889 | 79.0 | 6557 | 0.5787 |
| 0.4889 | 80.0 | 6640 | 0.5787 |
| 0.4889 | 81.0 | 6723 | 0.5773 |
| 0.4889 | 82.0 | 6806 | 0.5777 |
| 0.4889 | 83.0 | 6889 | 0.5759 |
| 0.4889 | 84.0 | 6972 | 0.5765 |
| 0.4806 | 85.0 | 7055 | 0.5758 |
| 0.4806 | 86.0 | 7138 | 0.5760 |
| 0.4806 | 87.0 | 7221 | 0.5758 |
| 0.4806 | 88.0 | 7304 | 0.5760 |
| 0.4806 | 89.0 | 7387 | 0.5759 |
| 0.4806 | 90.0 | 7470 | 0.5758 |
| 0.4817 | 91.0 | 7553 | 0.5753 |
| 0.4817 | 92.0 | 7636 | 0.5757 |
| 0.4817 | 93.0 | 7719 | 0.5754 |
| 0.4817 | 94.0 | 7802 | 0.5750 |
| 0.4817 | 95.0 | 7885 | 0.5753 |
| 0.4817 | 96.0 | 7968 | 0.5752 |
| 0.4767 | 97.0 | 8051 | 0.5754 |
| 0.4767 | 98.0 | 8134 | 0.5756 |
| 0.4767 | 99.0 | 8217 | 0.5755 |
| 0.4767 | 100.0 | 8300 | 0.5755 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|