| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| string (5–139 chars) | string (2–42 chars) | timestamp[us, tz=UTC] (2020-02-15 11:33:14 – 2025-08-30 18:26:50) | int64 (0 – 223M) | int64 (0 – 11.7k) | string (530 classes) | list (1 – 4.05k items) | string (55 classes) | timestamp[us, tz=UTC] (2022-03-02 23:29:04 – 2025-08-30 18:26:48) | string (11 – 1.01M chars) |
HeySatyam/my_awesome_model | HeySatyam | 2023-05-06T00:45:59Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-05T22:36:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93128
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2315
- Accuracy: 0.9313
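A minimal inference sketch with the `transformers` pipeline, assuming the checkpoint is available on the Hub under the model id above (the example sentence is only a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT sentiment classifier from the Hub.
classifier = pipeline("text-classification", model="HeySatyam/my_awesome_model")

# Returns a list of {"label": ..., "score": ...} dicts.
print(classifier("This movie was far better than I expected."))
```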
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
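As a rough illustration (an assumption, not taken from the original card), these hyperparameters would typically map onto `transformers.TrainingArguments` as follows; `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="my_awesome_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```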
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2348 | 1.0 | 1563 | 0.1867 | 0.9289 |
| 0.1523 | 2.0 | 3126 | 0.2315 | 0.9313 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
GraphicStylz/Stylz | GraphicStylz | 2023-05-06T00:40:22Z | 0 | 0 | adapter-transformers | ["adapter-transformers", "conversational", "en", "es", "fr", "sm", "dataset:OpenAssistant/oasst1", "dataset:wikipedia", "dataset:google/fleurs", "dataset:google/xtreme_s", "dataset:google/wit", "dataset:google_wellformed_query", "dataset:cornell_movie_dialog", "license:cc-by-nc-sa-4.0", "region:us"] | text-generation | 2023-05-05T23:43:52Z |
---
license: cc-by-nc-sa-4.0
datasets:
- OpenAssistant/oasst1
- wikipedia
- google/fleurs
- google/xtreme_s
- google/wit
- google_wellformed_query
- cornell_movie_dialog
language:
- en
- es
- fr
- sm
metrics:
- character
- bertscore
library_name: adapter-transformers
pipeline_tag: conversational
---
|
Bainbridge/gpt2-kl_001_07_hscnspecial-hs_cn | Bainbridge | 2023-05-06T00:40:01Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-05-05T23:44:28Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-kl_001_07_hscnspecial-hs_cn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-kl_001_07_hscnspecial-hs_cn
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5583
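A minimal generation sketch with the `transformers` pipeline, assuming the checkpoint is available on the Hub under the model id above (the prompt is only a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Bainbridge/gpt2-kl_001_07_hscnspecial-hs_cn")

# Sample a short continuation from the fine-tuned GPT-2 model.
print(generator("The weather today is", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```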
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 21
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 73.5264 | 0.02 | 10 | 69.5797 |
| 46.0831 | 0.04 | 20 | 32.9588 |
| 13.5437 | 0.06 | 30 | 10.6394 |
| 6.7905 | 0.08 | 40 | 4.1876 |
| 3.5385 | 0.1 | 50 | 2.0711 |
| 1.5427 | 0.12 | 60 | 1.1384 |
| 1.1141 | 0.14 | 70 | 0.8584 |
| 0.926 | 0.16 | 80 | 0.7673 |
| 0.7432 | 0.18 | 90 | 0.6549 |
| 0.7195 | 0.2 | 100 | 0.6203 |
| 0.6079 | 0.22 | 110 | 0.5966 |
| 0.6356 | 0.24 | 120 | 0.5855 |
| 0.6443 | 0.26 | 130 | 0.5837 |
| 0.6171 | 0.28 | 140 | 0.5791 |
| 0.6031 | 0.3 | 150 | 0.5869 |
| 0.587 | 0.32 | 160 | 0.5674 |
| 0.518 | 0.34 | 170 | 0.5665 |
| 0.6313 | 0.36 | 180 | 0.5698 |
| 0.6331 | 0.38 | 190 | 0.5624 |
| 0.5768 | 0.4 | 200 | 0.5575 |
| 0.6768 | 0.42 | 210 | 0.5587 |
| 0.6288 | 0.44 | 220 | 0.5500 |
| 0.5371 | 0.46 | 230 | 0.5510 |
| 0.5499 | 0.48 | 240 | 0.5578 |
| 0.548 | 0.5 | 250 | 0.5583 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.12.0a0+bd13bc6
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Bainbridge/gpt2-kl_001_06_hscnspecial-hs_cn | Bainbridge | 2023-05-05T23:43:42Z | 7 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-05-05T22:42:39Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-kl_001_06_hscnspecial-hs_cn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-kl_001_06_hscnspecial-hs_cn
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 21
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 73.5243 | 0.02 | 10 | 69.5733 |
| 46.0785 | 0.04 | 20 | 32.9424 |
| 13.5445 | 0.06 | 30 | 10.6459 |
| 6.772 | 0.08 | 40 | 4.2120 |
| 3.5618 | 0.1 | 50 | 2.0539 |
| 1.4416 | 0.12 | 60 | 1.0919 |
| 1.0628 | 0.14 | 70 | 0.8710 |
| 0.8751 | 0.16 | 80 | 0.7180 |
| 0.7178 | 0.18 | 90 | 0.7504 |
| 0.7072 | 0.2 | 100 | 0.6374 |
| 0.5923 | 0.22 | 110 | 0.6286 |
| 0.6271 | 0.24 | 120 | 0.6355 |
| 0.6407 | 0.26 | 130 | 0.6217 |
| 0.6474 | 0.28 | 140 | 0.5978 |
| 0.5946 | 0.3 | 150 | 0.5985 |
| 0.6073 | 0.32 | 160 | 0.5887 |
| 0.5149 | 0.34 | 170 | 0.5773 |
| 0.6478 | 0.36 | 180 | 0.5671 |
| 0.6186 | 0.38 | 190 | 0.5593 |
| 0.5708 | 0.4 | 200 | 0.5572 |
| 0.6788 | 0.42 | 210 | 0.5545 |
| 0.6316 | 0.44 | 220 | 0.5503 |
| 0.5317 | 0.46 | 230 | 0.5508 |
| 0.5464 | 0.48 | 240 | 0.5475 |
| 0.548 | 0.5 | 250 | 0.5466 |
| 0.5488 | 0.52 | 260 | 0.5473 |
| 0.5551 | 0.54 | 270 | 0.5469 |
| 0.5867 | 0.56 | 280 | 0.5489 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.12.0a0+bd13bc6
- Datasets 2.12.0
- Tokenizers 0.13.3
|
reem1997/depressionDetectionArabert | reem1997 | 2023-05-05T23:34:23Z | 119 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-04T10:37:37Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
model-index:
- name: depressionDetectionArabert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# depressionDetectionArabert
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02-twitter](https://huggingface.co/aubmindlab/bert-base-arabertv02-twitter) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4586
- Macro F1: 0.9279
- Precision: 0.9281
- Recall: 0.9279
- Kappa: 0.8558
- Accuracy: 0.9279
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 128
- seed: 25
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Precision | Recall | Kappa | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 407 | 0.2285 | 0.9143 | 0.9181 | 0.9144 | 0.8289 | 0.9144 |
| 0.2459 | 2.0 | 815 | 0.2124 | 0.9264 | 0.9269 | 0.9264 | 0.8528 | 0.9264 |
| 0.1306 | 3.0 | 1222 | 0.2439 | 0.9285 | 0.9287 | 0.9285 | 0.8570 | 0.9285 |
| 0.0747 | 4.0 | 1630 | 0.3240 | 0.9270 | 0.9272 | 0.9270 | 0.8540 | 0.9270 |
| 0.0328 | 5.0 | 2037 | 0.3850 | 0.9273 | 0.9275 | 0.9273 | 0.8546 | 0.9273 |
| 0.0328 | 6.0 | 2445 | 0.4129 | 0.9279 | 0.9280 | 0.9279 | 0.8558 | 0.9279 |
| 0.0213 | 7.0 | 2852 | 0.4541 | 0.9273 | 0.9273 | 0.9273 | 0.8545 | 0.9273 |
| 0.0108 | 7.99 | 3256 | 0.4586 | 0.9279 | 0.9281 | 0.9279 | 0.8558 | 0.9279 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
|
ratish/DBERT_CleanDesc_Collision_v2.1.4 | ratish | 2023-05-05T23:28:07Z | 61 | 0 | transformers | ["transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-05T23:24:37Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ratish/DBERT_CleanDesc_Collision_v2.1.4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ratish/DBERT_CleanDesc_Collision_v2.1.4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3438
- Validation Loss: 1.4467
- Train Accuracy: 0.5897
- Epoch: 11
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the serialized optimizer config is expanded as code after this list):
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 4575, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
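For readability, the serialized optimizer config above corresponds roughly to the following Keras construction (a sketch reconstructed from the config, not code taken from the training script):
```python
import tensorflow as tf

# Linear (power=1.0) decay of the learning rate from 2e-5 to 0 over 4575 steps.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=4575,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```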
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.6148 | 1.7151 | 0.3077 | 0 |
| 1.4783 | 1.7263 | 0.3077 | 1 |
| 1.3926 | 1.6779 | 0.4103 | 2 |
| 1.2462 | 1.5778 | 0.4359 | 3 |
| 1.0592 | 1.5154 | 0.4359 | 4 |
| 0.8814 | 1.5370 | 0.4615 | 5 |
| 0.7554 | 1.4250 | 0.5385 | 6 |
| 0.6303 | 1.4385 | 0.5641 | 7 |
| 0.5458 | 1.3870 | 0.4872 | 8 |
| 0.4808 | 1.3459 | 0.5385 | 9 |
| 0.4098 | 1.5049 | 0.5385 | 10 |
| 0.3438 | 1.4467 | 0.5897 | 11 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ratish/DBERT_CleanDesc_Collision_v2.1.3 | ratish | 2023-05-05T23:22:32Z | 61 | 0 | transformers | ["transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-05T23:18:12Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ratish/DBERT_CleanDesc_Collision_v2.1.3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ratish/DBERT_CleanDesc_Collision_v2.1.3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4790
- Validation Loss: 1.3887
- Train Accuracy: 0.6154
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3050, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.6006 | 1.7368 | 0.3077 | 0 |
| 1.4578 | 1.7548 | 0.3077 | 1 |
| 1.3664 | 1.6852 | 0.4103 | 2 |
| 1.1923 | 1.5441 | 0.4615 | 3 |
| 1.0295 | 1.4601 | 0.5385 | 4 |
| 0.8769 | 1.4388 | 0.5897 | 5 |
| 0.7829 | 1.4165 | 0.5897 | 6 |
| 0.6690 | 1.4263 | 0.5897 | 7 |
| 0.5765 | 1.3252 | 0.6154 | 8 |
| 0.4790 | 1.3887 | 0.6154 | 9 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ratish/DBERT_CleanDesc_Collision_v2.1.2 | ratish | 2023-05-05T23:16:03Z | 61 | 0 | transformers | ["transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-05T23:15:13Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ratish/DBERT_CleanDesc_Collision_v2.1.2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ratish/DBERT_CleanDesc_Collision_v2.1.2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6317
- Validation Loss: 1.7171
- Train Accuracy: 0.3077
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3050, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.6317 | 1.7171 | 0.3077 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ratish/DBERT_CleanDesc_Collision_v2.1.1 | ratish | 2023-05-05T23:12:27Z | 63 | 0 | transformers | ["transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-05-05T23:11:43Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ratish/DBERT_CleanDesc_Collision_v2.1.1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ratish/DBERT_CleanDesc_Collision_v2.1.1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.4183
- Validation Loss: 1.1286
- Train Accuracy: 0.6923
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3050, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.4183 | 1.1286 | 0.6923 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Bainbridge/gpt2-kl_001_05_hscnspecial-hs_cn | Bainbridge | 2023-05-05T22:41:52Z | 5 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-05-05T21:41:54Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-kl_001_05_hscnspecial-hs_cn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-kl_001_05_hscnspecial-hs_cn
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5513
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 21
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 73.5241 | 0.02 | 10 | 69.5822 |
| 46.0925 | 0.04 | 20 | 33.0099 |
| 13.5527 | 0.06 | 30 | 10.6424 |
| 6.8151 | 0.08 | 40 | 4.2013 |
| 3.5806 | 0.1 | 50 | 2.0893 |
| 1.4772 | 0.12 | 60 | 1.1276 |
| 1.1166 | 0.14 | 70 | 0.8410 |
| 0.8952 | 0.16 | 80 | 0.7324 |
| 0.7517 | 0.18 | 90 | 0.6448 |
| 0.7044 | 0.2 | 100 | 0.6761 |
| 0.6069 | 0.22 | 110 | 0.6436 |
| 0.6184 | 0.24 | 120 | 0.6057 |
| 0.6394 | 0.26 | 130 | 0.5877 |
| 0.6243 | 0.28 | 140 | 0.5719 |
| 0.598 | 0.3 | 150 | 0.5675 |
| 0.5848 | 0.32 | 160 | 0.5645 |
| 0.5161 | 0.34 | 170 | 0.5662 |
| 0.6247 | 0.36 | 180 | 0.5665 |
| 0.6243 | 0.38 | 190 | 0.5592 |
| 0.5768 | 0.4 | 200 | 0.5569 |
| 0.68 | 0.42 | 210 | 0.5583 |
| 0.627 | 0.44 | 220 | 0.5539 |
| 0.5369 | 0.46 | 230 | 0.5576 |
| 0.5449 | 0.48 | 240 | 0.5576 |
| 0.5456 | 0.5 | 250 | 0.5494 |
| 0.55 | 0.52 | 260 | 0.5559 |
| 0.5595 | 0.54 | 270 | 0.5525 |
| 0.5821 | 0.56 | 280 | 0.5513 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.12.0a0+bd13bc6
- Datasets 2.12.0
- Tokenizers 0.13.3
|
vinesmsuic/bg-visualnovel-v02 | vinesmsuic | 2023-05-05T22:21:14Z | 42 | 11 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2022-11-18T08:24:37Z |
---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# BG-VisualNovel-v02
A prototype project for generating visual novel backgrounds for game developers and artists. This model is intended to produce visual novel backgrounds from just a few prompts.
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "vinesmsuic/bg-visualnovel-v02"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a classroom"
image = pipe(prompt).images[0]
image.save("./classroom.png")
prompt = "a hospital building, two trees"
image = pipe(prompt).images[0]
image.save("./hospital.png")
prompt = "a street at night with nobody around"
image = pipe(prompt).images[0]
image.save("./nightstreet.png")
```
## Examples
a classroom

a hospital building, two trees

a street at night with nobody around

## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
tarek23/flan-t5-qg-test-LQ | tarek23 | 2023-05-05T22:17:04Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2023-05-05T15:44:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-qg-test-LQ
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-qg-test-LQ
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4480
- Rouge1: 22.9065
- Rouge2: 6.1197
- Rougel: 20.6542
- Rougelsum: 20.6524
- Gen Len: 16.0202
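A minimal text2text-generation sketch, assuming the checkpoint is on the Hub under the model id above; the `generate question:` prefix and the input sentence are illustrative assumptions, not taken from the card:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("tarek23/flan-t5-qg-test-LQ")
model = AutoModelForSeq2SeqLM.from_pretrained("tarek23/flan-t5-qg-test-LQ")

# Hypothetical prompt format for question generation.
inputs = tokenizer("generate question: The Eiffel Tower is located in Paris.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```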
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.472 | 1.0 | 23583 | 2.4480 | 22.9065 | 6.1197 | 20.6542 | 20.6524 | 16.0202 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
NightOcean/cat-blip-datasets | NightOcean | 2023-05-05T22:08:17Z | 0 | 0 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2023-05-05T03:50:18Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - NightOcean/cat-blip-datasets
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the ethers/cat-blip-datasets dataset. You can find some example images below.




|
WildPress/model2 | WildPress | 2023-05-05T22:05:09Z | 30 | 0 | diffusers | ["diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2023-05-05T21:54:58Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of simbimbi cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - WildPress/model2
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "a photo of simbimbi cat" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
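A minimal inference sketch with `diffusers`, using the instance prompt from the metadata above and assuming a CUDA device; the sampler settings are illustrative defaults, not taken from the card:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("WildPress/model2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The instance prompt the weights were trained on.
image = pipe("a photo of simbimbi cat", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("simbimbi-cat.png")
```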
|
Monero/Pygmalion-Metharme-7b-4bit-TopScore | Monero | 2023-05-05T22:03:08Z | 1,355 | 45 | transformers | ["transformers", "llama", "text-generation", "text generation", "instruct", "en", "autotrain_compatible", "region:us"] | text-generation | 2023-04-30T03:04:16Z |
---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Metharme 7B 4bit</h1>
<h2 style="text-align: center">An instruction-tuned LLaMA biased towards fiction writing and conversation.</h2>
## Model Details
Converted from the XOR weights from PygmalionAI's release: https://huggingface.co/PygmalionAI/metharme-7b
Quantized for KoboldAI use with https://github.com/0cc4m/GPTQ-for-LLaMa
I created several dozen quantized variations of this model and believe this variation to be "best." <br>
<html>
<head>
<style>
table {
border:1px solid #b3adad;
border-collapse:collapse;
padding:5px;
}
table th {
border:1px solid #b3adad;
padding:5px;
background: #f0f0f0;
color: #313030;
}
table td {
border:1px solid #b3adad;
text-align:center;
padding:5px;
background: #ffffff;
color: #313030;
}
</style>
</head>
<body>
<table>
<thead>
<tr>
<th>Model:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td>Metharme 7b - 16bit</td>
<td>5.7208476066589355</td>
<td>41.61103439331055</td>
<td>7.512405872344971</td>
</tr>
<tr>
<td>Metharme-AO-TS-Trits-damp0.1</td>
<td>6.172733783721924</td>
<td>45.19890213012695</td>
<td>7.872506141662598</td>
</tr>
</tbody>
</table>
</body>
</html>
<br><b>Other benchmark scores are listed at the bottom of the readme.</b>
<hr>
Metharme 7B is an instruct model based on Meta's LLaMA-7B.
This is an experiment to try and get a model that is usable for conversation, roleplaying and storywriting, but which can be guided using natural language like other instruct models. See the [prompting](#prompting) section below for examples.
It was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories and conversations with synthetically generated instructions attached.
## Prompting
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and are chained to form a conversation history.
### Prompting example
Here's a concrete example. Let's say you have the following system prompt:
> This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.
And the user has typed in the following message:
> Start!
To perform inference against the model, the prompt should be formatted like this:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>
```
The model might generate something like:
> You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
>
> - Go inside the house.
> - Ignore the call and move away.
> - Run as fast as you can.
It will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.
Now suppose the user replies with `go inside`. The next prompt should then be:
```
<|system|>This is a text adventure game. Describe the scenario to the user and give him three options to pick from on each turn.<|user|>Start!<|model|>You are standing in front of an old, abandoned house. The windows are boarded up, and there's no sign of life around it. As you approach, you notice a strange feeling emanating from within. Suddenly, you hear a voice calling out to you... 'Come inside!'
- Go inside the house.
- Ignore the call and move away.
- Run as fast as you can.<|user|>go inside<|model|>
```
Which might generate something like:
> You enter the house, and immediately, a wave of unsettling energy washes over you. Your senses seem heightened, but your mind feels foggy. A figure appears before you - a tall man with piercing eyes. He speaks to you in a language you don't understand.
>
> - Ask him what he wants.
> - Attack him.
> - Run away.
The same process applies. Usually, it is best to apply a sliding window over the user and model turns while keeping the system prompt fixed at the start of the context window.
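A small helper that assembles this prompt format (plain Python; a sketch illustrating the convention above, not code shipped with the model):
```python
def build_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Chain <|system|>, <|user|> and <|model|> segments into a single prompt.

    `turns` is a list of (role, text) pairs, where role is "user" or "model";
    the trailing <|model|> asks the model to produce the next reply.
    """
    prompt = f"<|system|>{system}"
    for role, text in turns:
        prompt += f"<|{role}|>{text}"
    return prompt + "<|model|>"


system = ("This is a text adventure game. Describe the scenario to the user "
          "and give him three options to pick from on each turn.")
print(build_prompt(system, [("user", "Start!")]))
# -> <|system|>This is a text adventure game. ...<|user|>Start!<|model|>
```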
## Limitations and biases
The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
<hr>
<p><strong><font size="5">Click to Expand Benchmarks of different quantized variations</font></strong></p>
<strong><font size="4">The lower the number, the better the score.</font></strong>
<html>
<body>
<details>
<summary>Benchmarks Sorted by C4-New score</summary>
<table>
<thead>
<tr>
<th>GPTQ Variation:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td>Metharme-7b-16bit</td>
<td>5.7208476066589355</td>
<td>41.61103439331055</td>
<td>7.512405872344971</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits-damp0.1</td>
<td>6.172733783721924</td>
<td>45.19890213012695</td>
<td>7.872506141662598</td>
</tr>
<tr>
<td>Metharme-ao-trits-damp0.1</td>
<td>6.163661956787109</td>
<td>46.50249099731445</td>
<td>7.877425193786621</td>
</tr>
<tr>
<td>Metharme-ao-ts-damp0.1</td>
<td>6.184001445770264</td>
<td>46.17180633544922</td>
<td>7.880400657653809</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-damp0.1</td>
<td>6.220707893371582</td>
<td>47.82929611206055</td>
<td>7.884565353393555</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits</td>
<td>6.310682773590088</td>
<td>46.4483757019043</td>
<td>7.898126602172852</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao</td>
<td>6.281311511993408</td>
<td>46.79158401489258</td>
<td>7.906069755554199</td>
</tr>
<tr>
<td>Metharme-ao-trits</td>
<td>6.283935546875</td>
<td>46.57590103149414</td>
<td>7.907411575317383</td>
</tr>
<tr>
<td>Metharme-ao-ts</td>
<td>6.329496383666992</td>
<td>46.88129806518555</td>
<td>7.910323143005371</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits-damp0.1</td>
<td>6.232576370239258</td>
<td>48.081459045410156</td>
<td>7.95023250579834</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-damp0.1</td>
<td>6.210323333740234</td>
<td>47.66789245605469</td>
<td>7.952476978302002</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits-damp0.1</td>
<td>6.329384803771973</td>
<td>48.06882858276367</td>
<td>7.959168910980225</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits</td>
<td>6.471063137054443</td>
<td>49.650611877441406</td>
<td>7.969552040100098</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym</td>
<td>6.460526943206787</td>
<td>47.190460205078125</td>
<td>7.9732160568237305</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits</td>
<td>6.390106678009033</td>
<td>48.15375900268555</td>
<td>7.9804582595825195</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits</td>
<td>6.632943153381348</td>
<td>47.973228454589844</td>
<td>8.013848304748535</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-damp0.1</td>
<td>6.274552822113037</td>
<td>47.35737228393555</td>
<td>8.06270980834961</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits-damp0.1</td>
<td>6.266031265258789</td>
<td>47.346702575683594</td>
<td>8.068148612976074</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-damp0.1</td>
<td>6.107605934143066</td>
<td>47.91380310058594</td>
<td>8.068695068359375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits-damp0.1</td>
<td>6.128157138824463</td>
<td>48.04175567626953</td>
<td>8.0708646774292</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-damp0.1</td>
<td>6.219024658203125</td>
<td>45.834869384765625</td>
<td>8.071272850036621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits</td>
<td>7.017086029052734</td>
<td>45.04129409790039</td>
<td>8.074845314025879</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits-damp0.1</td>
<td>6.109438896179199</td>
<td>47.35737228393555</td>
<td>8.075060844421387</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits-damp0.1</td>
<td>6.118431568145752</td>
<td>45.67333221435547</td>
<td>8.077078819274902</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g</td>
<td>6.902080535888672</td>
<td>50.237754821777344</td>
<td>8.081602096557617</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g</td>
<td>6.424218654632568</td>
<td>48.48588943481445</td>
<td>8.089512825012207</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits</td>
<td>6.82415771484375</td>
<td>48.82029724121094</td>
<td>8.090987205505371</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-damp0.1</td>
<td>6.566899299621582</td>
<td>48.0670166015625</td>
<td>8.095841407775879</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits-damp0.1</td>
<td>6.289113521575928</td>
<td>46.06787109375</td>
<td>8.122251510620117</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym</td>
<td>6.518134117126465</td>
<td>49.66925811767578</td>
<td>8.13516616821289</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits</td>
<td>6.206963539123535</td>
<td>46.88833999633789</td>
<td>8.13610553741455</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-damp0.1</td>
<td>6.242006301879883</td>
<td>45.30938720703125</td>
<td>8.14249324798584</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym</td>
<td>6.387663841247559</td>
<td>48.07244110107422</td>
<td>8.173730850219727</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits-damp0.1</td>
<td>6.262309551239014</td>
<td>47.80055618286133</td>
<td>8.192194938659668</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g</td>
<td>10.206376075744629</td>
<td>49.00401306152344</td>
<td>8.198845863342285</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-damp0.1</td>
<td>6.17774772644043</td>
<td>46.47630310058594</td>
<td>8.20170783996582</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits-damp0.1</td>
<td>6.225503921508789</td>
<td>53.12746047973633</td>
<td>8.240595817565918</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits</td>
<td>8.68796443939209</td>
<td>49.73833465576172</td>
<td>8.2406587600708</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-damp0.1</td>
<td>6.584965705871582</td>
<td>55.20026397705078</td>
<td>8.268644332885742</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits</td>
<td>7.350858688354492</td>
<td>44.25314712524414</td>
<td>8.274221420288086</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits-damp0.1</td>
<td>6.585468769073486</td>
<td>51.55869674682617</td>
<td>8.2803316116333</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits</td>
<td>6.756448745727539</td>
<td>51.510311126708984</td>
<td>8.292160987854004</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-damp0.1</td>
<td>6.379064083099365</td>
<td>52.17233657836914</td>
<td>8.316649436950684</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits</td>
<td>7.056288242340088</td>
<td>48.983768463134766</td>
<td>8.339276313781738</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g</td>
<td>9.475017547607422</td>
<td>52.358829498291016</td>
<td>8.340700149536133</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym</td>
<td>6.9575653076171875</td>
<td>49.356834411621094</td>
<td>8.35644817352295</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym</td>
<td>6.819341659545898</td>
<td>55.28740310668945</td>
<td>8.377721786499023</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-damp0.1</td>
<td>6.7783050537109375</td>
<td>51.81301498413086</td>
<td>8.621373176574707</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits-damp0.1</td>
<td>6.631694793701172</td>
<td>51.7371711730957</td>
<td>8.656966209411621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-damp0.1</td>
<td>6.495014190673828</td>
<td>49.39763641357422</td>
<td>8.68167781829834</td>
</tr>
<tr>
<td>Metharme-sym-damp0.1</td>
<td>6.896804332733154</td>
<td>57.4250602722168</td>
<td>8.703770637512207</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym</td>
<td>7.270263671875</td>
<td>54.35262680053711</td>
<td>8.787986755371094</td>
</tr>
<tr>
<td>Metharme-7b-4bit-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
<tr>
<td>Metharme-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-damp0.1</td>
<td>6.7517595291137695</td>
<td>54.06147384643555</td>
<td>8.821818351745605</td>
</tr>
<tr>
<td>Metharme-7b-4bit-alone</td>
<td>6.997134685516357</td>
<td>58.87525177001953</td>
<td>8.824191093444824</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits</td>
<td>7.2306809425354</td>
<td>66.78710174560547</td>
<td>8.879831314086914</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits-damp0.1</td>
<td>6.886506080627441</td>
<td>64.72743225097656</td>
<td>8.880627632141113</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts</td>
<td>7.735969543457031</td>
<td>62.92238235473633</td>
<td>8.913650512695312</td>
</tr>
<tr>
<td>Metharme-sym-trits-damp0.1</td>
<td>7.075908184051514</td>
<td>59.13897705078125</td>
<td>8.919178009033203</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits</td>
<td>7.599876403808594</td>
<td>55.75454330444336</td>
<td>8.932201385498047</td>
</tr>
<tr>
<td>Metharme-sym-trits</td>
<td>7.494253635406494</td>
<td>63.320709228515625</td>
<td>8.969240188598633</td>
</tr>
<tr>
<td>Metharme-sym</td>
<td>7.585672855377197</td>
<td>61.01168441772461</td>
<td>9.032520294189453</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-128g</td>
<td>251321.265625</td>
<td>250117.859375</td>
<td>232929.234375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-32g</td>
<td>275425.5</td>
<td>267733.25</td>
<td>254506.71875</td>
</tr>
</tbody>
</table>
</details>
<details>
<summary>Benchmarks Sorted by Wikitext2</summary>
<table>
<thead>
<tr>
<th>GPTQ Variation:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td>Metharme-7b-16bit</td>
<td>5.7208476066589355</td>
<td>41.61103439331055</td>
<td>7.512405872344971</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-damp0.1</td>
<td>6.107605934143066</td>
<td>47.91380310058594</td>
<td>8.068695068359375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits-damp0.1</td>
<td>6.109438896179199</td>
<td>47.35737228393555</td>
<td>8.075060844421387</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits-damp0.1</td>
<td>6.118431568145752</td>
<td>45.67333221435547</td>
<td>8.077078819274902</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits-damp0.1</td>
<td>6.128157138824463</td>
<td>48.04175567626953</td>
<td>8.0708646774292</td>
</tr>
<tr>
<td>Metharme-ao-trits-damp0.1</td>
<td>6.163661956787109</td>
<td>46.50249099731445</td>
<td>7.877425193786621</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits-damp0.1</td>
<td>6.172733783721924</td>
<td>45.19890213012695</td>
<td>7.872506141662598</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-damp0.1</td>
<td>6.17774772644043</td>
<td>46.47630310058594</td>
<td>8.20170783996582</td>
</tr>
<tr>
<td>Metharme-ao-ts-damp0.1</td>
<td>6.184001445770264</td>
<td>46.17180633544922</td>
<td>7.880400657653809</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits</td>
<td>6.206963539123535</td>
<td>46.88833999633789</td>
<td>8.13610553741455</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-damp0.1</td>
<td>6.210323333740234</td>
<td>47.66789245605469</td>
<td>7.952476978302002</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-damp0.1</td>
<td>6.219024658203125</td>
<td>45.834869384765625</td>
<td>8.071272850036621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-damp0.1</td>
<td>6.220707893371582</td>
<td>47.82929611206055</td>
<td>7.884565353393555</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits-damp0.1</td>
<td>6.225503921508789</td>
<td>53.12746047973633</td>
<td>8.240595817565918</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits-damp0.1</td>
<td>6.232576370239258</td>
<td>48.081459045410156</td>
<td>7.95023250579834</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-damp0.1</td>
<td>6.242006301879883</td>
<td>45.30938720703125</td>
<td>8.14249324798584</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits-damp0.1</td>
<td>6.262309551239014</td>
<td>47.80055618286133</td>
<td>8.192194938659668</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits-damp0.1</td>
<td>6.266031265258789</td>
<td>47.346702575683594</td>
<td>8.068148612976074</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-damp0.1</td>
<td>6.274552822113037</td>
<td>47.35737228393555</td>
<td>8.06270980834961</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao</td>
<td>6.281311511993408</td>
<td>46.79158401489258</td>
<td>7.906069755554199</td>
</tr>
<tr>
<td>Metharme-ao-trits</td>
<td>6.283935546875</td>
<td>46.57590103149414</td>
<td>7.907411575317383</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits-damp0.1</td>
<td>6.289113521575928</td>
<td>46.06787109375</td>
<td>8.122251510620117</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits</td>
<td>6.310682773590088</td>
<td>46.4483757019043</td>
<td>7.898126602172852</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits-damp0.1</td>
<td>6.329384803771973</td>
<td>48.06882858276367</td>
<td>7.959168910980225</td>
</tr>
<tr>
<td>Metharme-ao-ts</td>
<td>6.329496383666992</td>
<td>46.88129806518555</td>
<td>7.910323143005371</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-damp0.1</td>
<td>6.379064083099365</td>
<td>52.17233657836914</td>
<td>8.316649436950684</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym</td>
<td>6.387663841247559</td>
<td>48.07244110107422</td>
<td>8.173730850219727</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits</td>
<td>6.390106678009033</td>
<td>48.15375900268555</td>
<td>7.9804582595825195</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g</td>
<td>6.424218654632568</td>
<td>48.48588943481445</td>
<td>8.089512825012207</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym</td>
<td>6.460526943206787</td>
<td>47.190460205078125</td>
<td>7.9732160568237305</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits</td>
<td>6.471063137054443</td>
<td>49.650611877441406</td>
<td>7.969552040100098</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-7b-4bit-damp0.1</td>
<td>6.495014190673828</td>
<td>49.39763641357422</td>
<td>8.68167781829834</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym</td>
<td>6.518134117126465</td>
<td>49.66925811767578</td>
<td>8.13516616821289</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-damp0.1</td>
<td>6.566899299621582</td>
<td>48.0670166015625</td>
<td>8.095841407775879</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-damp0.1</td>
<td>6.584965705871582</td>
<td>55.20026397705078</td>
<td>8.268644332885742</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits-damp0.1</td>
<td>6.585468769073486</td>
<td>51.55869674682617</td>
<td>8.2803316116333</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits-damp0.1</td>
<td>6.631694793701172</td>
<td>51.7371711730957</td>
<td>8.656966209411621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits</td>
<td>6.632943153381348</td>
<td>47.973228454589844</td>
<td>8.013848304748535</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-damp0.1</td>
<td>6.7517595291137695</td>
<td>54.06147384643555</td>
<td>8.821818351745605</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits</td>
<td>6.756448745727539</td>
<td>51.510311126708984</td>
<td>8.292160987854004</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-damp0.1</td>
<td>6.7783050537109375</td>
<td>51.81301498413086</td>
<td>8.621373176574707</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym</td>
<td>6.819341659545898</td>
<td>55.28740310668945</td>
<td>8.377721786499023</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits</td>
<td>6.82415771484375</td>
<td>48.82029724121094</td>
<td>8.090987205505371</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits-damp0.1</td>
<td>6.886506080627441</td>
<td>64.72743225097656</td>
<td>8.880627632141113</td>
</tr>
<tr>
<td>Metharme-sym-damp0.1</td>
<td>6.896804332733154</td>
<td>57.4250602722168</td>
<td>8.703770637512207</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g</td>
<td>6.902080535888672</td>
<td>50.237754821777344</td>
<td>8.081602096557617</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym</td>
<td>6.9575653076171875</td>
<td>49.356834411621094</td>
<td>8.35644817352295</td>
</tr>
<tr>
<td>Metharme-7b-4bit-alone</td>
<td>6.997134685516357</td>
<td>58.87525177001953</td>
<td>8.824191093444824</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits</td>
<td>7.017086029052734</td>
<td>45.04129409790039</td>
<td>8.074845314025879</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits</td>
<td>7.056288242340088</td>
<td>48.983768463134766</td>
<td>8.339276313781738</td>
</tr>
<tr>
<td>Metharme-sym-trits-damp0.1</td>
<td>7.075908184051514</td>
<td>59.13897705078125</td>
<td>8.919178009033203</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits</td>
<td>7.2306809425354</td>
<td>66.78710174560547</td>
<td>8.879831314086914</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym</td>
<td>7.270263671875</td>
<td>54.35262680053711</td>
<td>8.787986755371094</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits</td>
<td>7.350858688354492</td>
<td>44.25314712524414</td>
<td>8.274221420288086</td>
</tr>
<tr>
<td>Metharme-sym-trits</td>
<td>7.494253635406494</td>
<td>63.320709228515625</td>
<td>8.969240188598633</td>
</tr>
<tr>
<td>Metharme-sym</td>
<td>7.585672855377197</td>
<td>61.01168441772461</td>
<td>9.032520294189453</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits</td>
<td>7.599876403808594</td>
<td>55.75454330444336</td>
<td>8.932201385498047</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts</td>
<td>7.735969543457031</td>
<td>62.92238235473633</td>
<td>8.913650512695312</td>
</tr>
<tr>
<td>Metharme-7b-4bit-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
<tr>
<td>Metharme-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits</td>
<td>8.68796443939209</td>
<td>49.73833465576172</td>
<td>8.2406587600708</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g</td>
<td>9.475017547607422</td>
<td>52.358829498291016</td>
<td>8.340700149536133</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g</td>
<td>10.206376075744629</td>
<td>49.00401306152344</td>
<td>8.198845863342285</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-128g</td>
<td>251321.265625</td>
<td>250117.859375</td>
<td>232929.234375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-32g</td>
<td>275425.5</td>
<td>267733.25</td>
<td>254506.71875</td>
</tr>
</tbody>
</table>
</details>
<details>
<summary>Benchmarks Sorted by PTB-new Score</summary>
<table>
<thead>
<tr>
<th>GPTQ Variation:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td>Metharme-7b-16bit</td>
<td>5.7208476066589355</td>
<td>41.61103439331055</td>
<td>7.512405872344971</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits</td>
<td>7.350858688354492</td>
<td>44.25314712524414</td>
<td>8.274221420288086</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits</td>
<td>7.017086029052734</td>
<td>45.04129409790039</td>
<td>8.074845314025879</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits-damp0.1</td>
<td>6.172733783721924</td>
<td>45.19890213012695</td>
<td>7.872506141662598</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-damp0.1</td>
<td>6.242006301879883</td>
<td>45.30938720703125</td>
<td>8.14249324798584</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits-damp0.1</td>
<td>6.118431568145752</td>
<td>45.67333221435547</td>
<td>8.077078819274902</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-damp0.1</td>
<td>6.219024658203125</td>
<td>45.834869384765625</td>
<td>8.071272850036621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits-damp0.1</td>
<td>6.289113521575928</td>
<td>46.06787109375</td>
<td>8.122251510620117</td>
</tr>
<tr>
<td>Metharme-ao-ts-damp0.1</td>
<td>6.184001445770264</td>
<td>46.17180633544922</td>
<td>7.880400657653809</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits</td>
<td>6.310682773590088</td>
<td>46.4483757019043</td>
<td>7.898126602172852</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-damp0.1</td>
<td>6.17774772644043</td>
<td>46.47630310058594</td>
<td>8.20170783996582</td>
</tr>
<tr>
<td>Metharme-ao-trits-damp0.1</td>
<td>6.163661956787109</td>
<td>46.50249099731445</td>
<td>7.877425193786621</td>
</tr>
<tr>
<td>Metharme-ao-trits</td>
<td>6.283935546875</td>
<td>46.57590103149414</td>
<td>7.907411575317383</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao</td>
<td>6.281311511993408</td>
<td>46.79158401489258</td>
<td>7.906069755554199</td>
</tr>
<tr>
<td>Metharme-ao-ts</td>
<td>6.329496383666992</td>
<td>46.88129806518555</td>
<td>7.910323143005371</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits</td>
<td>6.206963539123535</td>
<td>46.88833999633789</td>
<td>8.13610553741455</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym</td>
<td>6.460526943206787</td>
<td>47.190460205078125</td>
<td>7.9732160568237305</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits-damp0.1</td>
<td>6.266031265258789</td>
<td>47.346702575683594</td>
<td>8.068148612976074</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-damp0.1</td>
<td>6.274552822113037</td>
<td>47.35737228393555</td>
<td>8.06270980834961</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits-damp0.1</td>
<td>6.109438896179199</td>
<td>47.35737228393555</td>
<td>8.075060844421387</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-damp0.1</td>
<td>6.210323333740234</td>
<td>47.66789245605469</td>
<td>7.952476978302002</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits-damp0.1</td>
<td>6.262309551239014</td>
<td>47.80055618286133</td>
<td>8.192194938659668</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-damp0.1</td>
<td>6.220707893371582</td>
<td>47.82929611206055</td>
<td>7.884565353393555</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-damp0.1</td>
<td>6.107605934143066</td>
<td>47.91380310058594</td>
<td>8.068695068359375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits</td>
<td>6.632943153381348</td>
<td>47.973228454589844</td>
<td>8.013848304748535</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits-damp0.1</td>
<td>6.128157138824463</td>
<td>48.04175567626953</td>
<td>8.0708646774292</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-damp0.1</td>
<td>6.566899299621582</td>
<td>48.0670166015625</td>
<td>8.095841407775879</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits-damp0.1</td>
<td>6.329384803771973</td>
<td>48.06882858276367</td>
<td>7.959168910980225</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym</td>
<td>6.387663841247559</td>
<td>48.07244110107422</td>
<td>8.173730850219727</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits-damp0.1</td>
<td>6.232576370239258</td>
<td>48.081459045410156</td>
<td>7.95023250579834</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits</td>
<td>6.390106678009033</td>
<td>48.15375900268555</td>
<td>7.9804582595825195</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g</td>
<td>6.424218654632568</td>
<td>48.48588943481445</td>
<td>8.089512825012207</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits</td>
<td>6.82415771484375</td>
<td>48.82029724121094</td>
<td>8.090987205505371</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits</td>
<td>7.056288242340088</td>
<td>48.983768463134766</td>
<td>8.339276313781738</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g</td>
<td>10.206376075744629</td>
<td>49.00401306152344</td>
<td>8.198845863342285</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym</td>
<td>6.9575653076171875</td>
<td>49.356834411621094</td>
<td>8.35644817352295</td>
</tr>
<tr>
<td>Metharme-7b-4bit-damp0.1</td>
<td>6.495014190673828</td>
<td>49.39763641357422</td>
<td>8.68167781829834</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits</td>
<td>6.471063137054443</td>
<td>49.650611877441406</td>
<td>7.969552040100098</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym</td>
<td>6.518134117126465</td>
<td>49.66925811767578</td>
<td>8.13516616821289</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits</td>
<td>8.68796443939209</td>
<td>49.73833465576172</td>
<td>8.2406587600708</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g</td>
<td>6.902080535888672</td>
<td>50.237754821777344</td>
<td>8.081602096557617</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits</td>
<td>6.756448745727539</td>
<td>51.510311126708984</td>
<td>8.292160987854004</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits-damp0.1</td>
<td>6.585468769073486</td>
<td>51.55869674682617</td>
<td>8.2803316116333</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits-damp0.1</td>
<td>6.631694793701172</td>
<td>51.7371711730957</td>
<td>8.656966209411621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-damp0.1</td>
<td>6.7783050537109375</td>
<td>51.81301498413086</td>
<td>8.621373176574707</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-damp0.1</td>
<td>6.379064083099365</td>
<td>52.17233657836914</td>
<td>8.316649436950684</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g</td>
<td>9.475017547607422</td>
<td>52.358829498291016</td>
<td>8.340700149536133</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits-damp0.1</td>
<td>6.225503921508789</td>
<td>53.12746047973633</td>
<td>8.240595817565918</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-damp0.1</td>
<td>6.7517595291137695</td>
<td>54.06147384643555</td>
<td>8.821818351745605</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym</td>
<td>7.270263671875</td>
<td>54.35262680053711</td>
<td>8.787986755371094</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-damp0.1</td>
<td>6.584965705871582</td>
<td>55.20026397705078</td>
<td>8.268644332885742</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym</td>
<td>6.819341659545898</td>
<td>55.28740310668945</td>
<td>8.377721786499023</td>
</tr>
<tr>
<td>Metharme-7b-4bit-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
<tr>
<td>Metharme-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits</td>
<td>7.599876403808594</td>
<td>55.75454330444336</td>
<td>8.932201385498047</td>
</tr>
<tr>
<td>Metharme-sym-damp0.1</td>
<td>6.896804332733154</td>
<td>57.4250602722168</td>
<td>8.703770637512207</td>
</tr>
<tr>
<td>Metharme-7b-4bit-alone</td>
<td>6.997134685516357</td>
<td>58.87525177001953</td>
<td>8.824191093444824</td>
</tr>
<tr>
<td>Metharme-sym-trits-damp0.1</td>
<td>7.075908184051514</td>
<td>59.13897705078125</td>
<td>8.919178009033203</td>
</tr>
<tr>
<td>Metharme-sym</td>
<td>7.585672855377197</td>
<td>61.01168441772461</td>
<td>9.032520294189453</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts</td>
<td>7.735969543457031</td>
<td>62.92238235473633</td>
<td>8.913650512695312</td>
</tr>
<tr>
<td>Metharme-sym-trits</td>
<td>7.494253635406494</td>
<td>63.320709228515625</td>
<td>8.969240188598633</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits-damp0.1</td>
<td>6.886506080627441</td>
<td>64.72743225097656</td>
<td>8.880627632141113</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits</td>
<td>7.2306809425354</td>
<td>66.78710174560547</td>
<td>8.879831314086914</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-128g</td>
<td>251321.265625</td>
<td>250117.859375</td>
<td>232929.234375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-32g</td>
<td>275425.5</td>
<td>267733.25</td>
<td>254506.71875</td>
</tr>
</tbody>
</table>
</details>
<details>
<summary>Benchmarks Sorted in Alphabetical Order</summary>
<table>
<thead>
<tr>
<th>GPTQ Variation:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td>Metharme-7b-16bit</td>
<td>5.7208476066589355</td>
<td>41.61103439331055</td>
<td>7.512405872344971</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-damp0.1</td>
<td>6.242006301879883</td>
<td>45.30938720703125</td>
<td>8.14249324798584</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-damp0.1</td>
<td>6.379064083099365</td>
<td>52.17233657836914</td>
<td>8.316649436950684</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits-damp0.1</td>
<td>6.585468769073486</td>
<td>51.55869674682617</td>
<td>8.2803316116333</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits-damp0.1</td>
<td>6.262309551239014</td>
<td>47.80055618286133</td>
<td>8.192194938659668</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g</td>
<td>10.206376075744629</td>
<td>49.00401306152344</td>
<td>8.198845863342285</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-damp0.1</td>
<td>6.107605934143066</td>
<td>47.91380310058594</td>
<td>8.068695068359375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-damp0.1</td>
<td>6.566899299621582</td>
<td>48.0670166015625</td>
<td>8.095841407775879</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits-damp0.1</td>
<td>6.109438896179199</td>
<td>47.35737228393555</td>
<td>8.075060844421387</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits-damp0.1</td>
<td>6.128157138824463</td>
<td>48.04175567626953</td>
<td>8.0708646774292</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g</td>
<td>6.902080535888672</td>
<td>50.237754821777344</td>
<td>8.081602096557617</td>
</tr>
<tr>
<td>Metharme-7b-4bit-alone</td>
<td>6.997134685516357</td>
<td>58.87525177001953</td>
<td>8.824191093444824</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-128g</td>
<td>251321.265625</td>
<td>250117.859375</td>
<td>232929.234375</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-32g</td>
<td>275425.5</td>
<td>267733.25</td>
<td>254506.71875</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-damp0.1</td>
<td>6.220707893371582</td>
<td>47.82929611206055</td>
<td>7.884565353393555</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ao</td>
<td>6.281311511993408</td>
<td>46.79158401489258</td>
<td>7.906069755554199</td>
</tr>
<tr>
<td>Metharme-7b-4bit-damp0.1</td>
<td>6.495014190673828</td>
<td>49.39763641357422</td>
<td>8.68167781829834</td>
</tr>
<tr>
<td>Metharme-7b-4bit-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-damp0.1</td>
<td>6.17774772644043</td>
<td>46.47630310058594</td>
<td>8.20170783996582</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-damp0.1</td>
<td>6.584965705871582</td>
<td>55.20026397705078</td>
<td>8.268644332885742</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits-damp0.1</td>
<td>6.225503921508789</td>
<td>53.12746047973633</td>
<td>8.240595817565918</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits-damp0.1</td>
<td>6.289113521575928</td>
<td>46.06787109375</td>
<td>8.122251510620117</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g</td>
<td>9.475017547607422</td>
<td>52.358829498291016</td>
<td>8.340700149536133</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-damp0.1</td>
<td>6.219024658203125</td>
<td>45.834869384765625</td>
<td>8.071272850036621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-damp0.1</td>
<td>6.274552822113037</td>
<td>47.35737228393555</td>
<td>8.06270980834961</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits-damp0.1</td>
<td>6.266031265258789</td>
<td>47.346702575683594</td>
<td>8.068148612976074</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits-damp0.1</td>
<td>6.118431568145752</td>
<td>45.67333221435547</td>
<td>8.077078819274902</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g</td>
<td>6.424218654632568</td>
<td>48.48588943481445</td>
<td>8.089512825012207</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-damp0.1</td>
<td>6.7783050537109375</td>
<td>51.81301498413086</td>
<td>8.621373176574707</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-damp0.1</td>
<td>6.7517595291137695</td>
<td>54.06147384643555</td>
<td>8.821818351745605</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits-damp0.1</td>
<td>6.886506080627441</td>
<td>64.72743225097656</td>
<td>8.880627632141113</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits-damp0.1</td>
<td>6.631694793701172</td>
<td>51.7371711730957</td>
<td>8.656966209411621</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts</td>
<td>7.735969543457031</td>
<td>62.92238235473633</td>
<td>8.913650512695312</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym-trits</td>
<td>7.056288242340088</td>
<td>48.983768463134766</td>
<td>8.339276313781738</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-sym</td>
<td>6.9575653076171875</td>
<td>49.356834411621094</td>
<td>8.35644817352295</td>
</tr>
<tr>
<td>Metharme-7b-4bit-128g-trits</td>
<td>8.68796443939209</td>
<td>49.73833465576172</td>
<td>8.2406587600708</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym-trits</td>
<td>6.206963539123535</td>
<td>46.88833999633789</td>
<td>8.13610553741455</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-sym</td>
<td>6.518134117126465</td>
<td>49.66925811767578</td>
<td>8.13516616821289</td>
</tr>
<tr>
<td>Metharme-7b-4bit-32g-trits</td>
<td>7.017086029052734</td>
<td>45.04129409790039</td>
<td>8.074845314025879</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym-trits</td>
<td>6.756448745727539</td>
<td>51.510311126708984</td>
<td>8.292160987854004</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-sym</td>
<td>6.819341659545898</td>
<td>55.28740310668945</td>
<td>8.377721786499023</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-128g-trits</td>
<td>7.350858688354492</td>
<td>44.25314712524414</td>
<td>8.274221420288086</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym-trits</td>
<td>6.82415771484375</td>
<td>48.82029724121094</td>
<td>8.090987205505371</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-sym</td>
<td>6.387663841247559</td>
<td>48.07244110107422</td>
<td>8.173730850219727</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-32g-trits</td>
<td>6.632943153381348</td>
<td>47.973228454589844</td>
<td>8.013848304748535</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym-trits</td>
<td>7.599876403808594</td>
<td>55.75454330444336</td>
<td>8.932201385498047</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-sym</td>
<td>7.270263671875</td>
<td>54.35262680053711</td>
<td>8.787986755371094</td>
</tr>
<tr>
<td>Metharme-7b-4bit-ts-trits</td>
<td>7.2306809425354</td>
<td>66.78710174560547</td>
<td>8.879831314086914</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits-damp0.1</td>
<td>6.329384803771973</td>
<td>48.06882858276367</td>
<td>7.959168910980225</td>
</tr>
<tr>
<td>Metharme-ao-sym-trits</td>
<td>6.390106678009033</td>
<td>48.15375900268555</td>
<td>7.9804582595825195</td>
</tr>
<tr>
<td>Metharme-ao-sym</td>
<td>6.477842807769775</td>
<td>48.53507614135742</td>
<td>7.993765354156494</td>
</tr>
<tr>
<td>Metharme-ao-trits-damp0.1</td>
<td>6.163661956787109</td>
<td>46.50249099731445</td>
<td>7.877425193786621</td>
</tr>
<tr>
<td>Metharme-ao-trits</td>
<td>6.283935546875</td>
<td>46.57590103149414</td>
<td>7.907411575317383</td>
</tr>
<tr>
<td>Metharme-ao-ts-damp0.1</td>
<td>6.184001445770264</td>
<td>46.17180633544922</td>
<td>7.880400657653809</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-damp0.1</td>
<td>6.210323333740234</td>
<td>47.66789245605469</td>
<td>7.952476978302002</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits-damp0.1</td>
<td>6.232576370239258</td>
<td>48.081459045410156</td>
<td>7.95023250579834</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym-trits</td>
<td>6.471063137054443</td>
<td>49.650611877441406</td>
<td>7.969552040100098</td>
</tr>
<tr>
<td>Metharme-ao-ts-sym</td>
<td>6.460526943206787</td>
<td>47.190460205078125</td>
<td>7.9732160568237305</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits-damp0.1</td>
<td>6.172733783721924</td>
<td>45.19890213012695</td>
<td>7.872506141662598</td>
</tr>
<tr>
<td>Metharme-ao-ts-trits</td>
<td>6.310682773590088</td>
<td>46.4483757019043</td>
<td>7.898126602172852</td>
</tr>
<tr>
<td>Metharme-ao-ts</td>
<td>6.329496383666992</td>
<td>46.88129806518555</td>
<td>7.910323143005371</td>
</tr>
<tr>
<td>Metharme-sym-damp0.1</td>
<td>6.896804332733154</td>
<td>57.4250602722168</td>
<td>8.703770637512207</td>
</tr>
<tr>
<td>Metharme-sym-trits-damp0.1</td>
<td>7.075908184051514</td>
<td>59.13897705078125</td>
<td>8.919178009033203</td>
</tr>
<tr>
<td>Metharme-sym-trits</td>
<td>7.494253635406494</td>
<td>63.320709228515625</td>
<td>8.969240188598633</td>
</tr>
<tr>
<td>Metharme-sym</td>
<td>7.585672855377197</td>
<td>61.01168441772461</td>
<td>9.032520294189453</td>
</tr>
<tr>
<td>Metharme-trits</td>
<td>7.832409858703613</td>
<td>55.383026123046875</td>
<td>8.806737899780273</td>
</tr>
</tbody>
</table>
</details>
</body>
</html>
|
keminglu/pivoine-7b
|
keminglu
| 2023-05-05T21:59:20Z | 24 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-04T21:53:46Z |
---
license: bigscience-bloom-rail-1.0
---
# Tokenizer used for all BLOOM models
Tokenizer information is provided at [https://huggingface.co/bigscience/bloom#preprocessing](https://huggingface.co/bigscience/bloom#preprocessing)
TODO: point to paper once it comes out with extra details on the tokenizer
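A quick way to inspect the tokenizer locally — a minimal sketch, assuming only that the repo ships the standard tokenizer files:
```python
from transformers import AutoTokenizer

# Load the BLOOM tokenizer from this repo and inspect a sample encoding
tokenizer = AutoTokenizer.from_pretrained("keminglu/pivoine-7b")
print(tokenizer.tokenize("Hello, world!"))
print(tokenizer("Hello, world!").input_ids)
```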
|
wavymulder/portraitplus
|
wavymulder
| 2023-05-05T21:59:07Z | 257,281 | 297 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-12-23T16:04:26Z |
---
language:
- en
thumbnail: "https://huggingface.co/wavymulder/portraitplus/resolve/main/imgs/page1.jpg"
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- safetensors
- diffusers
inference: true
---
**Portrait+**

[*CKPT DOWNLOAD LINK*](https://huggingface.co/wavymulder/portraitplus/resolve/main/portrait%2B1.0.ckpt) - this is a dreambooth model trained on a diverse set of close- to medium-range portraits of people.
Use `portrait+ style` in your prompt (I recommend at the start)
The goal was to create a model with a consistent portrait composition and consistent eyes. See the batch example below for the consistency of the model's eyes. This model can do several styles, so you'll want to guide it along depending on your goals. Note below in the document that prompting celebrities works a bit differently than prompting generic characters, since real people have a more photoreal presence in the base 1.5 model. Also note that fantasy concepts, like cyberpunk people or wizards, will require more rigid prompting for photoreal styles than something common like a person in a park.
Portrait+ works best at a 1:1 aspect ratio, though I've had success with tall aspect ratios as well.
Please see [this document where I share the parameters (prompt, sampler, seed, etc.) used for all example images above.](https://huggingface.co/wavymulder/portraitplus/resolve/main/parameters_for_samples.txt)
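If you'd rather run the model with the `diffusers` library instead of the ckpt, here's a minimal sketch (it assumes a CUDA GPU, and the prompt and sampler settings are illustrative, not the exact parameters from the document above):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the diffusers weights from this repo and generate a 1:1 portrait
pipe = StableDiffusionPipeline.from_pretrained("wavymulder/portraitplus", torch_dtype=torch.float16).to("cuda")
prompt = "portrait+ style photograph of a woman in a park, detailed eyes"
image = pipe(prompt, width=512, height=512, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("portrait.png")
```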
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run portraitplus:
[](https://huggingface.co/spaces/wavymulder/portraitplus)


|
chribeiro/a2c-PandaReachDense-v2
|
chribeiro
| 2023-05-05T21:58:33Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-05T21:53:27Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.34 +/- 0.18
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the trained policy from the Hub and load it (filename is assumed)
checkpoint = load_from_hub("chribeiro/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
lckidwell/album-cover-style
|
lckidwell
| 2023-05-05T21:51:46Z | 41 | 5 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-13T20:13:37Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Album-Cover-Style Dreambooth model
> trained by lckidwell with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Trained on ~80 album covers, mostly from the 50s and 60s, a mix of Jazz, pop, polka, religious, children's and other genres.
## Sample Prompts:
* Kanye plays jazz, albumcover style
* Swingin' with Henry Kissinger, albumcover style
* Jay Z Children's album, albumcover style
* Polka Party with Machine Gun Kelly, albumcover style
## Sample pictures of this concept:














## Moar Samples





|
MirageML/lowpoly-cyberpunk
|
MirageML
| 2023-05-05T21:32:43Z | 40 | 30 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-28T07:50:09Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
# Low Poly Cyberpunk on Stable Diffusion via Dreambooth
This is the Stable Diffusion model fine-tuned on the Low Poly Cyberpunk concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of lowpoly_cyberpunk**
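For example, with the `diffusers` library — a minimal sketch (it assumes a CUDA GPU; the prompt wording beyond the instance prompt is just an illustration):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load this Dreambooth checkpoint and generate with the instance prompt
pipe = StableDiffusionPipeline.from_pretrained("MirageML/lowpoly-cyberpunk", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of lowpoly_cyberpunk, neon city street").images[0]
image.save("lowpoly_cyberpunk.png")
```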
# Run on [Mirage](https://app.mirageml.com)
Run this model and explore text-to-3D on [Mirage](https://app.mirageml.com)!
Here is a sample output for this model:

# Share your Results and Reach us on [Discord](https://discord.gg/9B2Pu2bEvj)!
[](https://discord.gg/9B2Pu2bEvj)
[Image Source](https://www.behance.net/search/images?similarStyleImagesId=847895439)
|
MirageML/lowpoly-environment
|
MirageML
| 2023-05-05T21:32:29Z | 36 | 8 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-28T08:25:58Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
# Low Poly Environment on Stable Diffusion via Dreambooth
This is the Stable Diffusion model fine-tuned on the Low Poly Environment concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of lowpoly_environment**
# Run on [Mirage](https://app.mirageml.com)
Run this model and explore text-to-3D on [Mirage](https://app.mirageml.com)!
Here is a sample output for this model:

# Share your Results and Reach us on [Discord](https://discord.gg/9B2Pu2bEvj)!
[](https://discord.gg/9B2Pu2bEvj)
[Image Source](https://www.behance.net/gallery/76095417/Game-Environments?tracking_source=search_projects%7Cisometric+building)
|
mHossain/bangla-para-v1-410000
|
mHossain
| 2023-05-05T21:19:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-05T20:20:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bangla-para-v1-410000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bangla-para-v1-410000
This model is a fine-tuned version of [mHossain/bangla-para-v1-380000](https://huggingface.co/mHossain/bangla-para-v1-380000) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9209
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 18.2867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.1627 | 1.0 | 3375 | 0.9209 | 0.0 | 0.0 | 0.0 | 0.0 | 18.2867 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
HelloSecurity/BootLeggerAI
|
HelloSecurity
| 2023-05-05T21:19:12Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-05-05T21:13:11Z |
---
license: mit
---
# BootLeggerAI
A Python-based chatbot for coders
BootLegger AI is a Python script that uses deep learning, neural networks, and natural language processing to create a chatbot that can help users with their programming needs. The chatbot understands natural language and generates appropriate responses based on the user's input.
## Instructions for Use
1. Install the necessary libraries: Tensorflow and Numpy. You can do this using pip or any package manager of your choice.
2. Copy the code in the BootLeggerAI.py file into your Python editor of choice.
3. Run the code in your Python environment.
4. The chatbot will start running and prompt the user to enter their request.
5. Enter your request, and the chatbot will generate a response based on the predicted output.
6. If you encounter any issues, please see the Troubleshooting section below.
## Detailed instructions
1. Install Required Libraries: Before running the script, make sure you have installed all the required libraries. This script requires `tensorflow` and `numpy`. You can install them via pip or conda:
```
pip install tensorflow numpy
```
2. Prepare Input and Output Data: The script takes in an array of input data and an array of output data. Each element of the input array should be a string that represents a programming task you want help with. The output array should contain the corresponding output for each input task.
For example, you can create a numpy array for the input and output data like this:
```
import numpy as np
# Define input and output data
input_data = np.array(['create a python script', 'build a program', 'generate a code'])
output_data = np.array([['create', 'python', 'script'], ['build', 'program'], ['generate', 'code']])
```
3. Tokenize Input Data: To use the input data with the model, we need to tokenize it first. Tokenization is the process of converting text into numerical values. The function `tokenize_input` in the script takes in the input data and returns the tokenizer object, the tokenized input sequence, the maximum length of the input sequence, and the vocabulary size.
You can tokenize the input data like this:
```
from bootlegger_ai import tokenize_input
tokenizer, input_seq, max_len, vocab_size = tokenize_input(input_data)
```
4. Define the Neural Network Model: The next step is to define the neural network model. The function `define_model` in the script takes in the vocabulary size and maximum length of the input sequence and returns the model object.
You can define the model like this:
```
from bootlegger_ai import define_model
model = define_model(vocab_size, max_len)
```
5. Train the Neural Network Model: After defining the model, we need to train it with the input and output data. The function `train_model` in the script takes in the model object, input sequence, output data, and number of epochs to train the model. It returns the trained model object.
You can train the model like this:
```
from bootlegger_ai import train_model
model = train_model(model, input_seq, output_data)
```
6. Test the Model: After training the model, we can test it on new input data. The function `test_model` in the script takes in the model object, test data, tokenizer object, and maximum length of the input sequence. It returns the predictions for the test data.
You can test the model like this:
```
from bootlegger_ai import test_model
test_data = np.array(['I want to create a new website'])
predictions = test_model(model, test_data, tokenizer, max_len)
```
7. Generate Response: Finally, we can generate a response based on the predicted output. The function `generate_response` in the script takes in the predictions and tokenizer object and returns a response string.
You can generate a response like this:
```
from bootlegger_ai import generate_response
response = generate_response(predictions, tokenizer)
print(response)
```
And that's it! By following these steps, you can use the BootLegger AI script to generate responses to programming-related requests.
## Developed By
This script was developed by Adam Rivers and Hello Security LLC.
## Troubleshooting
If the chatbot is not generating appropriate responses, please ensure that the input data is relevant to the context of programming.
Additionally, you can try retraining the neural network model by modifying the input and output data in the script.
If you encounter any other issues, please feel free to reach out for assistance.
|
Hentai00/aurakasih
|
Hentai00
| 2023-05-05T20:59:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-05T20:50:29Z |
---
license: creativeml-openrail-m
---
|
radames/instruct-pix2pix-img2img
|
radames
| 2023-05-05T20:59:08Z | 12 | 19 |
diffusers
|
[
"diffusers",
"safetensors",
"image-to-image",
"license:mit",
"diffusers:StableDiffusionInstructPix2PixPipeline",
"region:us"
] |
image-to-image
| 2023-05-05T20:58:32Z |
---
license: mit
duplicated_from: timbrooks/instruct-pix2pix
library_name: diffusers
tags:
- image-to-image
---
# InstructPix2Pix: Learning to Follow Image Editing Instructions
GitHub: https://github.com/timothybrooks/instruct-pix2pix
<img src='https://instruct-pix2pix.timothybrooks.com/teaser.jpg'/>
## Example
To use `InstructPix2Pix`, install `diffusers` using `main` for now. The pipeline will be available in the next release.
```bash
pip install diffusers accelerate safetensors transformers
```
```python
import PIL
import requests
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline, EulerAncestralDiscreteScheduler
model_id = "timbrooks/instruct-pix2pix"
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16, safety_checker=None)
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
url = "https://raw.githubusercontent.com/timothybrooks/instruct-pix2pix/main/imgs/example.jpg"
def download_image(url):
image = PIL.Image.open(requests.get(url, stream=True).raw)
image = PIL.ImageOps.exif_transpose(image)
image = image.convert("RGB")
return image
image = download_image(url)
prompt = "turn him into cyborg"
images = pipe(prompt, image=image, num_inference_steps=10, image_guidance_scale=1).images
images[0]
```
|
MirageML/lowpoly-game-building
|
MirageML
| 2023-05-05T20:53:17Z | 27 | 16 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-28T08:52:37Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
# Low Poly Game Building on Stable Diffusion via Dreambooth
This is the Stable Diffusion model fine-tuned on the Low Poly Game Building concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of lowpoly_game_building**
# Run on [Mirage](https://app.mirageml.com)
Run this model and explore text-to-3D on [Mirage](https://app.mirageml.com)!
Here is a sample output for this model:

# Share your Results and Reach us on [Discord](https://discord.gg/9B2Pu2bEvj)!
[](https://discord.gg/9B2Pu2bEvj)
[Image Source](https://www.behance.net/guutv)
|
MirageML/lowpoly-landscape
|
MirageML
| 2023-05-05T20:53:06Z | 11 | 5 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-28T08:57:57Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
# Low Poly Landscape on Stable Diffusion via Dreambooth
This is the Stable Diffusion model fine-tuned on the Low Poly Landscape concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of lowpoly_landscape**
# Run on [Mirage](https://app.mirageml.com)
Run this model and explore text-to-3D on [Mirage](https://app.mirageml.com)!
Here is a sample output for this model:

# Share your Results and Reach us on [Discord](https://discord.gg/9B2Pu2bEvj)!
[](https://discord.gg/9B2Pu2bEvj)
[Image Source](https://www.deviantart.com/kautsar211086/art/Long-Time-No-See-457162094)
|
MirageML/lowpoly-world
|
MirageML
| 2023-05-05T20:52:48Z | 50 | 84 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-28T09:20:48Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
# Low Poly World on Stable Diffusion via Dreambooth
This is the Stable Diffusion model fine-tuned on the Low Poly World concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of lowpoly_world**
# Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run lowpoly-world:
[](https://huggingface.co/spaces/MirageML/lowpoly-world)
# Run on [Mirage](https://app.mirageml.com)
Run this model and explore text-to-3D on [Mirage](https://app.mirageml.com)!
Here is a sample output for this model:

# Share your Results and Reach us on [Discord](https://discord.gg/9B2Pu2bEvj)!
[](https://discord.gg/9B2Pu2bEvj)
[Image Source](https://www.behance.net/gallery/89934319/Low-Poly-Worlds)
|
Ahmedrai/Starfire
|
Ahmedrai
| 2023-05-05T20:52:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-05T20:06:32Z |
---
license: creativeml-openrail-m
---
|
MirageML/fantasy-scene
|
MirageML
| 2023-05-05T20:52:36Z | 39 | 20 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-28T09:25:54Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
# Fantasy Scene on Stable Diffusion via Dreambooth
This is the Stable Diffusion model fine-tuned on the Fantasy Scene concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of fantasy_scene**
# Run on [Mirage](https://app.mirageml.com)
Run this model and explore text-to-3D on [Mirage](https://app.mirageml.com)!
Here is a sample output for this model:

# Share your Results and Reach us on [Discord](https://discord.gg/9B2Pu2bEvj)!
[](https://discord.gg/9B2Pu2bEvj)
[Image Source #1](https://www.behance.net/search/images?tracking_source=typeahead_search_direct&similarStyleImagesId=647409143)
[Image Source #2](https://www.behance.net/romanklco)
|
MirageML/fantasy-sword
|
MirageML
| 2023-05-05T20:52:28Z | 16 | 13 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-28T09:26:54Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
# Fantasy Sword on Stable Diffusion via Dreambooth
This is the Stable Diffusion model fine-tuned on the Fantasy Sword concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of fantasy_sword**
# Run on [Mirage](https://app.mirageml.com)
Run this model and explore text-to-3D on [Mirage](https://app.mirageml.com)!
Here is a sample output for this model:

# Share your Results and Reach us on [Discord](https://discord.gg/9B2Pu2bEvj)!
[](https://discord.gg/9B2Pu2bEvj)
[Image Source](https://www.artstation.com/search?sort_by=relevance&query=fantasy%20sword)
|
amankishore/hpchapter
|
amankishore
| 2023-05-05T20:52:16Z | 34 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-08T00:01:38Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
jdmartinev/CREMA_D_Model
|
jdmartinev
| 2023-05-05T20:50:56Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-05-03T14:45:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: CREMA_D_Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CREMA_D_Model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8221
- Accuracy: 0.7322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7381 | 0.99 | 37 | 1.6700 | 0.3359 |
| 1.4143 | 1.99 | 74 | 1.4013 | 0.4878 |
| 1.1738 | 2.98 | 111 | 1.1820 | 0.6029 |
| 1.0229 | 4.0 | 149 | 1.0244 | 0.6532 |
| 0.8688 | 4.99 | 186 | 0.9101 | 0.7036 |
| 0.7578 | 5.99 | 223 | 0.8787 | 0.7112 |
| 0.705 | 6.98 | 260 | 0.8292 | 0.7229 |
| 0.6469 | 8.0 | 298 | 0.8509 | 0.7179 |
| 0.5684 | 8.99 | 335 | 0.8412 | 0.7288 |
| 0.5611 | 9.93 | 370 | 0.8221 | 0.7322 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
bilginn/bert-base-uncased-finetuned-cola
|
bilginn
| 2023-05-05T20:49:47Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-03T19:32:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5678267214677118
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5922
- Matthews Correlation: 0.5678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.207256119784435e-06
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.5811 | 1.0 | 2138 | 0.6179 | 0.4846 |
| 0.4698 | 2.0 | 4276 | 0.8083 | 0.5495 |
| 0.3161 | 3.0 | 6414 | 1.1152 | 0.5389 |
| 0.2499 | 4.0 | 8552 | 1.0719 | 0.5624 |
| 0.1755 | 5.0 | 10690 | 1.1734 | 0.5709 |
| 0.1511 | 6.0 | 12828 | 1.2383 | 0.5699 |
| 0.0738 | 7.0 | 14966 | 1.3802 | 0.5598 |
| 0.0677 | 8.0 | 17104 | 1.4711 | 0.5599 |
| 0.0509 | 9.0 | 19242 | 1.5751 | 0.5678 |
| 0.0397 | 10.0 | 21380 | 1.5922 | 0.5678 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
marcoliu/sd-pokemon-model-lora
|
marcoliu
| 2023-05-05T20:33:57Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-05T13:29:07Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - marcoliu/sd-pokemon-model-lora
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
seena18/tier3_satellite_image_classification
|
seena18
| 2023-05-05T20:21:35Z | 218 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-05T20:00:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: tier3_satellite_image_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tier3_satellite_image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4309
- Accuracy: 0.8085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1878 | 1.0 | 125 | 2.0795 | 0.6905 |
| 1.5347 | 2.0 | 250 | 1.5727 | 0.776 |
| 1.3524 | 3.0 | 375 | 1.4309 | 0.8085 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jeremyvictor/mt5-large-gecid-e8-b8
|
jeremyvictor
| 2023-05-05T20:05:15Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-05T17:31:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-large-gecid-e8-b8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-large-gecid-e8-b8
This model is a fine-tuned version of [google/mt5-large](https://huggingface.co/google/mt5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3000
- Rouge1: 64.4729
- Rouge2: 57.8072
- Rougel: 64.3868
- Rougelsum: 64.3569
- Gen Len: 18.7495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.8319 | 0.57 | 500 | 0.4310 | 61.7619 | 53.4157 | 61.6684 | 61.6628 | 18.7567 |
| 0.4258 | 1.13 | 1000 | 0.3541 | 62.8056 | 55.1747 | 62.7392 | 62.7231 | 18.7601 |
| 0.2777 | 1.7 | 1500 | 0.3231 | 63.4739 | 56.1433 | 63.366 | 63.3544 | 18.7546 |
| 0.2023 | 2.26 | 2000 | 0.3068 | 64.1314 | 57.5343 | 64.0453 | 64.024 | 18.7546 |
| 0.1432 | 2.83 | 2500 | 0.3000 | 64.4729 | 57.8072 | 64.3868 | 64.3569 | 18.7495 |
| 0.0976 | 3.39 | 3000 | 0.3257 | 64.7215 | 58.3266 | 64.6223 | 64.5957 | 18.7601 |
| 0.0811 | 3.96 | 3500 | 0.3112 | 64.7518 | 58.4888 | 64.6487 | 64.6454 | 18.7648 |
| 0.0472 | 4.52 | 4000 | 0.3389 | 64.9658 | 58.822 | 64.8741 | 64.8621 | 18.7592 |
| 0.0413 | 5.09 | 4500 | 0.3557 | 64.9468 | 58.8286 | 64.8609 | 64.8501 | 18.7592 |
| 0.0248 | 5.66 | 5000 | 0.3452 | 65.2004 | 59.2566 | 65.0876 | 65.0889 | 18.7605 |
| 0.0195 | 6.22 | 5500 | 0.3719 | 65.1043 | 59.083 | 65.0369 | 65.026 | 18.7541 |
| 0.013 | 6.79 | 6000 | 0.3947 | 65.3124 | 59.486 | 65.2434 | 65.2324 | 18.7571 |
| 0.0084 | 7.35 | 6500 | 0.4056 | 65.4053 | 59.6589 | 65.3249 | 65.3115 | 18.7580 |
| 0.0055 | 7.92 | 7000 | 0.4216 | 65.3303 | 59.5344 | 65.2475 | 65.2284 | 18.7567 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.11.0a0+b6df043
- Datasets 2.12.0
- Tokenizers 0.13.2
|
bazyl/gtsrb-model
|
bazyl
| 2023-05-05T20:05:05Z | 298 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:gtsrb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-07-02T10:39:06Z |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- gtsrb
metrics:
- accuracy
model-index:
- name: gtsrb-model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: bazyl/GTSRB
type: gtsrb
args: gtsrb
metrics:
- name: Accuracy
type: accuracy
value: 0.9993199591975519
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gtsrb-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the bazyl/GTSRB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0034
- Accuracy: 0.9993
## Model description
The German Traffic Sign Benchmark is a multi-class, single-image classification challenge held at the International Joint Conference on Neural Networks (IJCNN) 2011. We cordially invite researchers from relevant fields to participate: The competition is designed to allow for participation without special domain knowledge. Our benchmark has the following properties:
- Single-image, multi-class classification problem
- More than 40 classes
- More than 50,000 images in total
- Large, lifelike database
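For quick inference, a minimal sketch using the `transformers` image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

# Classify a local photo of a German traffic sign
classifier = pipeline("image-classification", model="bazyl/gtsrb-model")
print(classifier("traffic_sign.jpg"))  # placeholder path to your own image
```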
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2593 | 1.0 | 4166 | 0.1585 | 0.9697 |
| 0.2659 | 2.0 | 8332 | 0.0472 | 0.9900 |
| 0.2825 | 3.0 | 12498 | 0.0155 | 0.9971 |
| 0.0953 | 4.0 | 16664 | 0.0113 | 0.9983 |
| 0.1277 | 5.0 | 20830 | 0.0076 | 0.9985 |
| 0.0816 | 6.0 | 24996 | 0.0047 | 0.9988 |
| 0.0382 | 7.0 | 29162 | 0.0041 | 0.9990 |
| 0.0983 | 8.0 | 33328 | 0.0059 | 0.9990 |
| 0.1746 | 9.0 | 37494 | 0.0034 | 0.9993 |
| 0.1153 | 10.0 | 41660 | 0.0038 | 0.9990 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gdecarlo/fabric_test
|
gdecarlo
| 2023-05-05T19:38:50Z | 243 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-05T19:27:00Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: fabric_test
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.3731343150138855
---
# fabric_test
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### fabric linen

#### fabric morley

#### fabric saten

|
Bainbridge/gpt2-kl_01_07_hscnspecial-hs_cn
|
Bainbridge
| 2023-05-05T19:35:19Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-05T18:20:56Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-kl_01_07_hscnspecial-hs_cn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-kl_01_07_hscnspecial-hs_cn
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 21
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 73.5948 | 0.02 | 10 | 69.5786 |
| 46.1586 | 0.04 | 20 | 32.9619 |
| 13.6007 | 0.06 | 30 | 10.6513 |
| 6.8042 | 0.08 | 40 | 4.2289 |
| 2.8577 | 0.1 | 50 | 2.1080 |
| 1.447 | 0.12 | 60 | 1.1006 |
| 1.2972 | 0.14 | 70 | 0.9296 |
| 0.9482 | 0.16 | 80 | 0.7053 |
| 0.7817 | 0.18 | 90 | 0.7118 |
| 0.7763 | 0.2 | 100 | 0.6232 |
| 0.6719 | 0.22 | 110 | 0.5972 |
| 0.6852 | 0.24 | 120 | 0.5835 |
| 0.7033 | 0.26 | 130 | 0.5850 |
| 0.6782 | 0.28 | 140 | 0.5815 |
| 0.6635 | 0.3 | 150 | 0.5757 |
| 0.6405 | 0.32 | 160 | 0.5796 |
| 0.5739 | 0.34 | 170 | 0.5705 |
| 0.7139 | 0.36 | 180 | 0.5606 |
| 0.6883 | 0.38 | 190 | 0.5592 |
| 0.6429 | 0.4 | 200 | 0.5586 |
| 0.7397 | 0.42 | 210 | 0.5511 |
| 0.6993 | 0.44 | 220 | 0.5484 |
| 0.5946 | 0.46 | 230 | 0.5515 |
| 0.6172 | 0.48 | 240 | 0.5473 |
| 0.6077 | 0.5 | 250 | 0.5442 |
| 0.6148 | 0.52 | 260 | 0.5435 |
| 0.6213 | 0.54 | 270 | 0.5425 |
| 0.6431 | 0.56 | 280 | 0.5414 |
| 0.6459 | 0.58 | 290 | 0.5392 |
| 0.604 | 0.6 | 300 | 0.5394 |
| 0.603 | 0.62 | 310 | 0.5368 |
| 0.7207 | 0.64 | 320 | 0.5387 |
| 0.5689 | 0.66 | 330 | 0.5407 |
| 0.5721 | 0.68 | 340 | 0.5377 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.12.0a0+bd13bc6
- Datasets 2.12.0
- Tokenizers 0.13.3
|
cuadron11/5
|
cuadron11
| 2023-05-05T19:16:37Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-05T17:44:27Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: '5'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 5
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2255
- Precision: 0.6432
- Recall: 0.595
- F1: 0.6182
- Accuracy: 0.9709
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 29 | 0.3210 | 0.0 | 0.0 | 0.0 | 0.9324 |
| No log | 2.0 | 58 | 0.2694 | 0.0 | 0.0 | 0.0 | 0.9324 |
| No log | 3.0 | 87 | 0.2216 | 0.0 | 0.0 | 0.0 | 0.9316 |
| No log | 4.0 | 116 | 0.2115 | 0.25 | 0.035 | 0.0614 | 0.9403 |
| No log | 5.0 | 145 | 0.1740 | 0.3465 | 0.175 | 0.2326 | 0.9512 |
| No log | 6.0 | 174 | 0.1589 | 0.42 | 0.315 | 0.36 | 0.9566 |
| No log | 7.0 | 203 | 0.1514 | 0.4797 | 0.295 | 0.3653 | 0.9584 |
| No log | 8.0 | 232 | 0.1686 | 0.4576 | 0.405 | 0.4297 | 0.9624 |
| No log | 9.0 | 261 | 0.1840 | 0.5971 | 0.415 | 0.4897 | 0.9646 |
| No log | 10.0 | 290 | 0.1571 | 0.5505 | 0.545 | 0.5477 | 0.9646 |
| No log | 11.0 | 319 | 0.1809 | 0.6158 | 0.545 | 0.5782 | 0.9700 |
| No log | 12.0 | 348 | 0.1763 | 0.6129 | 0.57 | 0.5907 | 0.9681 |
| No log | 13.0 | 377 | 0.1902 | 0.5571 | 0.61 | 0.5823 | 0.9655 |
| No log | 14.0 | 406 | 0.1916 | 0.5842 | 0.555 | 0.5692 | 0.9673 |
| No log | 15.0 | 435 | 0.1895 | 0.6335 | 0.605 | 0.6189 | 0.9697 |
| No log | 16.0 | 464 | 0.1951 | 0.5880 | 0.635 | 0.6106 | 0.9667 |
| No log | 17.0 | 493 | 0.1918 | 0.6324 | 0.585 | 0.6078 | 0.9702 |
| 0.0838 | 18.0 | 522 | 0.1957 | 0.6020 | 0.605 | 0.6035 | 0.9699 |
| 0.0838 | 19.0 | 551 | 0.1886 | 0.6 | 0.6 | 0.6 | 0.9681 |
| 0.0838 | 20.0 | 580 | 0.1992 | 0.6158 | 0.585 | 0.6 | 0.9702 |
| 0.0838 | 21.0 | 609 | 0.2043 | 0.625 | 0.6 | 0.6122 | 0.9706 |
| 0.0838 | 22.0 | 638 | 0.2110 | 0.6243 | 0.59 | 0.6067 | 0.9707 |
| 0.0838 | 23.0 | 667 | 0.2121 | 0.6421 | 0.61 | 0.6256 | 0.9714 |
| 0.0838 | 24.0 | 696 | 0.2112 | 0.6455 | 0.61 | 0.6272 | 0.9713 |
| 0.0838 | 25.0 | 725 | 0.2150 | 0.6392 | 0.62 | 0.6294 | 0.9711 |
| 0.0838 | 26.0 | 754 | 0.2229 | 0.6264 | 0.57 | 0.5969 | 0.9702 |
| 0.0838 | 27.0 | 783 | 0.2219 | 0.6339 | 0.58 | 0.6057 | 0.9706 |
| 0.0838 | 28.0 | 812 | 0.2239 | 0.6429 | 0.585 | 0.6126 | 0.9707 |
| 0.0838 | 29.0 | 841 | 0.2211 | 0.6402 | 0.605 | 0.6221 | 0.9713 |
| 0.0838 | 30.0 | 870 | 0.2230 | 0.6364 | 0.595 | 0.6150 | 0.9709 |
| 0.0838 | 31.0 | 899 | 0.2244 | 0.6432 | 0.595 | 0.6182 | 0.9709 |
| 0.0838 | 32.0 | 928 | 0.2255 | 0.6432 | 0.595 | 0.6182 | 0.9709 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Parcurcik/code
|
Parcurcik
| 2023-05-05T19:07:26Z | 147 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-05T19:03:52Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: code
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code
This model is a fine-tuned version of [sberbank-ai/rugpt3medium_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3medium_based_on_gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1352
- Accuracy: 0.7530
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
pcalhoun/LLaMA-7b-JonathanSwift
|
pcalhoun
| 2023-05-05T18:58:30Z | 0 | 0 |
Transformers
|
[
"Transformers",
"text2text-generation",
"license:apache-2.0",
"region:us"
] |
text2text-generation
| 2023-05-05T15:28:54Z |
---
license: apache-2.0
library_name: Transformers
pipeline_tag: text2text-generation
---
Uses PEFT weights: https://huggingface.co/blog/peft
Fine-tuned on about 4,500 strings of the form [modern translation of a Jonathan Swift passage] + " |SWIFTIFY| " + [original Jonathan Swift text] + " |END|"
Some dataset examples:
> How has he been bombarded, harassed, and battered by just one smart aleck who promises never to leave him, dead or alive! |SWIFTIFY| How hath he been pelted, pestered, and pounded by one single wag, who promiseth never to forsake him living or dead! |END|
> The third advantage is the convenience of secure harbors and ports for transporting their own finished goods and importing those of others that are minimally processed, as mutual trade demands. |SWIFTIFY| The third, is the conveniency of safe ports and havens, to carry out their own goods, as much manufactured, and bring in those of others, as little manufactured as the nature of mutual commerce will allow. |END|
Some example output for novel input sentences:
> The spectroscopy part of what you do — spectroscopes — you can use them to judge if food is rotten, if a painting is authentic, if something is an explosive. What do you use them to do? |SWIFTIFY| That branch of your art which you call the Spectra-physics; -- Spectroscope: You may apply those instruments in judging whether meat be putrid or not, whether a picture be genuine or counterfeit, and so on with other things that are called Explosives. Now, what service doth all this perform? |END|
> Colorado Parks and Wildlife has a comprehensive list of animals you can own without a permit in Colorado, from dogs and cats to red kangaroos and wallaroos. If it isn’t on the list, you need a permit. Capybaras are not on the list. |SWIFTIFY| The Colorado Division of Parks & Wildlife lists all domesticated animals that may be kept within the state's borders without a license: Dogs, Cats, Red Kangaroo, Wallaroo, etc., but no capybara is listed. |END|
> And I think that's the true beauty of a good film: the allegory, the moral, the true message is at the core – it's not fed to us directly – and it's up to the viewer to take what they need from it. |SWIFTIFY| And this, I believe, is the real excellence in all good pictures; for the allegorical, the moral, or the true lesson lies at the bottom -- it is not poured down our throats -- but left to be taken as we please. |END|
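A rough sketch of loading the adapter and generating with the prompt format above (the base-model repo id, dtype, and device handling are assumptions, not part of this repo):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "huggyllama/llama-7b"  # assumed LLaMA-7B base; substitute your own copy
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "pcalhoun/LLaMA-7b-JonathanSwift")

# Generate in the |SWIFTIFY| ... |END| format described above
prompt = "The weather today is dreadful. |SWIFTIFY| "
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output[0], skip_special_tokens=True).split("|END|")[0])
```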
|
vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-tipo
|
vg055
| 2023-05-05T18:37:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-05T17:06:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-tipo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-tipo
This model is a fine-tuned version of [vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation](https://huggingface.co/vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0472
- F1: 0.9902
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0479 | 1.0 | 14159 | 0.0521 | 0.9878 |
| 0.0154 | 2.0 | 28318 | 0.0472 | 0.9902 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AppledaMapple/Reenatest
|
AppledaMapple
| 2023-05-05T18:21:22Z | 43 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-14T14:30:31Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
- diffusers
inference: true
---
## Reena Character Model
A model trained on OC character Reena

|
mHossain/bangla-para-v1-350000
|
mHossain
| 2023-05-05T18:19:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-05T14:25:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bangla-para-v1-350000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bangla-para-v1-350000
This model is a fine-tuned version of [mHossain/bangla-para-v1-320000](https://huggingface.co/mHossain/bangla-para-v1-320000) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9279
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 18.3427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5000
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.1838 | 1.0 | 6750 | 0.9279 | 0.0 | 0.0 | 0.0 | 0.0 | 18.3427 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
danny3/codehelper-ds
|
danny3
| 2023-05-05T18:11:00Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-04T21:41:08Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codehelper-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codehelper-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the hedged sketch after this list):
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
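A hedged sketch of how these values might map onto `transformers` `TrainingArguments`; this is not the exact script used for this run, and the dataset, model, and `Trainer` setup are omitted.
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above (assumes a single GPU,
# so total_train_batch_size = 32 * 8 gradient-accumulation steps = 256).
args = TrainingArguments(
    output_dir="codehelper-ds",
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=8,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1,
    seed=42,
    fp16=True,  # "Native AMP" mixed precision; Adam betas/epsilon keep their defaults
)
```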
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
safwaan21/q-FrozenLake-v1-4x4-noSlippery
|
safwaan21
| 2023-05-05T17:52:51Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-05T17:52:49Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="safwaan21/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
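A more self-contained sketch, assuming the pickled file holds a dict with `"env_id"` and `"qtable"` entries (as in the Deep RL course notebooks); adjust the keys and `gym.make` kwargs if your pickle differs:
```python
import pickle
import numpy as np
import gymnasium as gym
from huggingface_hub import hf_hub_download

# Download and unpickle the saved model (assumed keys: "env_id", "qtable")
path = hf_hub_download(repo_id="safwaan21/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```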
|
thejuuuuu/torgo_b6_2
|
thejuuuuu
| 2023-05-05T17:36:46Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-05T17:35:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: torgo_b6_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# torgo_b6_2
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 80
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
pnparam/torgo_hel2
|
pnparam
| 2023-05-05T17:31:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-05T15:36:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: torgo_hel2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# torgo_hel2
This model is a fine-tuned version of [pnparam/torgo_healthy_2_40](https://huggingface.co/pnparam/torgo_healthy_2_40) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 80
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
EExe/my_ppo_LunarLand
|
EExe
| 2023-05-05T17:25:42Z | 0 | 0 | null |
[
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-05T17:25:20Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -30.85 +/- 140.48
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'debug'
'gym_id': 'LunarLander-v2'
'hface_repo': 'EExe/my_ppo_LunarLand'
'learning_rate': 0.00025
'seed': 42
'total_timesteps': 2000000
'num_envs': 4
'num_steps': 1024
'gamma': 0.999
'gae_lambda': 0.98
'num_minibatches': 64
'num_epochs': 10
'clip_coeff': 0.2
'ent_coeff': 0.1
'vf_coeff': 0.1
'max_gradient_norm': 0.5
'target_kl': 0.015}
```
|
safwaan21/ppo-LunarLander-v2
|
safwaan21
| 2023-05-05T17:24:18Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-26T20:48:33Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.01 +/- 17.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
RamAnanth1/sd-1-5-ram
|
RamAnanth1
| 2023-05-05T17:17:40Z | 46 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-08T04:42:30Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### SD-1-5-Ram Dreambooth model trained by RamAnanth1 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Bainbridge/gpt2-kl_01_05_hscnspecial-hs_cn
|
Bainbridge
| 2023-05-05T17:15:14Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-05T16:10:18Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-kl_01_05_hscnspecial-hs_cn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-kl_01_05_hscnspecial-hs_cn
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 21
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 73.5512 | 0.02 | 10 | 69.5664 |
| 46.0783 | 0.04 | 20 | 32.9183 |
| 13.5625 | 0.06 | 30 | 10.6417 |
| 6.8251 | 0.08 | 40 | 4.2115 |
| 3.5615 | 0.1 | 50 | 2.0291 |
| 1.4471 | 0.12 | 60 | 1.0824 |
| 1.1156 | 0.14 | 70 | 0.9043 |
| 0.9839 | 0.16 | 80 | 0.7788 |
| 0.75 | 0.18 | 90 | 0.6556 |
| 0.7311 | 0.2 | 100 | 0.6302 |
| 0.6331 | 0.22 | 110 | 0.6142 |
| 0.6426 | 0.24 | 120 | 0.6279 |
| 0.7058 | 0.26 | 130 | 0.5762 |
| 0.6443 | 0.28 | 140 | 0.5702 |
| 0.6437 | 0.3 | 150 | 0.5679 |
| 0.6041 | 0.32 | 160 | 0.5654 |
| 0.5403 | 0.34 | 170 | 0.5669 |
| 0.6511 | 0.36 | 180 | 0.5710 |
| 0.651 | 0.38 | 190 | 0.5604 |
| 0.6074 | 0.4 | 200 | 0.5608 |
| 0.7023 | 0.42 | 210 | 0.5603 |
| 0.6567 | 0.44 | 220 | 0.5581 |
| 0.5573 | 0.46 | 230 | 0.5606 |
| 0.5748 | 0.48 | 240 | 0.5573 |
| 0.5728 | 0.5 | 250 | 0.5559 |
| 0.6418 | 0.52 | 260 | 0.5447 |
| 0.5855 | 0.54 | 270 | 0.5406 |
| 0.6127 | 0.56 | 280 | 0.5423 |
| 0.6065 | 0.58 | 290 | 0.5415 |
| 0.5687 | 0.6 | 300 | 0.5409 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.12.0a0+bd13bc6
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Norod78/sd15-fluentui-emoji
|
Norod78
| 2023-05-05T17:14:27Z | 42 | 7 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"stable-diffusion-diffusers",
"en",
"dataset:Norod78/microsoft-fluentui-emoji-512-whitebg",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-02T05:31:23Z |
---
license: creativeml-openrail-m
language:
- en
thumbnail: "https://huggingface.co/Norod78/sd15-fluentui-emoji/resolve/main/sample_images/sd15-fluentui-emoji-Thumbnail.png"
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
datasets:
- Norod78/microsoft-fluentui-emoji-512-whitebg
inference: true
widget:
- text: A cute dog flat
- text: A cute dog high contrast
- text: A cute dog color
- text: A cute dog
---
# SDv1.5 sd15-fluentui-emoji model
### Stable-Diffusion v1.5 fine-tuned for 10k steps using [Huggingface Diffusers train_text_to_image script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) upon [Norod78/microsoft-fluentui-emoji-512-whitebg](https://huggingface.co/datasets/Norod78/microsoft-fluentui-emoji-512-whitebg)
# The Emoji file names were converted into the text descriptions, which made the model learn a few special words: "flat", "high contrast" and "color"

## A few sample pictures generated with this model are available [here](https://huggingface.co/Norod78/sd15-fluentui-emoji/tree/main/sample_images)
|
Norod78/SD15-VinageStyle
|
Norod78
| 2023-05-05T17:14:21Z | 42 | 15 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"stable-diffusion-diffusers",
"en",
"dataset:Norod78/vintage-blip-captions",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-12-21T21:01:18Z |
---
license: creativeml-openrail-m
language:
- en
thumbnail: "https://huggingface.co/Norod78/SD15-VinageStyle/resolve/main/sample_images/SD15-VintageStyle-Thumbnail.jpg"
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
datasets:
- Norod78/vintage-blip-captions
inference: true
widget:
- text: A Pulp Cover featuring Gal Gadot, very detailed, clean, high quality, sharp image, Saturno Butto
- text: A photo of an astronaut riding a horse on mars, Vintage style, Pulp Cover, very detailed, clean, high quality, sharp image, Dave Dorman
- text: A beatiful person, Vintage face
- text: A Vintage style commercial for cat food
---
# SDv1.5 SD15-VinageStyle model, trained by Norod78 in two parts.
### First, Stable-Diffusion v1.5 was fine-tuned for 10k steps using the [Huggingface Diffusers train_text_to_image script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) upon [Norod78/vintage-blip-captions](https://huggingface.co/datasets/Norod78/vintage-blip-captions); it then underwent further fine-tuning with Dreambooth using the same images as the dataset, but rather than being BLIP-captioned, they were split into the "Vintage style", "Vintage face" and "Pulp cover" concepts.
### Dreambooth model was trained with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
## Because the model was first fine-tuned on the whole dataset and only then fine-tuned again to learn each individual concept, you can use prompts without Trigger-Words and still get a subtle "Vintage" touch
# Trigger-Words are: "Vintage", "Vintage style", "Vintage face", "Pulp cover"

## A few sample pictures generated with this mode (more available [here](https://huggingface.co/Norod78/SD15-VinageStyle/tree/main/sample_images)):
A photo of Gal Gadot as wonderwoman, Vintage style, very detailed, clean, high quality, sharp image.Negative prompt: grainy, blurry, text, watermark, inconsistent, smudged.Steps: 40, Sampler: DPM++ 2M Karras, CFG scale: 7.5, Seed: 3486356206, Face restoration: CodeFormer, Size: 512x512, Model hash: 33006be6, Model: VintageStyle, Batch size: 4, Batch pos: 2

A photo of Gal Gadot as wonderwoman fighting against Cthulhu, Vintage, very detailed, clean, high quality, sharp image, ,Naoto Hattori.Negative prompt: grainy, blurry, text, watermark, inconsistent, smudged.Steps: 40, Sampler: DPM++ 2M Karras, CFG scale: 7.5, Seed: 3408435550, Face restoration: CodeFormer, Size: 512x512, Model hash: 33006be6, Model: VintageStyle, Batch size: 4, Batch pos: 3

|
Norod78/claymationxmas
|
Norod78
| 2023-05-05T17:14:10Z | 36 | 7 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"stable-diffusion-diffusers",
"en",
"dataset:Norod78/ChristmasClaymation-blip-captions",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-12-14T17:51:54Z |
---
license: creativeml-openrail-m
language:
- en
thumbnail: "https://huggingface.co/Norod78/claymationxmas/resolve/main/sample_images/grid-0167-3362370631-A_TARDIS_in_the_snow_with_Santa_ClaymationXmas,_very_detailed,_clean,_high_quality,_sharp_image.jpg"
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
datasets:
- Norod78/ChristmasClaymation-blip-captions
inference: true
widget:
- text: A Gal Gadot in the snow with Santa ClaymationXmas, very detailed, clean, high quality, sharp image
- text: A TARDIS in the snow with Santa ClaymationXmas, very detailed, clean, high quality, sharp image
---
### SDv1.5 ClaymationXmas Dreambooth model trained by Norod78 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
## Use ClaymationXmas in your prompt
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:


|
Norod78/sd-simpsons-model
|
Norod78
| 2023-05-05T17:14:04Z | 39 | 17 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"dataset:Norod78/simpsons-blip-captions",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-10T16:41:01Z |
---
license: creativeml-openrail-m
language:
- en
thumbnail: "https://huggingface.co/Norod78/sd-simpsons-model/raw/main/examples/00496-2202810362-A%20beautiful%20hungry%20demon%20girl,%20John%20Philip%20Falter,%20Very%20detailed%20painting,%20Mark%20Ryden.jpg"
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
datasets:
- Norod78/simpsons-blip-captions
inference: true
---
# Simpsons diffusion
*Stable Diffusion fine-tuned on images related to "The Simpsons".*
If you want more details on how to generate your own BLIP-captioned dataset, see this [colab](https://colab.research.google.com/gist/Norod/ee6ee3c4bf11c2d2be531d728ec30824/buildimagedatasetwithblipcaptionsanduploadtohf.ipynb)
Training was done using a slightly modified version of Hugging-Face's text to image training [example script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py)
## About
Put in a text prompt and generate cartoony/simpsony images
**A beautiful hungry demon girl, John Philip Falter, Very detailed painting, Mark Ryden**

**Gal Gadot, cartoon**

## More examples
The [examples](https://huggingface.co/Norod78/sd-simpsons-model/tree/main/examples) folder contains a few images generated by this model's ckpt file using [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui), which means their EXIF info contains the parameters used to generate them
## Sample code
```py
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
import torch
# this will substitute the default PNDM scheduler for K-LMS
lms = LMSDiscreteScheduler(
beta_start=0.00085,
beta_end=0.012,
beta_schedule="scaled_linear"
)
guidance_scale=9
seed=7777
steps=100
model_id = "Norod78/sd-simpsons-model"
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=lms, torch_dtype=torch.float16)
pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
generator = torch.Generator("cuda").manual_seed(seed)
# pass the settings defined above so they actually affect generation
image = pipe(prompt, guidance_scale=guidance_scale, num_inference_steps=steps, generator=generator).images[0]
image.save("astronaut_rides_horse.png")
```
## Dataset and Training
Finetuned for 10,000 iterations upon [Runway ML's Stable-Diffusion v1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5) on [BLIP captioned Simpsons images](https://huggingface.co/datasets/Norod78/simpsons-blip-captions) using 1xA5000 GPU on my home desktop computer
Trained by [@Norod78](https://twitter.com/Norod78)
|
Norod78/sd21-hearthstone-cards
|
Norod78
| 2023-05-05T17:13:58Z | 15 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"stable-diffusion-diffusers",
"en",
"dataset:Norod78/hearthstone-cards-512",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-01-06T12:02:06Z |
---
license: creativeml-openrail-m
language:
- en
thumbnail: "https://huggingface.co/Norod78/sd21-hearthstone-cards/resolve/main/sample_images/00005-166904889-Snoop%20Dogg%20music%20power%20Hearthstone%20card.png"
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
datasets:
- Norod78/hearthstone-cards-512
inference: true
widget:
- text: 3 Cute dog, Fluff. Hearthstone card
- text: Gal Gadot Super Wonderwoman power. Hearthstone card
- text: Cute Pikachu Pokemon Electricity buzzzz Hearthstone card
- text: 4 Snoop Dogg music power Hearthstone card
library_name: diffusers
pipeline_tag: text-to-image
---
# SDv2.1 sd21-hearthstone-cards model
### Stable-Diffusion v2.1 fine-tuned for 10k steps using [Huggingface Diffusers train_text_to_image script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) upon [Norod78/hearthstone-cards-512](https://huggingface.co/datasets/Norod78/hearthstone-cards-512)
# Stable-Diffusion Hearthstone card generator. The first digit in the prompt controls the Mana-cost (pretty well), then card name, then special ability and description, then "Hearthstone card".

## A few sample pictures generated with this model are available [here](https://huggingface.co/Norod78/sd21-hearthstone-cards/tree/main/sample_images)
Please note that the entire training set contains actual Hearthstone card images which are copyrighted by Blizzard, so it is possible that the generated images contain copyrighted elements; they should only be used for your private entertainment
Trained by [@Norod78](https://twitter.com/Norod78)
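A hedged `diffusers` sketch using the prompt format described above (the leading digit sets the mana cost); the sampler settings are illustrative, not the ones used for the sample images:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Norod78/sd21-hearthstone-cards", torch_dtype=torch.float16
).to("cuda")

# Leading digit ~ mana cost, then card name, ability/description, then "Hearthstone card"
prompt = "4 Snoop Dogg music power Hearthstone card"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("hearthstone_card.png")
```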
|
nolanaatama/skmchnstyl
|
nolanaatama
| 2023-05-05T17:04:24Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-05T16:59:48Z |
---
license: creativeml-openrail-m
---
|
rifatul123/Primary_doctor_v1
|
rifatul123
| 2023-05-05T16:57:39Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"pytorch",
"gpt2",
"biology",
"medical",
"chemistry",
"text-generation-inference",
"text-generation",
"en",
"region:us"
] |
text-generation
| 2023-05-05T08:35:44Z |
---
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: text-generation
tags:
- biology
- medical
- chemistry
- text-generation-inference
---





# Model Card for Model ID
This modelcard describes a fine-tuned GPT-2 language model for medical research using a personally collected dataset.
The model is intended for text generation in the medical research domain.
## Model Details
This modelcard describes a fine-tuned GPT-2 language model for medical research using a personally collected dataset. The model is intended for text generation in the medical research domain.
### Model Description
The model has been fine-tuned on a GPT-2 architecture and trained with a task-specific parameter for text generation. The do_sample parameter is set to true, which means that the model can generate text on its own rather than simply copying from the input. The max_length parameter is set to 50, which means that the maximum length of the generated text will be 50 tokens.
- **Developed by:** [OpenAI]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [Language Model]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [GPT-2]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
### Direct Use
This model can be used for text generation in the medical research domain. It can be used to generate text for a variety of purposes, such as research papers, reports, and summaries.
### Downstream Use [optional]
The model can be fine-tuned for downstream tasks such as summarization, question answering, and text classification.
### Out-of-Scope Use
This model may not perform as well on text outside the medical research domain. It is important to carefully evaluate the generated text to ensure that it is appropriate for the intended use.
## Bias, Risks, and Limitations
This modelcard acknowledges that all language models have limitations and potential biases. The model may produce biased or inaccurate outputs if the input data contains bias or if the training data is not diverse enough. The risks of using the model include the possibility of generating misleading or harmful information.
### Recommendations
To mitigate potential risks and limitations, users of the model should carefully evaluate the generated text and consider the following recommendations:
1. Evaluate the input data for potential bias and ensure that it is diverse and representative.
2. Consider fine-tuning the model on additional data to improve its accuracy and reduce the risk of bias.
3. Review and edit the generated text before use to ensure that it is appropriate for the intended purpose.
4. Provide clear and transparent documentation of the model's limitations and potential biases to users and stakeholders.
## How to Get Started with the Model
To use the model, load it in your preferred programming language using the transformers library, and pass in the input text. The model will generate text based on the input, using the task-specific parameters that have been set.
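A minimal sketch with the `transformers` library, assuming this repository's weights load as a standard GPT-2 causal-LM checkpoint; the generation settings mirror the `do_sample=True` / `max_length=50` values described above:
```python
from transformers import pipeline

# Assumption: the repo id below is this model card's repository and contains GPT-2 weights.
generator = pipeline("text-generation", model="rifatul123/Primary_doctor_v1")
result = generator(
    "Recent findings on the management of type 2 diabetes suggest",
    do_sample=True,
    max_length=50,
)
print(result[0]["generated_text"])
```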
|
4bd4774h/bert-base-uncased-finetuned-cola
|
4bd4774h
| 2023-05-05T16:54:05Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-05T15:20:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5815775806078913
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0375
- Matthews Correlation: 0.5816
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.999174630178768e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4594 | 1.0 | 1069 | 0.4619 | 0.5155 |
| 0.3105 | 2.0 | 2138 | 0.5069 | 0.5807 |
| 0.2003 | 3.0 | 3207 | 1.0033 | 0.5524 |
| 0.1074 | 4.0 | 4276 | 1.0375 | 0.5816 |
| 0.0715 | 5.0 | 5345 | 1.1228 | 0.5743 |
| 0.0355 | 6.0 | 6414 | 1.3127 | 0.5728 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-polaridad
|
vg055
| 2023-05-05T16:52:06Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-05T10:27:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-polaridad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-polaridad
This model is a fine-tuned version of [vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation](https://huggingface.co/vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5996
- F1: 0.7468
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.5823 | 1.0 | 14159 | 0.5671 | 0.7452 |
| 0.4536 | 2.0 | 28318 | 0.5996 | 0.7468 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
cuadron11/modelBsc5
|
cuadron11
| 2023-05-05T16:43:35Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-05T15:52:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: modelBsc5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modelBsc5
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1546
- Precision: 0.5567
- Recall: 0.6075
- F1: 0.5810
- Accuracy: 0.9708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 29 | 0.2697 | 0.0 | 0.0 | 0.0 | 0.9446 |
| No log | 2.0 | 58 | 0.2357 | 0.0 | 0.0 | 0.0 | 0.9446 |
| No log | 3.0 | 87 | 0.2176 | 0.0 | 0.0 | 0.0 | 0.9446 |
| No log | 4.0 | 116 | 0.1889 | 0.0541 | 0.0108 | 0.0179 | 0.9406 |
| No log | 5.0 | 145 | 0.1800 | 0.1613 | 0.0269 | 0.0461 | 0.9477 |
| No log | 6.0 | 174 | 0.1625 | 0.2745 | 0.0753 | 0.1181 | 0.9532 |
| No log | 7.0 | 203 | 0.1502 | 0.4306 | 0.1667 | 0.2403 | 0.9571 |
| No log | 8.0 | 232 | 0.1426 | 0.3810 | 0.2581 | 0.3077 | 0.9576 |
| No log | 9.0 | 261 | 0.1459 | 0.5586 | 0.3333 | 0.4175 | 0.9638 |
| No log | 10.0 | 290 | 0.1288 | 0.5732 | 0.5054 | 0.5371 | 0.9636 |
| No log | 11.0 | 319 | 0.1241 | 0.4769 | 0.5538 | 0.5124 | 0.9638 |
| No log | 12.0 | 348 | 0.1282 | 0.5189 | 0.5161 | 0.5175 | 0.9664 |
| No log | 13.0 | 377 | 0.1335 | 0.4232 | 0.6667 | 0.5177 | 0.9595 |
| No log | 14.0 | 406 | 0.1314 | 0.5854 | 0.5161 | 0.5486 | 0.9693 |
| No log | 15.0 | 435 | 0.1357 | 0.4772 | 0.6183 | 0.5386 | 0.9653 |
| No log | 16.0 | 464 | 0.1382 | 0.4398 | 0.6290 | 0.5177 | 0.9638 |
| No log | 17.0 | 493 | 0.1362 | 0.5634 | 0.6452 | 0.6015 | 0.9705 |
| 0.1141 | 18.0 | 522 | 0.1466 | 0.5 | 0.6505 | 0.5654 | 0.9669 |
| 0.1141 | 19.0 | 551 | 0.1497 | 0.5441 | 0.5968 | 0.5692 | 0.9703 |
| 0.1141 | 20.0 | 580 | 0.1375 | 0.5520 | 0.6559 | 0.5995 | 0.9700 |
| 0.1141 | 21.0 | 609 | 0.1373 | 0.5707 | 0.6290 | 0.5985 | 0.9722 |
| 0.1141 | 22.0 | 638 | 0.1490 | 0.5777 | 0.6398 | 0.6071 | 0.9710 |
| 0.1141 | 23.0 | 667 | 0.1550 | 0.5314 | 0.6828 | 0.5976 | 0.9679 |
| 0.1141 | 24.0 | 696 | 0.1506 | 0.5397 | 0.6935 | 0.6071 | 0.9679 |
| 0.1141 | 25.0 | 725 | 0.1528 | 0.5346 | 0.6237 | 0.5757 | 0.9703 |
| 0.1141 | 26.0 | 754 | 0.1503 | 0.5764 | 0.6290 | 0.6015 | 0.9715 |
| 0.1141 | 27.0 | 783 | 0.1446 | 0.5767 | 0.5860 | 0.5813 | 0.9722 |
| 0.1141 | 28.0 | 812 | 0.1486 | 0.5377 | 0.6129 | 0.5729 | 0.9701 |
| 0.1141 | 29.0 | 841 | 0.1538 | 0.5577 | 0.6237 | 0.5888 | 0.9703 |
| 0.1141 | 30.0 | 870 | 0.1543 | 0.5545 | 0.6022 | 0.5773 | 0.9701 |
| 0.1141 | 31.0 | 899 | 0.1551 | 0.5517 | 0.6022 | 0.5758 | 0.9708 |
| 0.1141 | 32.0 | 928 | 0.1546 | 0.5567 | 0.6075 | 0.5810 | 0.9708 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
cuadron11/modelBeto5
|
cuadron11
| 2023-05-05T16:43:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-05T15:52:22Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: modelBeto5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modelBeto5
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1686
- Precision: 0.5990
- Recall: 0.6541
- F1: 0.6253
- Accuracy: 0.9727
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 29 | 0.2706 | 0.0 | 0.0 | 0.0 | 0.9451 |
| No log | 2.0 | 58 | 0.3328 | 0.0 | 0.0 | 0.0 | 0.9451 |
| No log | 3.0 | 87 | 0.1872 | 0.0476 | 0.0108 | 0.0176 | 0.9320 |
| No log | 4.0 | 116 | 0.1428 | 0.3971 | 0.1459 | 0.2134 | 0.9551 |
| No log | 5.0 | 145 | 0.1169 | 0.4690 | 0.2865 | 0.3557 | 0.9614 |
| No log | 6.0 | 174 | 0.1259 | 0.5414 | 0.5297 | 0.5355 | 0.9629 |
| No log | 7.0 | 203 | 0.1166 | 0.4575 | 0.6108 | 0.5231 | 0.9604 |
| No log | 8.0 | 232 | 0.1240 | 0.6149 | 0.4919 | 0.5465 | 0.9693 |
| No log | 9.0 | 261 | 0.1145 | 0.5276 | 0.5676 | 0.5469 | 0.9681 |
| No log | 10.0 | 290 | 0.1377 | 0.5612 | 0.5946 | 0.5774 | 0.9688 |
| No log | 11.0 | 319 | 0.1321 | 0.5833 | 0.6432 | 0.6118 | 0.9700 |
| No log | 12.0 | 348 | 0.1549 | 0.6581 | 0.5514 | 0.6 | 0.9717 |
| No log | 13.0 | 377 | 0.1482 | 0.6080 | 0.6541 | 0.6302 | 0.9713 |
| No log | 14.0 | 406 | 0.1589 | 0.5348 | 0.6649 | 0.5928 | 0.9675 |
| No log | 15.0 | 435 | 0.1507 | 0.6178 | 0.6378 | 0.6277 | 0.9720 |
| No log | 16.0 | 464 | 0.1554 | 0.6082 | 0.6378 | 0.6227 | 0.9720 |
| No log | 17.0 | 493 | 0.1658 | 0.5918 | 0.6270 | 0.6089 | 0.9708 |
| 0.0785 | 18.0 | 522 | 0.1616 | 0.5792 | 0.6919 | 0.6305 | 0.9715 |
| 0.0785 | 19.0 | 551 | 0.1632 | 0.6059 | 0.6649 | 0.6340 | 0.9717 |
| 0.0785 | 20.0 | 580 | 0.1638 | 0.6103 | 0.6432 | 0.6263 | 0.9726 |
| 0.0785 | 21.0 | 609 | 0.1603 | 0.6010 | 0.6432 | 0.6214 | 0.9724 |
| 0.0785 | 22.0 | 638 | 0.1652 | 0.6078 | 0.6703 | 0.6375 | 0.9722 |
| 0.0785 | 23.0 | 667 | 0.1577 | 0.6440 | 0.6649 | 0.6543 | 0.9738 |
| 0.0785 | 24.0 | 696 | 0.1600 | 0.6492 | 0.6703 | 0.6596 | 0.9743 |
| 0.0785 | 25.0 | 725 | 0.1663 | 0.6256 | 0.6595 | 0.6421 | 0.9733 |
| 0.0785 | 26.0 | 754 | 0.1686 | 0.6106 | 0.6865 | 0.6463 | 0.9713 |
| 0.0785 | 27.0 | 783 | 0.1691 | 0.5951 | 0.6595 | 0.6256 | 0.9720 |
| 0.0785 | 28.0 | 812 | 0.1668 | 0.61 | 0.6595 | 0.6338 | 0.9731 |
| 0.0785 | 29.0 | 841 | 0.1679 | 0.5931 | 0.6541 | 0.6221 | 0.9724 |
| 0.0785 | 30.0 | 870 | 0.1678 | 0.6162 | 0.6595 | 0.6371 | 0.9734 |
| 0.0785 | 31.0 | 899 | 0.1683 | 0.6040 | 0.6595 | 0.6305 | 0.9729 |
| 0.0785 | 32.0 | 928 | 0.1686 | 0.5990 | 0.6541 | 0.6253 | 0.9727 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
stanford-rl/PPO-LunarLander-v2
|
stanford-rl
| 2023-05-05T16:40:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-05T16:39:55Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.73 +/- 15.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
apurik-parv/abstract-nature-pattern-v1-2
|
apurik-parv
| 2023-05-05T16:25:27Z | 34 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-26T17:16:27Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### abstract_nature_pattern_v1.2 Dreambooth model trained by apurik-parv with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
##### This model is trained with more data but unfortunately it overfits the images. I am not aware of the extent of overfitting; some images are good, some are really bad. Please leave suggestions in the comments.
Inference Prompt: **abnapa**
The model is trained on 1024x1024 images for a total of 3000 steps.
|
tang11/tang
|
tang11
| 2023-05-05T16:20:35Z | 0 | 0 | null |
[
"medical",
"translation",
"dataset:BAAI/COIG",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] |
translation
| 2023-05-05T16:19:01Z |
---
license: apache-2.0
datasets:
- BAAI/COIG
metrics:
- bleurt
pipeline_tag: translation
tags:
- medical
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hannahh7/ppo-LunarLander-v2
|
hannahh7
| 2023-05-05T16:15:56Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-05T16:15:34Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.14 +/- 19.62
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
mbshaikh/my_awesome_mind_model
|
mbshaikh
| 2023-05-05T16:10:59Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-05-05T16:05:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6548
- Accuracy: 0.0619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6339 | 0.0708 |
| No log | 1.87 | 7 | 2.6405 | 0.0531 |
| 2.6345 | 2.93 | 11 | 2.6415 | 0.0354 |
| 2.6345 | 4.0 | 15 | 2.6437 | 0.0531 |
| 2.6345 | 4.8 | 18 | 2.6504 | 0.0354 |
| 2.6263 | 5.87 | 22 | 2.6528 | 0.0531 |
| 2.6263 | 6.93 | 26 | 2.6548 | 0.0531 |
| 2.6155 | 8.0 | 30 | 2.6548 | 0.0619 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ayan4m1/trinart_diffusers_v2
|
ayan4m1
| 2023-05-05T16:10:04Z | 9 | 6 |
diffusers
|
[
"diffusers",
"license:apache-2.0",
"diffusers:LDMTextToImagePipeline",
"region:us"
] | null | 2022-09-14T15:40:03Z |
---
license: apache-2.0
---
This is a copy of [trinart_stable_diffusion_v2](https://huggingface.co/naclbit/trinart_stable_diffusion_v2) ported for use with the [diffusers](https://github.com/huggingface/diffusers) library.
All credit for this model goes to [naclbit](https://huggingface.co/naclbit).
|
Bainbridge/gpt2-kl_01_04_hscnspecial-hs_cn
|
Bainbridge
| 2023-05-05T16:09:30Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-05T14:39:55Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-kl_01_04_hscnspecial-hs_cn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-kl_01_04_hscnspecial-hs_cn
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 21
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 73.5419 | 0.02 | 10 | 69.5826 |
| 46.0912 | 0.04 | 20 | 32.9478 |
| 13.5577 | 0.06 | 30 | 10.6427 |
| 6.8429 | 0.08 | 40 | 4.2253 |
| 3.5596 | 0.1 | 50 | 2.0345 |
| 1.4308 | 0.12 | 60 | 1.0810 |
| 1.0569 | 0.14 | 70 | 0.8577 |
| 1.0017 | 0.16 | 80 | 0.8291 |
| 0.7285 | 0.18 | 90 | 0.6919 |
| 0.7232 | 0.2 | 100 | 0.6110 |
| 0.5979 | 0.22 | 110 | 0.6133 |
| 0.6368 | 0.24 | 120 | 0.6051 |
| 0.6606 | 0.26 | 130 | 0.6075 |
| 0.6428 | 0.28 | 140 | 0.5920 |
| 0.6106 | 0.3 | 150 | 0.5754 |
| 0.6218 | 0.32 | 160 | 0.5741 |
| 0.534 | 0.34 | 170 | 0.5633 |
| 0.6411 | 0.36 | 180 | 0.5685 |
| 0.6353 | 0.38 | 190 | 0.5805 |
| 0.5922 | 0.4 | 200 | 0.5556 |
| 0.6924 | 0.42 | 210 | 0.5595 |
| 0.6429 | 0.44 | 220 | 0.5601 |
| 0.5428 | 0.46 | 230 | 0.5583 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.12.0a0+bd13bc6
- Datasets 2.12.0
- Tokenizers 0.13.3
|
crumb/eva-fusion-v2.22
|
crumb
| 2023-05-05T16:01:46Z | 13 | 7 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-10-19T07:07:47Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: bigscience-bloom-rail-1.0
---
# EVA-Fusion 2.22 - You are (Not) HD
for an art project of mine
| model | init model | lr | bs | acc | total steps | training samples | training resolution | models trained |
|--- |--- | --- | --- | --- | --- | --- | --- | --- |
| eva-fusion-v2 |compvis 1-4| 2.057e-06 | 1 | 1 | 8,000 | 1,143 (captioned just "picture in the style of evangelion") | 512x384 (4:3) | unet |
| eva-fusion-v2.1 | eva-fusion-v2 | 1.313e-06 | 1 | 1 | 10,000 | + 560 (hand captioned) | 512x384 (4:3) | unet |
| eva-fusion-v2.22 "you are (not) HD" | previous models + compvis1-4 + wd1.3 at various ratios | x | x | x | x | x | x | x |
For generating, let me suggest [Whatchamacallit](https://colab.research.google.com/github/aicrumb/whatchamacallit/blob/main/Whatchamacallit.ipynb); suggested negative prompt: "blurry jpeg, multiple views, simple background, black and white reference sheet".
It wasn't trained on black and white character sheets, and it was barely trained on any grayscale images; I do not know why it leans heavily towards them sometimes.
link to ckpt file: https://huggingface.co/crumb/eva-model-ckpt/blob/main/test%20model%205.ckpt
I also recommend "in the style of Evangelion" in the prompt
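If you prefer plain `diffusers` over the notebook, here is a hedged sketch using the suggestions above (the repo id is assumed to be this model's repository, and the sampler settings are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "crumb/eva-fusion-v2.22", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a lone pilot watching the sunset over the city, in the style of Evangelion",
    negative_prompt="blurry jpeg, multiple views, simple background, black and white reference sheet",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("eva_fusion.png")
```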
|
doohickey/doohickey-mega-v1.01
|
doohickey
| 2023-05-05T16:01:31Z | 70 | 4 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-14T22:30:13Z |
---
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
---
Models better suited for High-Resolution Image Synthesis. The main model (doohickey/doohickey-mega) has been finetuned from [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) near a resolution of 768x768 (suggested method of generating from model is with [Doohickey](https://colab.research.google.com/github/aicrumb/doohickey/blob/main/Doohickey_Diffusion.ipynb)).
This is a version of Stable Diffusion finetuned to use [laion/CLIP-ViT-L-14-laion2B-s32B-b82K](https://huggingface.co/laion/CLIP-ViT-L-14-laion2B-s32B-b82K) instead of the stock OpenAI CLIP model. While doohickey-mega finetuned the CLIP model as well, this ckpt was finetuned without training the CLIP model, for 6,000 steps in total.
_Limitations and Biases from Stable Diffusion also apply to this model._
<div style="font-size:10px">
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
</div>
|
doohickey/doohickey-mega
|
doohickey
| 2023-05-05T16:01:30Z | 118 | 4 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-12T17:40:10Z |
---
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
---
Models better suited for High-Resolution Image Synthesis. The main model (doohickey/doohickey-mega) has been finetuned from [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) near a resolution of 768x768 (suggested method of generating from model is with [Doohickey](https://colab.research.google.com/github/aicrumb/doohickey/blob/main/Doohickey_Diffusion.ipynb)).
Current models:
| name | description | datasets used |
| --- | --- | --- |
| doohickey/doohickey-mega/v1-3000steps.ckpt | first try; really good HD output, but bad results with aspect ratios other than 1:1; trained at 704x704 | A-1k|
| doohickey/doohickey-mega/v2-3000steps.ckpt | same as last one but worse | A-1k + ~1k samples from LAION-2b-En-Aesthetic >=768x768 |
| doohickey/doohickey-mega/v3-3000.ckpt | with new CLIP model ([laion/CLIP-ViT-L-14-laion2B-s32B-b82K](https://hf.co/laion/CLIP-ViT-L-14-laion2B-s32B-b82K)) (CLIP model also finetuned the 3k steps), models past this point were trained with various aspect ratios from 640x640 min to 768x768 max resolution. (examples 768x640 or 704x768) | A-1k + E-10k |
| doohickey/doohickey-mega/v3-6000.ckpt | 3k steps on top of v3-3000.ckpt, better at hands! (just UNet finetune, added a RandomHorizontalFlip operation at 50%) | A-1k |
| doohickey/doohickey-mega/v3-7000.ckpt | continuation of last model, I thought Colab would crash after 3k steps but it kept going for a little while saving ckpts every 1k steps. | A-1k |
| doohickey/doohickey-mega/v3-8000.ckpt | see last description, v3-6000 + 2k steps | A-1k |
The currently loaded model for diffusers is doohickey/doohickey-mega/v3-8000.ckpt
Datasets:
| name | description |
| --- | --- |
| A-1K | 1k scraped images, captioned with BLIP (more refined aesthetic) |
| E-10k | 10k scraped images captioned with BLIP (less refined aesthetic) |
_Limitations and Biases from Stable Diffusion also apply to this model._
<div style="font-size:10px">
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
</div>
|
doohickey/doodad-v1-2
|
doohickey
| 2023-05-05T16:01:22Z | 33 | 2 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-09-28T17:38:09Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: bigscience-bloom-rail-1.0
inference: false
---
# Doodad
*<p style="color:green"> <b> This is the 1st part of a 2 (3?) part project meant to be used with <a href="https://huggingface.co/doohickey">Doohickey</a> </b> </p>*
The style was trained with [Dreambooth-Stable](https://www.reddit.com/r/StableDiffusion/comments/xphaiw/dreambooth_stable_diffusion_training_in_just_125/) and is used with "<doodad>". It mixes especially well with the <midj-strong> style included in Doohickey. It's a finetuned version of the Trinart-Waifu-diffusion-50-50 included in this organization's models and was trained on 48 images from the author's ([crumb](https://huggingface.co/crumb)'s) Pinterest feed.
| Parameter | Value |
|----------------------|------------|
| resolution | 512 |
| train_batch_size | 1 |
| gradient_accumulation_steps | 2 |
| learning_rate | 5e-6 |
| num_class_images | 120 |
| max_train_steps | 1200 |
Example outputs:
|
doohickey/doodad-v1-1
|
doohickey
| 2023-05-05T16:01:20Z | 32 | 3 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-09-28T09:40:37Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: bigscience-bloom-rail-1.0
inference: false
---
# Doodad
*<p style="color:green"> <b> This is the 1st part of a 2 (3?) part project meant to be used with <a href="https://huggingface.co/doohickey">Doohickey</a> </b> </p>*
The style was trained with [Dreambooth-Stable](https://www.reddit.com/r/StableDiffusion/comments/xphaiw/dreambooth_stable_diffusion_training_in_just_125/) and is used with "imv" (an uncommonly used token recommended by the author I think) and it mixes especially well with the <midj-strong> style included in Doohickey. It's a finetuned version of the Trinart-Waifu-diffusion-50-50 included in this organization's models.
| Parameter | Value |
|----------------------|------------|
| resolution | 512 |
| train_batch_size | 1 |
| gradient_accumulation_steps | 2 |
| learning_rate | 5e-6 |
| num_class_images | 120 |
| max_train_steps | 1200 |
Example outputs:
"portal into another dimension"

"portrait of a dying god"

"photograph"

"The alchemist's laboratory by Greg Rutkowski and Claude Monet, oil on canvas"

|
krasserm/perceiver-ar-sam-giant-midi
|
krasserm
| 2023-05-05T15:56:09Z | 54 | 12 |
transformers
|
[
"transformers",
"pytorch",
"perceiver-ar-symbolic-audio-model",
"audio-to-audio",
"arxiv:2202.07765",
"arxiv:2104.09864",
"arxiv:1809.04281",
"license:apache-2.0",
"region:us"
] |
audio-to-audio
| 2023-05-03T08:26:15Z |
---
license: apache-2.0
inference: false
pipeline_tag: audio-to-audio
---
# Perceiver AR symbolic audio model
This model is a [Perceiver AR](https://arxiv.org/abs/2202.07765) symbolic audio model (134M parameters) pretrained on
the [GiantMIDI-Piano](https://github.com/bytedance/GiantMIDI-Piano) dataset for 27 epochs (157M tokens). It uses [rotary embedding](https://arxiv.org/abs/2104.09864)
for relative position encoding. It is a [training example](https://github.com/krasserm/perceiver-io/blob/main/docs/training-examples.md#giantmidi-piano)
of the [perceiver-io](https://github.com/krasserm/perceiver-io) library.
## Model description
Perceiver AR is a simple extension of a plain decoder-only transformer such as GPT-2, for example. A core building block
of both is the *decoder layer* consisting of a self-attention layer followed by a position-wise MLP. Self-attention uses
a causal attention mask.
Perceiver AR additionally cross-attends to a longer prefix of the input sequence in its first attention layer. This layer
is a hybrid self- and cross-attention layer. Self-attention is over the last n positions of the input sequence, with a
causal attention mask, cross-attention is from the last n positions to the first m positions. The length of the input
sequence is m + n. This allows a Perceiver AR to process a much larger context than decoder-only transformers which are
based on self-attention only.
<p align="center">
<img src="https://krasserm.github.io/img/2023-01-23/perceiver-ar.png" alt="Perceiver AR" width="600"/><br/>
<i>Fig. 1</i>. Attention in Perceiver AR with m=8 prefix tokens and n=3 latent tokens.
<p/>
The output of the hybrid attention layer are n latent arrays corresponding to the last n tokens of the input sequence.
These are further processed by a stack of L-1 decoder layers where the total number of attention layers is L. A final
layer (not shown in Fig. 1) predicts the target token for each latent position. The weights of the final layer are
shared with the input embedding layer. Except for the initial cross-attention to the prefix sequence, a Perceiver AR
is architecturally identical to a decoder-only transformer.
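To make the attention pattern of the hybrid layer concrete, here is a small illustrative PyTorch sketch of the mask it implies; this is not the library's implementation, just the masking logic described above:
```python
import torch

def hybrid_attention_mask(m: int, n: int) -> torch.Tensor:
    """Boolean mask of shape (n, m + n); True marks positions a latent query may attend to.

    Each of the n latent queries (the last n input positions) attends to all m prefix
    positions, and causally to the latent positions up to and including its own.
    """
    prefix = torch.ones(n, m).bool()          # full cross-attention to the prefix
    latent = torch.ones(n, n).tril().bool()   # causal self-attention over the latents
    return torch.cat([prefix, latent], dim=1)

# Example matching Fig. 1: m=8 prefix tokens, n=3 latent tokens
print(hybrid_attention_mask(8, 3).int())
```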
## Model training
The model was [trained](https://github.com/krasserm/perceiver-io/blob/main/docs/training-examples.md#giantmidi-piano) with
the task of symbolic audio modeling on the [GiantMIDI-Piano](https://github.com/bytedance/GiantMIDI-Piano) dataset
for 27 epochs (157M tokens). This dataset consists of [MIDI](https://en.wikipedia.org/wiki/MIDI) files, tokenized using the
approach from the [Perceiver AR paper](https://arxiv.org/pdf/2202.07765.pdf), which is described
in detail in Section A.2 of [Huang et al (2019)](https://arxiv.org/abs/1809.04281).
All hyperparameters are summarized in the [training script](https://github.com/krasserm/perceiver-io/blob/main/examples/training/sam/giantmidi/train.sh).
The context length was set to 6144 tokens with 2048 latent positions, resulting in a maximal prefix length of 4096. The
actual prefix length per example was randomly chosen between 0 and 4096. Training was done with [PyTorch Lightning](https://www.pytorchlightning.ai/index.html)
and the resulting checkpoint was converted to this 🤗 model with a library-specific [conversion utility](#checkpoint-conversion).
## Intended use and limitations
This model can be used for audio generation with a user-defined initial number of latent tokens. It mainly exists for
demonstration purposes on how to train Perceiver AR models with the [perceiver-io library](https://github.com/krasserm/perceiver-io).
To improve on the quality of the generated audio samples a much larger dataset than
[GiantMIDI-Piano](https://github.com/bytedance/GiantMIDI-Piano) is required for training.
## Usage examples
To use this model you first need to [install](https://github.com/krasserm/perceiver-io/blob/main/README.md#installation)
the `perceiver-io` library with extension `audio`.
```shell
pip install perceiver-io[audio]
```
Then the model can be used with PyTorch. Either use the model directly to generate MIDI files:
```python
import torch
from perceiver.model.audio.symbolic import PerceiverSymbolicAudioModel
from perceiver.data.audio.midi_processor import decode_midi, encode_midi
from pretty_midi import PrettyMIDI
repo_id = "krasserm/perceiver-ar-sam-giant-midi"
model = PerceiverSymbolicAudioModel.from_pretrained(repo_id)
prompt_midi = PrettyMIDI("prompt.mid")
prompt = torch.tensor(encode_midi(prompt_midi)).unsqueeze(0)
output = model.generate(prompt, max_new_tokens=64, num_latents=1, do_sample=True, top_p=0.95, temperature=1.0)
output_midi = decode_midi(output[0].cpu().numpy())
type(output_midi)
```
```
pretty_midi.pretty_midi.PrettyMIDI
```
or use a `symbolic-audio-generation` pipeline to generate MIDI output:
```python
from transformers import pipeline
from pretty_midi import PrettyMIDI
from perceiver.model.audio import symbolic # auto-class registration
repo_id = "krasserm/perceiver-ar-sam-giant-midi"
prompt = PrettyMIDI("prompt.mid")
audio_generator = pipeline("symbolic-audio-generation", model=repo_id)
output = audio_generator(prompt, max_new_tokens=64, num_latents=1, do_sample=True, top_p=0.95, temperature=1.0)
type(output["generated_audio_midi"])
```
```
pretty_midi.pretty_midi.PrettyMIDI
```
or generate WAV output by rendering the MIDI symbols using [fluidsynth](https://www.fluidsynth.org/) (Note: fluidsynth must be installed
in order for the following example to work):
```python
from transformers import pipeline
from pretty_midi import PrettyMIDI
from perceiver.model.audio import symbolic # auto-class registration
repo_id = "krasserm/perceiver-ar-sam-giant-midi"
prompt = PrettyMIDI("prompt.mid")
audio_generator = pipeline("symbolic-audio-generation", model=repo_id)
output = audio_generator(prompt, max_new_tokens=64, num_latents=1, do_sample=True, top_p=0.95, temperature=1.0, render=True)
with open("generated_audio.wav", "wb") as f:
f.write(output["generated_audio_wav"])
```
## Audio samples
The following (hand-picked) audio samples were generated using various prompts from the validation subset of
the [GiantMIDI-Piano](https://github.com/bytedance/GiantMIDI-Piano) dataset. The input prompts are
not included in the audio output.
<table>
<tr>
<th>Audio sample</th>
<th>Top-K</th>
<th>Top-p</th>
<th>Temperature</th>
<th>Prefix length</th>
<th>Latents</th>
</tr>
<tr>
<td>
<audio controls>
<source src="https://martin-krasser.com/perceiver/data/midi/01_nehrlich_continuation.wav" type="audio/wav">
Your browser does not support the audio element.
</audio>
</td>
<td style="vertical-align: top;">-</td>
<td style="vertical-align: top;">0.95</td>
<td style="vertical-align: top;">0.95</td>
<td style="vertical-align: top;">4096</td>
<td style="vertical-align: top;">1</td>
</tr>
<tr>
<td>
<audio controls>
<source src="https://martin-krasser.com/perceiver/data/midi/02_eduardo_continuation.wav" type="audio/wav">
Your browser does not support the audio element.
</audio>
</td>
<td style="vertical-align: top;">-</td>
<td style="vertical-align: top;">0.95</td>
<td style="vertical-align: top;">1.0</td>
<td style="vertical-align: top;">4096</td>
<td style="vertical-align: top;">64</td>
</tr>
<tr>
<td>
<audio controls>
<source src="https://martin-krasser.com/perceiver/data/midi/03_membree_continuation.wav" type="audio/wav">
Your browser does not support the audio element.
</audio>
</td>
<td style="vertical-align: top;">-</td>
<td style="vertical-align: top;">0.95</td>
<td style="vertical-align: top;">1.0</td>
<td style="vertical-align: top;">1024</td>
<td style="vertical-align: top;">1</td>
</tr>
<tr>
<td>
<audio controls>
<source src="https://martin-krasser.com/perceiver/data/midi/04_membree_continuation.wav" type="audio/wav">
Your browser does not support the audio element.
</audio>
</td>
<td style="vertical-align: top;">15</td>
<td style="vertical-align: top;">-</td>
<td style="vertical-align: top;">1.0</td>
<td style="vertical-align: top;">4096</td>
<td style="vertical-align: top;">16</td>
</tr>
<tr>
<td>
<audio controls>
<source src="https://martin-krasser.com/perceiver/data/midi/05_kinscella_continuation.wav" type="audio/wav">
Your browser does not support the audio element.
</audio>
</td>
<td style="vertical-align: top;">-</td>
<td style="vertical-align: top;">0.95</td>
<td style="vertical-align: top;">1.0</td>
<td style="vertical-align: top;">4096</td>
<td style="vertical-align: top;">1</td>
</tr>
</table>
## Checkpoint conversion
The `krasserm/perceiver-ar-sam-giant-midi` model has been created from a training checkpoint with:
```python
from perceiver.model.audio.symbolic import convert_checkpoint
convert_checkpoint(
save_dir="krasserm/perceiver-ar-sam-giant-midi",
ckpt_url="https://martin-krasser.com/perceiver/logs-0.8.0/sam/version_1/checkpoints/epoch=027-val_loss=1.944.ckpt",
push_to_hub=True,
)
```
## Citation
```bibtex
@inproceedings{hawthorne2022general,
title={General-purpose, long-context autoregressive modeling with perceiver ar},
author={Hawthorne, Curtis and Jaegle, Andrew and Cangea, C{\u{a}}t{\u{a}}lina and Borgeaud, Sebastian and Nash, Charlie and Malinowski, Mateusz and Dieleman, Sander and Vinyals, Oriol and Botvinick, Matthew and Simon, Ian and others},
booktitle={International Conference on Machine Learning},
pages={8535--8558},
year={2022},
organization={PMLR}
}
```
|
sd-dreambooth-library/magikarp-pokemon
|
sd-dreambooth-library
| 2023-05-05T15:49:09Z | 34 | 2 |
diffusers
|
[
"diffusers",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-23T21:12:46Z |
---
license: mit
---
### Magikarp pokemon on Stable Diffusion via Dreambooth
#### model by BenjiKan
This is the Stable Diffusion model fine-tuned on the Magikarp pokemon concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks pokemon**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
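If you prefer plain Python over the notebooks, a minimal diffusers sketch could look like this, assuming the repo loads as a standard `StableDiffusionPipeline` (the prompt follows the `instance_prompt` above):
```python
import torch
from diffusers import StableDiffusionPipeline

# assumption: the repo exposes a standard StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/magikarp-pokemon", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks pokemon").images[0]
image.save("magikarp.png")
```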
Here are the images used for training this concept:





|
jasonsurya0/BART_SIX
|
jasonsurya0
| 2023-05-05T15:49:02Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-05T15:28:44Z |
BART model #6, pretrained on XSum and fine-tuned on SAMSum.
|
mbshaikh/videomae-base-finetuned-ucf101-subset
|
mbshaikh
| 2023-05-05T15:33:34Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-05-05T14:48:13Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0024
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0171 | 0.25 | 75 | 0.0063 | 1.0 |
| 0.0391 | 1.25 | 150 | 0.0048 | 1.0 |
| 0.0019 | 2.25 | 225 | 0.0032 | 1.0 |
| 0.1704 | 3.25 | 300 | 0.0024 | 1.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
huggingtweets/mildlysomewhat
|
huggingtweets
| 2023-05-05T15:32:30Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-05T15:32:23Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1559746526234263558/n8RqkkaD_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">☆♪sheeks☆彡</div>
<div style="text-align: center; font-size: 14px;">@mildlysomewhat</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ☆♪sheeks☆彡.
| Data | ☆♪sheeks☆彡 |
| --- | --- |
| Tweets downloaded | 705 |
| Retweets | 207 |
| Short tweets | 32 |
| Tweets kept | 466 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/zuthsw7a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mildlysomewhat's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2kq2dk2t) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2kq2dk2t/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mildlysomewhat')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
pcuenq/k-upscaler
|
pcuenq
| 2023-05-05T15:27:49Z | 6 | 1 |
diffusers
|
[
"diffusers",
"diffusers:StableDiffusionUpscalerPipeline",
"region:us"
] | null | 2022-11-14T17:47:59Z |
# k-diffusion latent upscaler
By Katherine Crowson.
This is a temporary model repository that I created for convenience.
I just copied the weights and configuration from this base URL: https://models.rivershavewings.workers.dev/
|
AkshatSurolia/ICD-10-Code-Prediction
|
AkshatSurolia
| 2023-05-05T15:24:14Z | 977 | 45 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- text-classification
---
# Clinical BERT for ICD-10 Prediction
The Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries.
---
## How to use the model
Load the model via the transformers library:
```python
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
model = BertForSequenceClassification.from_pretrained("AkshatSurolia/ICD-10-Code-Prediction")
config = model.config
```
Run the model with clinical diagnosis text:
```python
text = "subarachnoid hemorrhage scalp laceration service: surgery major surgical or invasive"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
Return the top-5 predicted ICD-10 codes:
```python
results = output.logits.detach().cpu().numpy()[0].argsort()[::-1][:5]
print([config.id2label[ids] for ids in results])
```
|
AkshatSurolia/ViT-FaceMask-Finetuned
|
AkshatSurolia
| 2023-05-05T15:22:03Z | 293 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"dataset:Face-Mask18K",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- image-classification
datasets:
- Face-Mask18K
---
# Vision Transformer (ViT) for Face Mask Detection
Vision Transformer (ViT) model pre-trained and fine-tuned on a self-curated custom Face-Mask18K dataset (18k images, 2 classes) at resolution 224x224. The underlying ViT architecture was introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Dosovitskiy et al.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
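For quick inference, a minimal sketch with the Transformers `pipeline` API (the image path is a placeholder; label names come from the model config):
```python
from transformers import pipeline

# assumption: the repo's image processor and config are resolved automatically
classifier = pipeline("image-classification", model="AkshatSurolia/ViT-FaceMask-Finetuned")
print(classifier("example_face.jpg"))  # list of {'label': ..., 'score': ...} dicts
```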
## Training Metrics
- epoch = 0.89
- total_flos = 923776502GF
- train_loss = 0.057
- train_runtime = 0:40:10.40
- train_samples_per_second = 23.943
- train_steps_per_second = 1.497
---
## Evaluation Metrics
- epoch = 0.89
- eval_accuracy = 0.9894
- eval_loss = 0.0395
- eval_runtime = 0:00:36.81
- eval_samples_per_second = 97.685
- eval_steps_per_second = 12.224
|
gsaivinay/OpenAssistant-SFT-7-Llama-30B-HF
|
gsaivinay
| 2023-05-05T15:17:18Z | 5 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2304.07327",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-05T15:17:17Z |
---
license: other
duplicated_from: TheBloke/OpenAssistant-SFT-7-Llama-30B-HF
---
# OpenAssistant LLaMA 30B SFT 7 HF
This is the HF-format repo of [OpenAssistant's LLaMA 30B SFT 7](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor).
It is the result of merging the XORs from the above repo with the original Llama 30B weights.
This is epoch 7 of OpenAssistant's training of a Llama 30B model.
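Since the weights are in plain Hugging Face format, they load with the standard transformers classes; a minimal sketch (the prompt template and generation settings below are assumptions, not documented in this repo):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "gsaivinay/OpenAssistant-SFT-7-Llama-30B-HF"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16, device_map="auto")

# OpenAssistant-style turn markers; treat this template as an assumption
prompt = "<|prompter|>What is a llama?</s><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```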
# Original model card
```
llama-30b-sft-7:
dtype: fp16
log_dir: "llama_log_30b"
learning_rate: 1e-5
model_name: /home/ubuntu/Open-Assistant/model/model_training/.saved/llama-30b-super-pretrain/checkpoint-3500
#model_name: OpenAssistant/llama-30b-super-pretrain
output_dir: llama_model_30b
deepspeed_config: configs/zero3_config_sft.json
weight_decay: 0.0
residual_dropout: 0.0
max_length: 2048
use_flash_attention: true
warmup_steps: 20
gradient_checkpointing: true
gradient_accumulation_steps: 12
per_device_train_batch_size: 2
per_device_eval_batch_size: 3
eval_steps: 101
save_steps: 485
num_train_epochs: 4
save_total_limit: 3
use_custom_sampler: true
sort_by_length: false
#save_strategy: steps
save_strategy: epoch
datasets:
- oasst_export:
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
input_file_path: 2023-04-12_oasst_release_ready_synth.jsonl.gz
val_split: 0.05
- vicuna:
val_split: 0.05
max_val_set: 800
fraction: 1.0
- dolly15k:
val_split: 0.05
max_val_set: 300
- grade_school_math_instructions:
val_split: 0.05
- code_alpaca:
val_split: 0.05
max_val_set: 250
```
- **OASST dataset paper:** https://arxiv.org/abs/2304.07327
|
jasonsurya0/BART_FIVE
|
jasonsurya0
| 2023-05-05T15:05:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-05T14:36:24Z |
BART model #5, pretrained on XSum and fine-tuned on SAMSum.
|
ameyachitnis/ppo-Huggy
|
ameyachitnis
| 2023-05-05T15:00:33Z | 15 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-05T09:54:27Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: ameyachitnis/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
hsyntemiz/turcoins-classifier
|
hsyntemiz
| 2023-05-05T14:57:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-05-05T12:06:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: turcoins-classifier
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: hsyntemiz--turcoins
split: test
args: hsyntemiz--turcoins
metrics:
- name: Accuracy
type: accuracy
value: 0.9548611111111112
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# turcoins-classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1763
- Accuracy: 0.9549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9277 | 1.0 | 146 | 1.9660 | 0.7726 |
| 1.6627 | 2.0 | 292 | 1.7154 | 0.7917 |
| 1.4071 | 2.99 | 438 | 1.4120 | 0.8079 |
| 1.09 | 4.0 | 585 | 1.1225 | 0.8362 |
| 0.8086 | 5.0 | 731 | 0.8917 | 0.8675 |
| 0.7636 | 6.0 | 877 | 0.7596 | 0.8709 |
| 0.611 | 6.99 | 1023 | 0.6493 | 0.8883 |
| 0.4605 | 8.0 | 1170 | 0.5899 | 0.8872 |
| 0.37 | 9.0 | 1316 | 0.4978 | 0.9045 |
| 0.3882 | 10.0 | 1462 | 0.4424 | 0.9132 |
| 0.3139 | 10.99 | 1608 | 0.3969 | 0.9115 |
| 0.3178 | 12.0 | 1755 | 0.3525 | 0.9294 |
| 0.2796 | 13.0 | 1901 | 0.3552 | 0.9161 |
| 0.2571 | 14.0 | 2047 | 0.3189 | 0.9265 |
| 0.2481 | 14.99 | 2193 | 0.2945 | 0.9358 |
| 0.1875 | 16.0 | 2340 | 0.2647 | 0.9392 |
| 0.1861 | 17.0 | 2486 | 0.2404 | 0.9410 |
| 0.1839 | 18.0 | 2632 | 0.2556 | 0.9421 |
| 0.173 | 18.99 | 2778 | 0.2387 | 0.9462 |
| 0.1837 | 20.0 | 2925 | 0.2049 | 0.9485 |
| 0.1724 | 21.0 | 3071 | 0.2065 | 0.9525 |
| 0.1399 | 22.0 | 3217 | 0.2089 | 0.9404 |
| 0.1696 | 22.99 | 3363 | 0.1957 | 0.9497 |
| 0.1405 | 24.0 | 3510 | 0.1848 | 0.9554 |
| 0.1009 | 25.0 | 3656 | 0.1912 | 0.9520 |
| 0.1126 | 26.0 | 3802 | 0.1717 | 0.9560 |
| 0.1336 | 26.99 | 3948 | 0.1699 | 0.9589 |
| 0.1046 | 28.0 | 4095 | 0.1600 | 0.9601 |
| 0.126 | 29.0 | 4241 | 0.1839 | 0.9520 |
| 0.0882 | 29.95 | 4380 | 0.1763 | 0.9549 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
SharKRippeR/xlm-roberta-base-finetuned-panx-it
|
SharKRippeR
| 2023-05-05T14:56:02Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-05T14:52:11Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: validation
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8149959250203749
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2579
- F1: 0.8150
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9299 | 1.0 | 53 | 0.3701 | 0.7122 |
| 0.313 | 2.0 | 106 | 0.2874 | 0.7881 |
| 0.2174 | 3.0 | 159 | 0.2579 | 0.8150 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
SanketJadhav/PlantDiseaseClassifier
|
SanketJadhav
| 2023-05-05T14:47:43Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-03-22T14:59:49Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
SharKRippeR/xlm-roberta-base-finetuned-panx-de-fr
|
SharKRippeR
| 2023-05-05T14:40:45Z | 136 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-05T14:27:26Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1624
- F1: 0.8577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2988 | 1.0 | 537 | 0.1743 | 0.8300 |
| 0.1472 | 2.0 | 1074 | 0.1616 | 0.8470 |
| 0.0987 | 3.0 | 1611 | 0.1624 | 0.8577 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
marianodo/MegaBatchMarginLoss-light-model-10
|
marianodo
| 2023-05-05T14:36:13Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-05-05T14:35:49Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# marianodo/MegaBatchMarginLoss-light-model-10
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('marianodo/MegaBatchMarginLoss-light-model-10')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 16 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MegaBatchMarginLoss.MegaBatchMarginLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 16,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
thejuuuuu/torgo_b4_2
|
thejuuuuu
| 2023-05-05T14:33:34Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-05T14:32:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: torgo_b4_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# torgo_b4_2
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 80
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
marianodo/MegaBatchMarginLoss-10
|
marianodo
| 2023-05-05T14:28:48Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-05-05T14:27:55Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# marianodo/MegaBatchMarginLoss-10
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('marianodo/MegaBatchMarginLoss-10')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 16 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MegaBatchMarginLoss.MegaBatchMarginLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 16,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
GeneralAwareness/MirrorMask
|
GeneralAwareness
| 2023-05-05T14:25:59Z | 16 | 5 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-23T08:48:54Z |
---
license: cc-by-nc-sa-4.0
language:
- en
thumbnail: "https://huggingface.co/GeneralAwareness/Mirrormask/resolve/main/mm1.png"
tags:
- stable-diffusion
- text-to-image
- image-to-image
- diffusers
---
### Mirror Mask
Welcome to MirrorMask - this is a Stable Diffusion model fine-tuned on the 2005 dark fantasy film MirrorMask.
Use the token mirrormask in your prompts for the style to show. Examples down below:

#### Prompt
An example of calling mirrormask before the prompt (mirrormask a woman)

An example of calling mirrormask with a comma before the prompt (mirrormask, a woman)

An example of calling mirrormask with the additional word of "by" before the prompt (mirrormask by a woman)

An example of calling mirrormask with the additional word of "in" before the prompt (mirrormask in a woman)

An example of calling the prompt then mirrormask (a woman mirrormask)

An example of calling the prompt then adding a comma before mirrormask (a woman, mirrormask)

An example of calling the prompt then adding the word "in" before mirrormask (a woman in mirrormask)

An example of calling the prompt then adding the word "by" before mirrormask (a woman by mirrormask)

For more variety try adding the word "style" after the token word mirrormask (no examples given to save space).
### 🧨 Diffusers
This model can be used just as you would any other Stable Diffusion style model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "GeneralAwareness/MirrorMask"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a woman by mirrormask"
image = pipe(prompt).images[0]
image.save("./awoman.png")
```
## License
This model is under a creative commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
To see what rights you have under this licence follow this link - https://creativecommons.org/licenses/by-nc-sa/4.0/
|
cornut/a2c-PandaReachDense-v2
|
cornut
| 2023-05-05T14:24:58Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-05T14:21:56Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.66 +/- 0.47
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file listing):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# download the trained agent from the Hub (filename is an assumption)
checkpoint = load_from_hub("cornut/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
rozek/OpenLLaMA_7B_300BT_q4
|
rozek
| 2023-05-05T14:23:02Z | 0 | 1 | null |
[
"open-llama",
"ggml",
"7b",
"quantized",
"text-generation",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-05-05T13:17:46Z |
---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
tags:
- open-llama
- ggml
- 7b
- quantized
- text-generation
---
This repository contains the [300BT preview](https://huggingface.co/openlm-research/open_llama_7b_preview_300bt) of the
[Open LLaMA](https://github.com/openlm-research/open_llama) model based on the
[RedPajama 1T dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T),
converted to GGML and quantized to 4 bit, ready to be used with [llama.cpp](https://github.com/rozek/llama.cpp)
## Usage ##
In order to use this model with llama.cpp
* install llama.cpp as [described in the docs](https://github.com/ggerganov/llama.cpp#usage)
* download this model
* move it into the `models` subfolder of llama.cpp
* run inferences with the additional parameter `-m ./models/7B/ggml-openllama-7b-300bt-q4_0.bin`
For example,
```
./main -m ./models/7B/ggml-openllama-7b-300bt-q4_0.bin -n 128 -p "who was Joseph Weizenbaum?"
```
## License ##
Since the underlying model is itself Apache 2.0 licensed, this repo is as well.
|
Bainbridge/gpt2-kl_01_03_hscnspecial-hs_cn
|
Bainbridge
| 2023-05-05T14:11:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-05T13:20:34Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-kl_01_03_hscnspecial-hs_cn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-kl_01_03_hscnspecial-hs_cn
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5608
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 21
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 73.5326 | 0.02 | 10 | 69.5739 |
| 46.0984 | 0.04 | 20 | 32.9644 |
| 13.5448 | 0.06 | 30 | 10.6411 |
| 6.7693 | 0.08 | 40 | 4.2565 |
| 2.6755 | 0.1 | 50 | 2.0724 |
| 1.4407 | 0.12 | 60 | 1.1299 |
| 1.082 | 0.14 | 70 | 0.9249 |
| 0.8847 | 0.16 | 80 | 0.7332 |
| 0.7248 | 0.18 | 90 | 0.6492 |
| 0.7073 | 0.2 | 100 | 0.6516 |
| 0.6199 | 0.22 | 110 | 0.6608 |
| 0.6244 | 0.24 | 120 | 0.6069 |
| 0.6289 | 0.26 | 130 | 0.6602 |
| 0.6198 | 0.28 | 140 | 0.5919 |
| 0.5988 | 0.3 | 150 | 0.6068 |
| 0.6156 | 0.32 | 160 | 0.5664 |
| 0.5252 | 0.34 | 170 | 0.5626 |
| 0.6351 | 0.36 | 180 | 0.5663 |
| 0.6246 | 0.38 | 190 | 0.5599 |
| 0.5791 | 0.4 | 200 | 0.5609 |
| 0.6755 | 0.42 | 210 | 0.5627 |
| 0.6367 | 0.44 | 220 | 0.5608 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 1.12.0a0+bd13bc6
- Datasets 2.12.0
- Tokenizers 0.13.3
|
tomofi/trocr-captcha
|
tomofi
| 2023-05-05T13:59:12Z | 613 | 5 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-03-11T12:04:59Z |
---
license: mit
---
CER: 0.0019
Training code: https://colab.research.google.com/drive/14MfFkhgPS63RJcP7rpBOK6OII_y34jx_?usp=sharing
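A minimal inference sketch with the standard TrOCR classes, assuming the repo ships its own processor (the image path is a placeholder):
```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("tomofi/trocr-captcha")
model = VisionEncoderDecoderModel.from_pretrained("tomofi/trocr-captcha")

image = Image.open("captcha.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```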
|