modelId: string (length 5-139) | author: string (length 2-42) | last_modified: timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-09-07 00:41:44) | downloads: int64 (0-223M) | likes: int64 (0-11.7k) | library_name: string (544 classes) | tags: list (length 1-4.05k) | pipeline_tag: string (55 classes) | createdAt: timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-09-07 00:41:34) | card: string (length 11-1.01M)
---|---|---|---|---|---|---|---|---|---
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_replace_tfidf-2 | ThuyNT03 | 2023-09-05T02:04:06Z | 114 | 0 | transformers | [transformers, pytorch, xlm-roberta, text-classification, generated_from_trainer, base_model:FacebookAI/xlm-roberta-base, base_model:finetune:FacebookAI/xlm-roberta-base, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-09-04T23:19:26Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_replace_tfidf-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_replace_tfidf-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8468
- Accuracy: 0.69
- F1: 0.6959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0982 | 1.0 | 87 | 0.9995 | 0.47 | 0.4137 |
| 0.8884 | 2.0 | 174 | 0.7521 | 0.65 | 0.6032 |
| 0.7533 | 3.0 | 261 | 0.7130 | 0.64 | 0.6364 |
| 0.6259 | 4.0 | 348 | 0.7598 | 0.68 | 0.6865 |
| 0.5278 | 5.0 | 435 | 0.7066 | 0.7 | 0.7053 |
| 0.4336 | 6.0 | 522 | 0.7901 | 0.7 | 0.7060 |
| 0.3516 | 7.0 | 609 | 0.8106 | 0.69 | 0.6976 |
| 0.2859 | 8.0 | 696 | 0.8468 | 0.69 | 0.6959 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
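The card stops at the framework versions; a minimal inference sketch (not part of the original card) using the `transformers` pipeline might look like the following. The id-to-label mapping is not documented above, so the predicted labels are placeholders.
```python
# Hedged sketch: load the fine-tuned checkpoint from the Hub and classify one sentence.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ThuyNT03/xlm-roberta-base-Final_VietNam-aug_replace_tfidf-2",
)
print(classifier("Ví dụ một câu tiếng Việt cần phân loại."))
```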
|
mabrouk/speecht5_finetuned_voxpopuli_nl | mabrouk | 2023-09-05T01:57:13Z | 76 | 0 | transformers | [transformers, pytorch, speecht5, text-to-audio, generated_from_trainer, dataset:voxpopuli, base_model:microsoft/speecht5_tts, base_model:finetune:microsoft/speecht5_tts, license:mit, endpoints_compatible, region:us] | text-to-audio | 2023-09-04T23:31:36Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4622
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.521 | 4.3 | 1000 | 0.4820 |
| 0.4972 | 8.61 | 2000 | 0.4676 |
| 0.4963 | 12.91 | 3000 | 0.4645 |
| 0.4919 | 17.21 | 4000 | 0.4622 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
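The card gives no inference example; a minimal sketch (an assumption, not from the card) following the standard SpeechT5 TTS recipe could look like this. The x-vector speaker-embedding dataset below is a common public choice, not something the card specifies.
```python
# Hedged sketch: synthesize Dutch speech with the fine-tuned checkpoint.
import torch
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("mabrouk/speecht5_finetuned_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("mabrouk/speecht5_finetuned_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dim x-vector can serve as the speaker embedding; this public set is the one used in the docs.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hallo, dit is een korte testzin.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
```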
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_replace_w2v-2 | ThuyNT03 | 2023-09-05T01:54:32Z | 103 | 0 | transformers | [transformers, pytorch, xlm-roberta, text-classification, generated_from_trainer, base_model:FacebookAI/xlm-roberta-base, base_model:finetune:FacebookAI/xlm-roberta-base, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-09-04T23:09:25Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_replace_w2v-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_replace_w2v-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9125
- Accuracy: 0.71
- F1: 0.7091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.101 | 1.0 | 84 | 1.0494 | 0.46 | 0.3728 |
| 0.9323 | 2.0 | 168 | 0.7962 | 0.59 | 0.5689 |
| 0.7109 | 3.0 | 252 | 0.7447 | 0.71 | 0.7004 |
| 0.587 | 4.0 | 336 | 0.7251 | 0.71 | 0.7104 |
| 0.4611 | 5.0 | 420 | 0.8001 | 0.68 | 0.6770 |
| 0.3668 | 6.0 | 504 | 0.8589 | 0.72 | 0.7229 |
| 0.291 | 7.0 | 588 | 0.8900 | 0.69 | 0.6894 |
| 0.2505 | 8.0 | 672 | 0.9125 | 0.71 | 0.7091 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
thissayantan/dreambooth-sayantan | thissayantan | 2023-09-05T01:48:42Z | 1 | 1 | diffusers | [diffusers, text-to-image, autotrain, base_model:stabilityai/stable-diffusion-xl-base-1.0, base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0, region:us] | text-to-image | 2023-09-05T01:48:40Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of sayantan person
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
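A hedged loading sketch (not part of the original card), assuming the repo contains diffusers-compatible SDXL LoRA weights as AutoTrain DreamBooth runs usually export:
```python
# Hedged sketch: apply the DreamBooth LoRA on top of the SDXL base model named in the card.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("thissayantan/dreambooth-sayantan")  # assumes LoRA-format weights

# Instance prompt taken from the card metadata.
image = pipe("photo of sayantan person", num_inference_steps=30).images[0]
image.save("sayantan.png")
```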
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_insert_BERT-2 | ThuyNT03 | 2023-09-05T01:32:37Z | 103 | 0 | transformers | [transformers, pytorch, xlm-roberta, text-classification, generated_from_trainer, base_model:FacebookAI/xlm-roberta-base, base_model:finetune:FacebookAI/xlm-roberta-base, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-09-04T22:51:06Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_insert_BERT-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_insert_BERT-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1237
- Accuracy: 0.71
- F1: 0.7165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0509 | 1.0 | 87 | 0.8383 | 0.59 | 0.5441 |
| 0.7214 | 2.0 | 174 | 0.7218 | 0.72 | 0.72 |
| 0.5758 | 3.0 | 261 | 0.7535 | 0.69 | 0.6956 |
| 0.4321 | 4.0 | 348 | 0.7413 | 0.73 | 0.7360 |
| 0.3364 | 5.0 | 435 | 0.8328 | 0.72 | 0.7269 |
| 0.2712 | 6.0 | 522 | 0.9267 | 0.72 | 0.7255 |
| 0.1902 | 7.0 | 609 | 1.0811 | 0.7 | 0.7074 |
| 0.1351 | 8.0 | 696 | 1.1237 | 0.71 | 0.7165 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
duwuonline/my-translation-helsinki2 | duwuonline | 2023-09-05T01:27:43Z | 104 | 0 | transformers | [transformers, pytorch, marian, text2text-generation, generated_from_trainer, base_model:duwuonline/my-translation-helsinki, base_model:finetune:duwuonline/my-translation-helsinki, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | text2text-generation | 2023-09-05T01:00:23Z |
---
license: apache-2.0
base_model: duwuonline/my-translation-helsinki
tags:
- generated_from_trainer
model-index:
- name: my-translation-helsinki2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-translation-helsinki2
This model is a fine-tuned version of [duwuonline/my-translation-helsinki](https://huggingface.co/duwuonline/my-translation-helsinki) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
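No usage example is given; a minimal sketch (not from the card) with the translation pipeline could look like this. The card does not state the language pair, so the example input is only illustrative.
```python
# Hedged sketch: run the fine-tuned MarianMT checkpoint through the translation pipeline.
from transformers import pipeline

translator = pipeline("translation", model="duwuonline/my-translation-helsinki2")
print(translator("This is a short test sentence.", max_length=64))
```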
|
ckandemir/bert-base-uncased-issues-128 | ckandemir | 2023-09-05T01:26:28Z | 115 | 0 | transformers | [transformers, pytorch, bert, fill-mask, generated_from_trainer, base_model:google-bert/bert-base-uncased, base_model:finetune:google-bert/bert-base-uncased, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | fill-mask | 2023-09-04T19:45:22Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0966 | 1.0 | 291 | 1.6190 |
| 1.6197 | 2.0 | 582 | 1.5317 |
| 1.485 | 3.0 | 873 | 1.4164 |
| 1.3992 | 4.0 | 1164 | 1.4064 |
| 1.3219 | 5.0 | 1455 | 1.3900 |
| 1.2851 | 6.0 | 1746 | 1.2096 |
| 1.2328 | 7.0 | 2037 | 1.3019 |
| 1.2113 | 8.0 | 2328 | 1.2779 |
| 1.1674 | 9.0 | 2619 | 1.2312 |
| 1.1443 | 10.0 | 2910 | 1.1830 |
| 1.1171 | 11.0 | 3201 | 1.1692 |
| 1.1067 | 12.0 | 3492 | 1.2364 |
| 1.0846 | 13.0 | 3783 | 1.1871 |
| 1.0815 | 14.0 | 4074 | 1.1354 |
| 1.054 | 15.0 | 4365 | 1.1771 |
| 1.0565 | 16.0 | 4656 | 1.2137 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
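Since this is a fill-mask checkpoint, a minimal query sketch (not part of the original card) might look like this:
```python
# Hedged sketch: query the fine-tuned masked language model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ckandemir/bert-base-uncased-issues-128")
for prediction in fill_mask("This issue is about [MASK] in the tokenizer."):
    print(prediction["token_str"], round(prediction["score"], 3))
```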
|
CzarnyRycerz/Reinforce-pixelcopter-1 | CzarnyRycerz | 2023-09-05T01:23:36Z | 0 | 0 | null | [Pixelcopter-PLE-v0, reinforce, reinforcement-learning, custom-implementation, deep-rl-class, model-index, region:us] | reinforcement-learning | 2023-09-04T23:41:34Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 32.50 +/- 24.74
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
JasonTheDeveloper/squad-bloom-3b | JasonTheDeveloper | 2023-09-05T01:21:55Z | 2 | 0 | peft | [peft, region:us] | null | 2023-09-05T01:21:53Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
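The card records only the PEFT version. A hedged loading sketch follows; the base checkpoint below is an assumption inferred from the repo name, and the authoritative value is `base_model_name_or_path` in the adapter's config.
```python
# Hedged sketch: attach the PEFT adapter to its (assumed) base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigscience/bloom-3b"  # assumption from the repo name, not stated in the card
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "JasonTheDeveloper/squad-bloom-3b")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```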
|
RafaelMayer/electra-copec-2 | RafaelMayer | 2023-09-05T01:13:38Z | 61 | 0 | transformers | [transformers, tf, electra, text-classification, generated_from_keras_callback, base_model:mrm8488/electricidad-base-discriminator, base_model:finetune:mrm8488/electricidad-base-discriminator, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-09-05T01:12:31Z |
---
base_model: mrm8488/electricidad-base-discriminator
tags:
- generated_from_keras_callback
model-index:
- name: RafaelMayer/electra-copec-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# RafaelMayer/electra-copec-2
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7303
- Validation Loss: 0.6874
- Train Accuracy: 0.8824
- Train Precision: [0.75 0.92307692]
- Train Precision W: 0.8824
- Train Recall: [0.75 0.92307692]
- Train Recall W: 0.8824
- Train F1: [0.75 0.92307692]
- Train F1 W: 0.8824
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 5, 'power': 1.0, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Precision W | Train Recall | Train Recall W | Train F1 | Train F1 W | Epoch |
|:----------:|:---------------:|:--------------:|:-----------------------:|:-----------------:|:-----------------------:|:--------------:|:-----------------------:|:----------:|:-----:|
| 0.7303 | 0.6874 | 0.8824 | [0.75 0.92307692] | 0.8824 | [0.75 0.92307692] | 0.8824 | [0.75 0.92307692] | 0.8824 | 1 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
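A minimal inference sketch (not part of the original card) for this Keras/TensorFlow checkpoint; the label mapping is not documented above, so the output indices are placeholders.
```python
# Hedged sketch: score one Spanish sentence with the fine-tuned ELECTRA classifier.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("RafaelMayer/electra-copec-2")
model = TFAutoModelForSequenceClassification.from_pretrained("RafaelMayer/electra-copec-2")

inputs = tokenizer("Texto de ejemplo para clasificar.", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```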
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_insert_w2v-2 | ThuyNT03 | 2023-09-05T01:13:16Z | 103 | 0 | transformers | [transformers, pytorch, xlm-roberta, text-classification, generated_from_trainer, base_model:FacebookAI/xlm-roberta-base, base_model:finetune:FacebookAI/xlm-roberta-base, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-09-04T22:32:59Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_insert_w2v-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_insert_w2v-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1138
- Accuracy: 0.75
- F1: 0.7539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0576 | 1.0 | 85 | 0.8693 | 0.6 | 0.5283 |
| 0.7822 | 2.0 | 170 | 0.8331 | 0.69 | 0.6665 |
| 0.6156 | 3.0 | 255 | 0.7210 | 0.72 | 0.7194 |
| 0.4447 | 4.0 | 340 | 0.8139 | 0.66 | 0.6645 |
| 0.3252 | 5.0 | 425 | 0.9348 | 0.67 | 0.6776 |
| 0.2105 | 6.0 | 510 | 0.9185 | 0.77 | 0.7718 |
| 0.1437 | 7.0 | 595 | 1.0530 | 0.75 | 0.7539 |
| 0.1479 | 8.0 | 680 | 1.1138 | 0.75 | 0.7539 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
markmp/marketmail | markmp | 2023-09-05T01:12:27Z | 0 | 0 | peft | [peft, region:us] | null | 2023-09-05T01:12:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
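The flags listed above map roughly onto a `transformers` `BitsAndBytesConfig`; the sketch below is a reconstruction for reference, not code from the original card (the base model is not named there).
```python
# Hedged reconstruction of the 8-bit quantization config listed in this card.
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```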
|
schrilax/marketing_email | schrilax | 2023-09-05T01:10:46Z | 0 | 0 | null | [arxiv:1910.09700, region:us] | null | 2023-09-05T00:43:50Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MichelNivard/codellama_Rbase_instr | MichelNivard | 2023-09-05T01:08:30Z | 2 | 0 | peft | [peft, region:us] | null | 2023-09-01T10:24:06Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
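For reference, a hedged sketch of loading a base model with the 4-bit NF4 settings listed above and attaching this adapter. The base checkpoint named below is only an assumption from the repo name; check `base_model_name_or_path` in the adapter config before relying on it.
```python
# Hedged sketch: 4-bit NF4 load of an assumed base model, then attach the adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",  # assumption, not stated in the card
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "MichelNivard/codellama_Rbase_instr")
```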
|
jschew39/marketmail | jschew39 | 2023-09-05T01:07:57Z | 3 | 0 | peft | [peft, region:us] | null | 2023-09-05T01:07:55Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
RafaelMayer/roberta-copec-2 | RafaelMayer | 2023-09-05T01:06:47Z | 62 | 0 | transformers | [transformers, tf, roberta, text-classification, generated_from_keras_callback, base_model:PlanTL-GOB-ES/roberta-base-bne, base_model:finetune:PlanTL-GOB-ES/roberta-base-bne, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-09-05T01:05:40Z |
---
license: apache-2.0
base_model: PlanTL-GOB-ES/roberta-base-bne
tags:
- generated_from_keras_callback
model-index:
- name: RafaelMayer/roberta-copec-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# RafaelMayer/roberta-copec-2
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6476
- Validation Loss: 0.6356
- Train Accuracy: 0.7647
- Train Precision: [0. 0.76470588]
- Train Precision W: 0.5848
- Train Recall: [0. 1.]
- Train Recall W: 0.7647
- Train F1: [0. 0.86666667]
- Train F1 W: 0.6627
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 5, 'power': 1.0, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Precision W | Train Recall | Train Recall W | Train F1 | Train F1 W | Epoch |
|:----------:|:---------------:|:--------------:|:-----------------------:|:-----------------:|:------------:|:--------------:|:-----------------------:|:----------:|:-----:|
| 0.6476 | 0.6356 | 0.7647 | [0. 0.76470588] | 0.5848 | [0. 1.] | 0.7647 | [0. 0.86666667] | 0.6627 | 1 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_insert_synonym-2 | ThuyNT03 | 2023-09-05T01:02:18Z | 124 | 0 | transformers | [transformers, pytorch, xlm-roberta, text-classification, generated_from_trainer, base_model:FacebookAI/xlm-roberta-base, base_model:finetune:FacebookAI/xlm-roberta-base, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-09-04T22:22:38Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_insert_synonym-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_insert_synonym-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3966
- Accuracy: 0.67
- F1: 0.6754
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0222 | 1.0 | 87 | 0.8095 | 0.65 | 0.6380 |
| 0.6487 | 2.0 | 174 | 0.7375 | 0.67 | 0.6640 |
| 0.4554 | 3.0 | 261 | 0.7962 | 0.71 | 0.7084 |
| 0.3194 | 4.0 | 348 | 0.8102 | 0.71 | 0.7161 |
| 0.2303 | 5.0 | 435 | 1.1793 | 0.65 | 0.6607 |
| 0.1728 | 6.0 | 522 | 1.1697 | 0.72 | 0.7245 |
| 0.127 | 7.0 | 609 | 1.3509 | 0.69 | 0.6943 |
| 0.0927 | 8.0 | 696 | 1.3966 | 0.67 | 0.6754 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_backtranslation-2 | ThuyNT03 | 2023-09-05T00:58:55Z | 103 | 0 | transformers | [transformers, pytorch, xlm-roberta, text-classification, generated_from_trainer, base_model:FacebookAI/xlm-roberta-base, base_model:finetune:FacebookAI/xlm-roberta-base, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-09-05T00:51:23Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_backtranslation-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_backtranslation-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1103
- Accuracy: 0.74
- F1: 0.7315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0442 | 1.0 | 87 | 0.7191 | 0.69 | 0.6652 |
| 0.7545 | 2.0 | 174 | 0.6726 | 0.73 | 0.7264 |
| 0.5743 | 3.0 | 261 | 0.6634 | 0.72 | 0.7157 |
| 0.4342 | 4.0 | 348 | 0.7801 | 0.73 | 0.7270 |
| 0.3244 | 5.0 | 435 | 0.8782 | 0.75 | 0.7438 |
| 0.2421 | 6.0 | 522 | 1.0173 | 0.73 | 0.7235 |
| 0.167 | 7.0 | 609 | 1.0822 | 0.75 | 0.7431 |
| 0.1546 | 8.0 | 696 | 1.1103 | 0.74 | 0.7315 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_VietNam-train-2 | ThuyNT03 | 2023-09-05T00:50:51Z | 103 | 0 | transformers | [transformers, pytorch, xlm-roberta, text-classification, generated_from_trainer, base_model:FacebookAI/xlm-roberta-base, base_model:finetune:FacebookAI/xlm-roberta-base, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-09-04T22:18:21Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-train-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-train-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8740
- Accuracy: 0.68
- F1: 0.6882
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1143 | 1.0 | 44 | 1.0936 | 0.4 | 0.4041 |
| 0.9843 | 2.0 | 88 | 0.8262 | 0.63 | 0.6167 |
| 0.7312 | 3.0 | 132 | 0.7333 | 0.7 | 0.6919 |
| 0.5899 | 4.0 | 176 | 0.8261 | 0.7 | 0.7020 |
| 0.4922 | 5.0 | 220 | 0.7399 | 0.71 | 0.7145 |
| 0.435 | 6.0 | 264 | 0.8382 | 0.64 | 0.6530 |
| 0.375 | 7.0 | 308 | 0.8675 | 0.7 | 0.7047 |
| 0.3161 | 8.0 | 352 | 0.8740 | 0.68 | 0.6882 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
RafaelMayer/electra-copec-1 | RafaelMayer | 2023-09-05T00:46:22Z | 60 | 0 | transformers | [transformers, tf, electra, text-classification, generated_from_keras_callback, base_model:mrm8488/electricidad-base-discriminator, base_model:finetune:mrm8488/electricidad-base-discriminator, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-09-05T00:45:10Z |
---
base_model: mrm8488/electricidad-base-discriminator
tags:
- generated_from_keras_callback
model-index:
- name: RafaelMayer/electra-copec-1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# RafaelMayer/electra-copec-1
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7863
- Validation Loss: 0.7271
- Train Accuracy: 0.1765
- Train Precision: [0.17647059 0. ]
- Train Precision W: 0.0311
- Train Recall: [1. 0.]
- Train Recall W: 0.1765
- Train F1: [0.3 0. ]
- Train F1 W: 0.0529
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 5, 'power': 1.0, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Precision W | Train Recall | Train Recall W | Train F1 | Train F1 W | Epoch |
|:----------:|:---------------:|:--------------:|:-----------------------:|:-----------------:|:------------:|:--------------:|:---------:|:----------:|:-----:|
| 0.7863 | 0.7271 | 0.1765 | [0.17647059 0. ] | 0.0311 | [1. 0.] | 0.1765 | [0.3 0. ] | 0.0529 | 1 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nbogdan/flant5-base-2ex-elaboration-1epochs | nbogdan | 2023-09-05T00:45:42Z | 0 | 0 | adapter-transformers | [adapter-transformers, adapterhub:self-explanations, t5, dataset:self-explanations, region:us] | null | 2023-09-05T00:40:36Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-base-2ex-elaboration-1epochs` for google/flan-t5-base
An [adapter](https://adapterhub.ml) for the `google/flan-t5-base` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-base")
adapter_name = model.load_adapter("nbogdan/flant5-base-2ex-elaboration-1epochs", source="hf", set_active=True)
```
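From here, a hedged usage sketch (not part of the original card): tokenize a prompt and generate with the activated adapter. Whether this adapter bundles a usable generation head is an assumption; if it does not, a seq2seq LM head has to be added first.
```python
# Hedged continuation of the snippet above; assumes a generation head is available on the model.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
inputs = tokenizer("Explain why the sky appears blue.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```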
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_delete-2 | ThuyNT03 | 2023-09-05T00:45:20Z | 124 | 0 | transformers | [transformers, pytorch, xlm-roberta, text-classification, generated_from_trainer, base_model:FacebookAI/xlm-roberta-base, base_model:finetune:FacebookAI/xlm-roberta-base, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-09-04T22:10:24Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_delete-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_delete-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8464
- Accuracy: 0.68
- F1: 0.6845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0599 | 1.0 | 87 | 0.9398 | 0.53 | 0.4615 |
| 0.7781 | 2.0 | 174 | 0.7588 | 0.65 | 0.6405 |
| 0.6771 | 3.0 | 261 | 0.7271 | 0.68 | 0.6828 |
| 0.5317 | 4.0 | 348 | 0.6991 | 0.7 | 0.7113 |
| 0.4389 | 5.0 | 435 | 0.6845 | 0.71 | 0.7092 |
| 0.3377 | 6.0 | 522 | 0.8429 | 0.7 | 0.7013 |
| 0.2595 | 7.0 | 609 | 0.8166 | 0.68 | 0.6870 |
| 0.2211 | 8.0 | 696 | 0.8464 | 0.68 | 0.6845 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
actionpace/Chronos-Hermes-v2-13b-Limarp-Lora-Merged | actionpace | 2023-09-05T00:45:05Z | 7 | 0 | null | [gguf, en, license:other, endpoints_compatible, region:us] | null | 2023-09-02T18:50:44Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* Chronos-Hermes-v2-13b-Limarp-Lora-Merged_Q5_1_4K.gguf
* Chronos-Hermes-v2-13b-Limarp-Lora-Merged_Q5_1_8K.gguf
**Source:** [Doctor-Shotgun](https://huggingface.co/Doctor-Shotgun)
**Source Model:** [Chronos-Hermes-v2-13b-Limarp-Lora-Merged](https://huggingface.co/Doctor-Shotgun/Chronos-Hermes-v2-13b-Limarp-Lora-Merged)
**Source models for Doctor-Shotgun/Chronos-Hermes-v2-13b-Limarp-Lora-Merged (Merge)**
- [Austism/chronos-hermes-13b-v2](https://huggingface.co/Austism/chronos-hermes-13b-v2)
- [lemonilia/limarp-llama2](https://huggingface.co/lemonilia/limarp-llama2) (Lora)
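A hedged sketch (not from the card) of loading one of these GGUF quants with `llama-cpp-python`; the file name and the 4K context come from the list above, and the local path is an assumption.
```python
# Hedged sketch: run a GGUF quant locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Chronos-Hermes-v2-13b-Limarp-Lora-Merged_Q5_1_4K.gguf",  # local download path
    n_ctx=4096,
)
out = llm("Write a short scene set in a quiet library.", max_tokens=128)
print(out["choices"][0]["text"])
```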
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_replace_tfidf-2 | ThuyNT03 | 2023-09-05T00:43:19Z | 103 | 0 | transformers | [transformers, pytorch, xlm-roberta, text-classification, generated_from_trainer, base_model:FacebookAI/xlm-roberta-base, base_model:finetune:FacebookAI/xlm-roberta-base, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-09-05T00:35:33Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_replace_tfidf-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_replace_tfidf-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7407
- Accuracy: 0.78
- F1: 0.7740
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.085 | 1.0 | 88 | 0.9923 | 0.66 | 0.6391 |
| 0.9033 | 2.0 | 176 | 0.6803 | 0.74 | 0.7342 |
| 0.7906 | 3.0 | 264 | 0.7208 | 0.71 | 0.6992 |
| 0.6859 | 4.0 | 352 | 0.6374 | 0.75 | 0.7483 |
| 0.5591 | 5.0 | 440 | 0.7554 | 0.76 | 0.7539 |
| 0.4588 | 6.0 | 528 | 0.8309 | 0.74 | 0.7337 |
| 0.3967 | 7.0 | 616 | 0.6894 | 0.81 | 0.8063 |
| 0.3339 | 8.0 | 704 | 0.7407 | 0.78 | 0.7740 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
actionpace/Hermes-Kimiko-13B-f16 | actionpace | 2023-09-05T00:38:56Z | 11 | 0 | null | [gguf, en, license:other, endpoints_compatible, region:us] | null | 2023-09-05T00:15:11Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* Hermes-Kimiko-13B-f16_Q5_1_4K.gguf
* Hermes-Kimiko-13B-f16_Q5_1_8K.gguf
**Source:** [Blackroot](https://huggingface.co/Blackroot)
**Source Model:** [Hermes-Kimiko-13B-f16](https://huggingface.co/Blackroot/Hermes-Kimiko-13B-f16)
**Source models for Blackroot/Hermes-Kimiko-13B-f16 (Merge)**
- [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) ([Ref](https://huggingface.co/actionpace/Nous-Hermes-Llama2-13b))
- [nRuaif/Kimiko_13B](https://huggingface.co/nRuaif/Kimiko_13B) (Lora)
|
actionpace/FrankensteinsMonster-13B | actionpace | 2023-09-05T00:35:39Z | 5 | 0 | null | [gguf, en, license:other, endpoints_compatible, region:us] | null | 2023-09-05T00:12:20Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* FrankensteinsMonster-13B_Q5_1_4K.gguf
* FrankensteinsMonster-13B_Q5_1_8K.gguf
**Source:** [Blackroot](https://huggingface.co/Blackroot)
**Source Model:** [FrankensteinsMonster-13B](https://huggingface.co/Blackroot/FrankensteinsMonster-13B)
**Source models for Blackroot/FrankensteinsMonster-13B (Merge)**
- [NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) ([Ref](https://huggingface.co/actionpace/Nous-Hermes-Llama2-13b))
- [Blackroot/Llama-2-13B-Storywriter-LORA](https://huggingface.co/Blackroot/Llama-2-13B-Storywriter-LORA) (Lora)
- [lemonilia/limarp-llama2](https://huggingface.co/lemonilia/limarp-llama2) (Lora)
|
monsoon-nlp/nyrkr-joker-llama | monsoon-nlp | 2023-09-05T00:35:39Z | 7 | 0 | transformers | [transformers, pytorch, llama, text-generation, nyc, llama2, en, dataset:jmhessel/newyorker_caption_contest, arxiv:2209.06293, license:mit, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us] | text-generation | 2023-09-04T22:33:41Z |
---
license: mit
datasets:
- jmhessel/newyorker_caption_contest
language:
- en
tags:
- nyc
- llama2
widget:
- text: "This scene takes place in the following location: a bank. Three people are standing in line at the bank. The bank teller is a traditional pirate with a hook hand, eye patch, and a parrot. The scene includes: Piracy, Bank teller.\ncaption: Can I interest you in opening an offshore account?\nexplanation of the caption:\n"
example_title: "Training prompt format"
- text: "In this task, you will see a description of an uncanny situation. Then, you will see a joke that was written about the situation. Explain how the joke relates to the situation and why it is funny.\n###\nThis scene takes place in the following location: a bank. Three people are standing in line at the bank. The bank teller is a traditional pirate with a hook hand, eye patch, and a parrot. The scene includes: Piracy, Bank teller.\ncaption: Can I interest you in opening an offshore account?\nexplanation of the caption:\n"
example_title: "Paper prompt format"
- text: "This scene takes place in the following location: a bank. Three people are standing in line at the bank. The bank teller is a traditional pirate with a hook hand, eye patch, and a parrot. The scene includes: Piracy, Bank teller.\ncaption: Can I interest you in opening an offshore account?\nthe caption is funny because"
example_title: "Suggested prompt format"
---
# nyrkr-joker-llama
Given a *New Yorker* cartoon description and caption, the model attempts to explain the joke.
Technical details:
- Based on LLaMa2-7b-hf (version 2, 7B params)
- Used [QLoRA](https://github.com/artidoro/qlora/blob/main/qlora.py) to fine-tune on [1.2k rows of New Yorker caption contest](https://huggingface.co/datasets/jmhessel/newyorker_caption_contest)
- Merged LLaMa2 with the adapter weights (from checkpoint step=160, epoch=2.7)
## Prompt options
Figure 10 of [the original paper](https://arxiv.org/abs/2209.06293) uses this format for joke explanations:
`In this task, you will see a description of an uncanny situation. Then, you will see a joke that was written about the situation. Explain how the joke relates to the situation and why it is funny.
###
{few-shot examples separated by ###, newline after "explanation of the caption:"}
This scene takes place in the following location: a bank. Three people are standing in line at the bank. The bank teller is a traditional pirate with a hook hand, eye patch, and a parrot. The scene includes: Piracy, Bank teller.
caption: Can I interest you in opening an offshore account?
explanation of the caption:
`
In training, I used just the individual example:
`This scene takes place in the following location: a bank. Three people are standing in line at the bank. The bank teller is a traditional pirate with a hook hand, eye patch, and a parrot. The scene includes: Piracy, Bank teller.
caption: Can I interest you in opening an offshore account?
explanation of the caption:\n`
In inference, I had somewhat better results with a more natural prompt (no trailing newline or space):
`This scene takes place in the following location: a bank. Three people are standing in line at the bank. The bank teller is a traditional pirate with a hook hand, eye patch, and a parrot. The scene includes: Piracy, Bank teller.
caption: Can I interest you in opening an offshore account?
the caption is funny because`
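A hedged inference sketch (not part of the original card) using the suggested prompt format above:
```python
# Hedged sketch: generate a joke explanation with the suggested prompt format.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/nyrkr-joker-llama")
model = AutoModelForCausalLM.from_pretrained("monsoon-nlp/nyrkr-joker-llama")

prompt = (
    "This scene takes place in the following location: a bank. Three people are standing in line "
    "at the bank. The bank teller is a traditional pirate with a hook hand, eye patch, and a parrot. "
    "The scene includes: Piracy, Bank teller.\n"
    "caption: Can I interest you in opening an offshore account?\n"
    "the caption is funny because"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```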
## Training script
Trained on a V100
```
git clone https://github.com/artidoro/qlora
cd qlora
pip3 install -r requirements.txt --quiet
! cd qlora && python qlora.py \
--model_name_or_path ../llama-2-7b-hf \
--output_dir ../thatsthejoke \
--logging_steps 20 \
--save_strategy steps \
--data_seed 42 \
--save_steps 80 \
--save_total_limit 10 \
--evaluation_strategy steps \
--max_new_tokens 64 \
--dataloader_num_workers 1 \
--group_by_length \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--lora_r 64 \
--lora_alpha 16 \
--lora_modules all \
--double_quant \
--quant_type nf4 \
--bits 4 \
--warmup_ratio 0.03 \
--lr_scheduler_type constant \
--gradient_checkpointing \
--dataset /content/nycaptions.jsonl \
--dataset_format 'self-instruct' \
--source_max_len 16 \
--target_max_len 512 \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 16 \
--max_steps 250 \
--eval_steps 187 \
--learning_rate 0.0002 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--lora_dropout 0.1 \
--weight_decay 0.0 \
--seed 0
```
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_replace_w2v-2 | ThuyNT03 | 2023-09-05T00:35:25Z | 103 | 0 | transformers | [transformers, pytorch, xlm-roberta, text-classification, generated_from_trainer, base_model:FacebookAI/xlm-roberta-base, base_model:finetune:FacebookAI/xlm-roberta-base, license:mit, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-09-05T00:27:35Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_replace_w2v-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_replace_w2v-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0103
- Accuracy: 0.75
- F1: 0.7433
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0863 | 1.0 | 86 | 0.8715 | 0.59 | 0.5464 |
| 0.8221 | 2.0 | 172 | 0.6132 | 0.72 | 0.7008 |
| 0.6363 | 3.0 | 258 | 0.6041 | 0.72 | 0.7189 |
| 0.5206 | 4.0 | 344 | 0.7012 | 0.73 | 0.7224 |
| 0.3526 | 5.0 | 430 | 0.8181 | 0.75 | 0.7468 |
| 0.2893 | 6.0 | 516 | 0.7950 | 0.77 | 0.7690 |
| 0.2097 | 7.0 | 602 | 0.9751 | 0.74 | 0.7335 |
| 0.1536 | 8.0 | 688 | 1.0103 | 0.75 | 0.7433 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
RafaelMayer/roberta-copec-1 | RafaelMayer | 2023-09-05T00:34:46Z | 62 | 0 | transformers | [transformers, tf, roberta, text-classification, generated_from_keras_callback, base_model:PlanTL-GOB-ES/roberta-base-bne, base_model:finetune:PlanTL-GOB-ES/roberta-base-bne, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us] | text-classification | 2023-09-05T00:26:18Z |
---
license: apache-2.0
base_model: PlanTL-GOB-ES/roberta-base-bne
tags:
- generated_from_keras_callback
model-index:
- name: RafaelMayer/roberta-copec-1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# RafaelMayer/roberta-copec-1
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6572
- Validation Loss: 0.6316
- Train Accuracy: 0.8235
- Train Precision: [0. 0.82352941]
- Train Precision W: 0.6782
- Train Recall: [0. 1.]
- Train Recall W: 0.8235
- Train F1: [0. 0.90322581]
- Train F1 W: 0.7438
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 5, 'power': 1.0, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Precision W | Train Recall | Train Recall W | Train F1 | Train F1 W | Epoch |
|:----------:|:---------------:|:--------------:|:-----------------------:|:-----------------:|:------------:|:--------------:|:-----------------------:|:----------:|:-----:|
| 0.6572 | 0.6316 | 0.8235 | [0. 0.82352941] | 0.6782 | [0. 1.] | 0.8235 | [0. 0.90322581] | 0.7438 | 1 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
YassineBenlaria/tamasheq-99-2.feature_ext-continue | YassineBenlaria | 2023-09-05T00:34:23Z | 16 | 0 | transformers | [transformers, pytorch, wav2vec2, automatic-speech-recognition, generated_from_trainer, endpoints_compatible, region:us] | automatic-speech-recognition | 2023-09-03T12:36:34Z |
---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: tamasheq-99-2.feature_ext-continue
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tamasheq-99-2.feature_ext-continue
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3689
- Wer: 0.8342
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.4856 | 5.71 | 300 | 2.9956 | 0.9974 |
| 2.3903 | 11.43 | 600 | 1.2600 | 0.8816 |
| 0.9577 | 17.14 | 900 | 1.1878 | 0.8342 |
| 0.7051 | 22.86 | 1200 | 1.1907 | 0.8053 |
| 0.5821 | 28.57 | 1500 | 1.2621 | 0.8316 |
| 0.5037 | 34.29 | 1800 | 1.3689 | 0.8342 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
adyprat/Reinforce-pcopv0
|
adyprat
| 2023-09-05T00:22:32Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-04T21:23:02Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pcopv0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 31.60 +/- 21.11
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_insert_BERT-2
|
ThuyNT03
| 2023-09-05T00:17:38Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-05T00:09:29Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_insert_BERT-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_insert_BERT-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9737
- Accuracy: 0.72
- F1: 0.7141
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0807 | 1.0 | 88 | 0.9024 | 0.64 | 0.6254 |
| 0.8512 | 2.0 | 176 | 0.6824 | 0.75 | 0.7396 |
| 0.7009 | 3.0 | 264 | 0.6368 | 0.74 | 0.7363 |
| 0.5649 | 4.0 | 352 | 0.6994 | 0.76 | 0.7494 |
| 0.458 | 5.0 | 440 | 0.8683 | 0.74 | 0.7300 |
| 0.3409 | 6.0 | 528 | 1.0337 | 0.7 | 0.6787 |
| 0.2964 | 7.0 | 616 | 0.9357 | 0.75 | 0.7459 |
| 0.2305 | 8.0 | 704 | 0.9737 | 0.72 | 0.7141 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
VegaKH/VenusXL
|
VegaKH
| 2023-09-05T00:12:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-11T14:05:29Z |
---
license: creativeml-openrail-m
---
|
nbogdan/flant5-large-2ex-paraphrasing-3epochs
|
nbogdan
| 2023-09-05T00:10:13Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"t5",
"adapterhub:self-explanations",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-05T00:09:05Z |
---
tags:
- adapter-transformers
- t5
- adapterhub:self-explanations
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-large-2ex-paraphrasing-3epochs` for google/flan-t5-large
An [adapter](https://adapterhub.ml) for the `google/flan-t5-large` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-large")
adapter_name = model.load_adapter("nbogdan/flant5-large-2ex-paraphrasing-3epochs", source="hf", set_active=True)
```
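From here, inference is a regular `generate` call. Below is a minimal sketch; it assumes the adapter is loaded into a seq2seq class (which adapter-transformers also extends with `load_adapter`), and the prompt wording is only a placeholder, since the exact template used for the paraphrasing task isn't documented in this card:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
model.load_adapter("nbogdan/flant5-large-2ex-paraphrasing-3epochs", source="hf", set_active=True)

# Placeholder prompt -- adjust it to the template used during training.
inputs = tokenizer("Paraphrase: The movie was surprisingly good.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```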
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
nightdude/config_80034
|
nightdude
| 2023-09-05T00:10:04Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T00:09:49Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
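For reference, this is roughly how the same settings would be expressed as a `BitsAndBytesConfig` when loading a base model with `transformers`. This is a sketch only; "base-model-name" is a placeholder, since this card does not say which model the adapter was trained on:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "base-model-name" is a placeholder -- substitute the model this adapter targets.
base_model = AutoModelForCausalLM.from_pretrained("base-model-name", quantization_config=bnb_config)
```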
### Framework versions
- PEFT 0.4.0.dev0
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0090
|
bigmorning
| 2023-09-04T23:59:07Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T23:58:59Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0090
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0090
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0124
- Train Accuracy: 0.0339
- Train Wermet: 14.3527
- Validation Loss: 0.8265
- Validation Accuracy: 0.0209
- Validation Wermet: 32.3895
- Epoch: 89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
| 1.3218 | 0.0249 | 0.3581 | 1.4682 | 0.0179 | 0.4510 | 25 |
| 1.1383 | 0.0260 | 0.3211 | 1.3465 | 0.0183 | 0.4226 | 26 |
| 0.9876 | 0.0270 | 0.2920 | 1.2323 | 0.0188 | 0.3966 | 27 |
| 0.8635 | 0.0278 | 0.2651 | 1.1482 | 0.0191 | 0.3749 | 28 |
| 0.7620 | 0.0284 | 0.2435 | 1.0816 | 0.0194 | 0.3565 | 29 |
| 0.6749 | 0.0290 | 0.2234 | 1.0187 | 0.0196 | 0.3433 | 30 |
| 0.5998 | 0.0295 | 0.2025 | 0.9761 | 0.0198 | 0.3319 | 31 |
| 0.5325 | 0.0300 | 0.1827 | 0.9326 | 0.0200 | 0.3213 | 32 |
| 0.4735 | 0.0305 | 0.1665 | 0.8942 | 0.0201 | 0.3110 | 33 |
| 0.4228 | 0.0308 | 0.1466 | 0.8735 | 0.0202 | 0.3026 | 34 |
| 0.3747 | 0.0312 | 0.1293 | 0.8408 | 0.0203 | 0.2931 | 35 |
| 0.3331 | 0.0316 | 0.1111 | 0.8253 | 0.0204 | 0.2891 | 36 |
| 0.2947 | 0.0319 | 0.0962 | 0.8084 | 0.0205 | 0.2849 | 37 |
| 0.2601 | 0.0322 | 0.0817 | 0.7906 | 0.0205 | 0.2783 | 38 |
| 0.2291 | 0.0324 | 0.0706 | 0.7876 | 0.0206 | 0.2755 | 39 |
| 0.2009 | 0.0327 | 0.0596 | 0.7723 | 0.0207 | 0.2712 | 40 |
| 0.1750 | 0.0329 | 0.0504 | 0.7629 | 0.0207 | 0.2692 | 41 |
| 0.1510 | 0.0331 | 0.0410 | 0.7650 | 0.0207 | 0.2684 | 42 |
| 0.1319 | 0.0333 | 0.0367 | 0.7533 | 0.0207 | 0.2655 | 43 |
| 0.1121 | 0.0335 | 0.0292 | 0.7589 | 0.0207 | 0.2647 | 44 |
| 0.0956 | 0.0336 | 0.0253 | 0.7579 | 0.0208 | 0.2642 | 45 |
| 0.0812 | 0.0337 | 0.0254 | 0.7584 | 0.0208 | 0.2625 | 46 |
| 0.0694 | 0.0338 | 0.0332 | 0.7555 | 0.0208 | 0.2693 | 47 |
| 0.0592 | 0.0339 | 0.0319 | 0.7534 | 0.0208 | 0.2629 | 48 |
| 0.0499 | 0.0339 | 0.0487 | 0.7587 | 0.0208 | 0.3030 | 49 |
| 0.0409 | 0.0339 | 0.0615 | 0.7577 | 0.0208 | 0.2810 | 50 |
| 0.0347 | 0.0340 | 0.0859 | 0.7603 | 0.0208 | 0.3534 | 51 |
| 0.0286 | 0.0340 | 0.1928 | 0.7554 | 0.0209 | 0.5822 | 52 |
| 0.0267 | 0.0340 | 0.3131 | 0.7664 | 0.0208 | 1.7372 | 53 |
| 0.0243 | 0.0340 | 1.3154 | 0.7525 | 0.0209 | 0.7770 | 54 |
| 0.0206 | 0.0340 | 0.8121 | 0.7532 | 0.0209 | 0.9253 | 55 |
| 0.0174 | 0.0340 | 0.9253 | 0.7574 | 0.0209 | 1.4865 | 56 |
| 0.0135 | 0.0340 | 1.1761 | 0.7592 | 0.0209 | 1.5813 | 57 |
| 0.0111 | 0.0340 | 1.7125 | 0.7631 | 0.0209 | 1.8950 | 58 |
| 0.0096 | 0.0340 | 1.9230 | 0.7664 | 0.0209 | 2.4432 | 59 |
| 0.0082 | 0.0340 | 2.5718 | 0.7693 | 0.0209 | 3.3565 | 60 |
| 0.0073 | 0.0340 | 3.5489 | 0.7747 | 0.0209 | 3.7191 | 61 |
| 0.0063 | 0.0340 | 3.7801 | 0.7756 | 0.0209 | 4.4728 | 62 |
| 0.0054 | 0.0340 | 4.0145 | 0.7795 | 0.0209 | 5.0058 | 63 |
| 0.0048 | 0.0340 | 4.9652 | 0.7821 | 0.0210 | 4.9937 | 64 |
| 0.0042 | 0.0340 | 5.5984 | 0.7914 | 0.0209 | 8.3869 | 65 |
| 0.0205 | 0.0339 | 9.9212 | 0.7811 | 0.0209 | 21.1156 | 66 |
| 0.0184 | 0.0339 | 8.3175 | 0.7619 | 0.0210 | 0.5360 | 67 |
| 0.0080 | 0.0340 | 0.6373 | 0.7554 | 0.0211 | 0.4090 | 68 |
| 0.0052 | 0.0340 | 0.5550 | 0.7528 | 0.0211 | 0.3938 | 69 |
| 0.0038 | 0.0340 | 0.4678 | 0.7551 | 0.0211 | 0.7911 | 70 |
| 0.0032 | 0.0340 | 1.1632 | 0.7617 | 0.0211 | 0.5495 | 71 |
| 0.0028 | 0.0340 | 0.7869 | 0.7643 | 0.0211 | 1.4089 | 72 |
| 0.0025 | 0.0340 | 1.5997 | 0.7681 | 0.0211 | 1.1413 | 73 |
| 0.0023 | 0.0340 | 1.7042 | 0.7719 | 0.0211 | 1.7576 | 74 |
| 0.0021 | 0.0340 | 2.3363 | 0.7750 | 0.0211 | 2.2434 | 75 |
| 0.0019 | 0.0340 | 2.9550 | 0.7777 | 0.0211 | 2.3071 | 76 |
| 0.0017 | 0.0340 | 3.1713 | 0.7831 | 0.0211 | 3.3338 | 77 |
| 0.0015 | 0.0340 | 3.9077 | 0.7852 | 0.0211 | 3.6442 | 78 |
| 0.0014 | 0.0340 | 4.3375 | 0.7900 | 0.0211 | 4.0113 | 79 |
| 0.0013 | 0.0340 | 4.9777 | 0.7946 | 0.0211 | 5.1689 | 80 |
| 0.0011 | 0.0340 | 5.9846 | 0.7968 | 0.0211 | 5.6006 | 81 |
| 0.0010 | 0.0340 | 6.6595 | 0.8033 | 0.0211 | 6.1998 | 82 |
| 0.0009 | 0.0340 | 7.3520 | 0.8058 | 0.0211 | 7.6034 | 83 |
| 0.0008 | 0.0340 | 8.1210 | 0.8138 | 0.0211 | 7.8284 | 84 |
| 0.0007 | 0.0340 | 8.9352 | 0.8170 | 0.0211 | 9.1346 | 85 |
| 0.0006 | 0.0340 | 10.2307 | 0.8185 | 0.0211 | 10.8739 | 86 |
| 0.0006 | 0.0340 | 12.2734 | 0.8245 | 0.0211 | 12.5682 | 87 |
| 0.0005 | 0.0340 | 13.1276 | 0.8314 | 0.0211 | 14.4535 | 88 |
| 0.0124 | 0.0339 | 14.3527 | 0.8265 | 0.0209 | 32.3895 | 89 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
johaanm/test-planner-alpha-V7.0
|
johaanm
| 2023-09-04T23:57:26Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-04T23:57:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
MattBatchelor/ppo-LunarLander-v2
|
MattBatchelor
| 2023-09-04T23:56:05Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-04T23:55:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 247.31 +/- 20.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption -- check this repo's files for the actual `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained agent.
model = PPO.load(load_from_hub("MattBatchelor/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip"))
```
|
AndrewMarcHarris/ppo-LunarLander-v2
|
AndrewMarcHarris
| 2023-09-04T23:55:47Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-04T23:55:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.76 +/- 12.20
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption -- check this repo's files for the actual `.zip` name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained agent.
model = PPO.load(load_from_hub("AndrewMarcHarris/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip"))
```
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_insert_synonym-2
|
ThuyNT03
| 2023-09-04T23:53:12Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T23:43:26Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_insert_synonym-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_insert_synonym-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1196
- Accuracy: 0.75
- F1: 0.7413
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.04 | 1.0 | 88 | 0.8053 | 0.64 | 0.6127 |
| 0.7333 | 2.0 | 176 | 0.7600 | 0.71 | 0.7035 |
| 0.5406 | 3.0 | 264 | 0.6719 | 0.71 | 0.7080 |
| 0.4339 | 4.0 | 352 | 0.7426 | 0.75 | 0.7393 |
| 0.3085 | 5.0 | 440 | 0.9125 | 0.73 | 0.6985 |
| 0.23 | 6.0 | 528 | 0.9200 | 0.76 | 0.7527 |
| 0.1612 | 7.0 | 616 | 1.0423 | 0.74 | 0.7314 |
| 0.137 | 8.0 | 704 | 1.1196 | 0.75 | 0.7413 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nbogdan/flant5-base-2ex-paraphrasing-1epochs
|
nbogdan
| 2023-09-04T23:50:00Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-04T23:49:44Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-base-2ex-paraphrasing-1epochs` for google/flan-t5-base
An [adapter](https://adapterhub.ml) for the `google/flan-t5-base` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-base")
adapter_name = model.load_adapter("nbogdan/flant5-base-2ex-paraphrasing-1epochs", source="hf", set_active=True)
```
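For inference with the activated adapter, a short sketch follows. Assumptions: the adapter is used through a seq2seq class (also extended with `load_adapter` by adapter-transformers), and the prompt is only an illustration:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
model.load_adapter("nbogdan/flant5-base-2ex-paraphrasing-1epochs", source="hf", set_active=True)

# Illustrative prompt only -- the training template is not documented here.
inputs = tokenizer("Paraphrase: The service was slow but friendly.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```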
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0085
|
bigmorning
| 2023-09-04T23:45:54Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T23:45:45Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0085
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0085
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0008
- Train Accuracy: 0.0340
- Train Wermet: 8.1210
- Validation Loss: 0.8138
- Validation Accuracy: 0.0211
- Validation Wermet: 7.8284
- Epoch: 84
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
| 1.3218 | 0.0249 | 0.3581 | 1.4682 | 0.0179 | 0.4510 | 25 |
| 1.1383 | 0.0260 | 0.3211 | 1.3465 | 0.0183 | 0.4226 | 26 |
| 0.9876 | 0.0270 | 0.2920 | 1.2323 | 0.0188 | 0.3966 | 27 |
| 0.8635 | 0.0278 | 0.2651 | 1.1482 | 0.0191 | 0.3749 | 28 |
| 0.7620 | 0.0284 | 0.2435 | 1.0816 | 0.0194 | 0.3565 | 29 |
| 0.6749 | 0.0290 | 0.2234 | 1.0187 | 0.0196 | 0.3433 | 30 |
| 0.5998 | 0.0295 | 0.2025 | 0.9761 | 0.0198 | 0.3319 | 31 |
| 0.5325 | 0.0300 | 0.1827 | 0.9326 | 0.0200 | 0.3213 | 32 |
| 0.4735 | 0.0305 | 0.1665 | 0.8942 | 0.0201 | 0.3110 | 33 |
| 0.4228 | 0.0308 | 0.1466 | 0.8735 | 0.0202 | 0.3026 | 34 |
| 0.3747 | 0.0312 | 0.1293 | 0.8408 | 0.0203 | 0.2931 | 35 |
| 0.3331 | 0.0316 | 0.1111 | 0.8253 | 0.0204 | 0.2891 | 36 |
| 0.2947 | 0.0319 | 0.0962 | 0.8084 | 0.0205 | 0.2849 | 37 |
| 0.2601 | 0.0322 | 0.0817 | 0.7906 | 0.0205 | 0.2783 | 38 |
| 0.2291 | 0.0324 | 0.0706 | 0.7876 | 0.0206 | 0.2755 | 39 |
| 0.2009 | 0.0327 | 0.0596 | 0.7723 | 0.0207 | 0.2712 | 40 |
| 0.1750 | 0.0329 | 0.0504 | 0.7629 | 0.0207 | 0.2692 | 41 |
| 0.1510 | 0.0331 | 0.0410 | 0.7650 | 0.0207 | 0.2684 | 42 |
| 0.1319 | 0.0333 | 0.0367 | 0.7533 | 0.0207 | 0.2655 | 43 |
| 0.1121 | 0.0335 | 0.0292 | 0.7589 | 0.0207 | 0.2647 | 44 |
| 0.0956 | 0.0336 | 0.0253 | 0.7579 | 0.0208 | 0.2642 | 45 |
| 0.0812 | 0.0337 | 0.0254 | 0.7584 | 0.0208 | 0.2625 | 46 |
| 0.0694 | 0.0338 | 0.0332 | 0.7555 | 0.0208 | 0.2693 | 47 |
| 0.0592 | 0.0339 | 0.0319 | 0.7534 | 0.0208 | 0.2629 | 48 |
| 0.0499 | 0.0339 | 0.0487 | 0.7587 | 0.0208 | 0.3030 | 49 |
| 0.0409 | 0.0339 | 0.0615 | 0.7577 | 0.0208 | 0.2810 | 50 |
| 0.0347 | 0.0340 | 0.0859 | 0.7603 | 0.0208 | 0.3534 | 51 |
| 0.0286 | 0.0340 | 0.1928 | 0.7554 | 0.0209 | 0.5822 | 52 |
| 0.0267 | 0.0340 | 0.3131 | 0.7664 | 0.0208 | 1.7372 | 53 |
| 0.0243 | 0.0340 | 1.3154 | 0.7525 | 0.0209 | 0.7770 | 54 |
| 0.0206 | 0.0340 | 0.8121 | 0.7532 | 0.0209 | 0.9253 | 55 |
| 0.0174 | 0.0340 | 0.9253 | 0.7574 | 0.0209 | 1.4865 | 56 |
| 0.0135 | 0.0340 | 1.1761 | 0.7592 | 0.0209 | 1.5813 | 57 |
| 0.0111 | 0.0340 | 1.7125 | 0.7631 | 0.0209 | 1.8950 | 58 |
| 0.0096 | 0.0340 | 1.9230 | 0.7664 | 0.0209 | 2.4432 | 59 |
| 0.0082 | 0.0340 | 2.5718 | 0.7693 | 0.0209 | 3.3565 | 60 |
| 0.0073 | 0.0340 | 3.5489 | 0.7747 | 0.0209 | 3.7191 | 61 |
| 0.0063 | 0.0340 | 3.7801 | 0.7756 | 0.0209 | 4.4728 | 62 |
| 0.0054 | 0.0340 | 4.0145 | 0.7795 | 0.0209 | 5.0058 | 63 |
| 0.0048 | 0.0340 | 4.9652 | 0.7821 | 0.0210 | 4.9937 | 64 |
| 0.0042 | 0.0340 | 5.5984 | 0.7914 | 0.0209 | 8.3869 | 65 |
| 0.0205 | 0.0339 | 9.9212 | 0.7811 | 0.0209 | 21.1156 | 66 |
| 0.0184 | 0.0339 | 8.3175 | 0.7619 | 0.0210 | 0.5360 | 67 |
| 0.0080 | 0.0340 | 0.6373 | 0.7554 | 0.0211 | 0.4090 | 68 |
| 0.0052 | 0.0340 | 0.5550 | 0.7528 | 0.0211 | 0.3938 | 69 |
| 0.0038 | 0.0340 | 0.4678 | 0.7551 | 0.0211 | 0.7911 | 70 |
| 0.0032 | 0.0340 | 1.1632 | 0.7617 | 0.0211 | 0.5495 | 71 |
| 0.0028 | 0.0340 | 0.7869 | 0.7643 | 0.0211 | 1.4089 | 72 |
| 0.0025 | 0.0340 | 1.5997 | 0.7681 | 0.0211 | 1.1413 | 73 |
| 0.0023 | 0.0340 | 1.7042 | 0.7719 | 0.0211 | 1.7576 | 74 |
| 0.0021 | 0.0340 | 2.3363 | 0.7750 | 0.0211 | 2.2434 | 75 |
| 0.0019 | 0.0340 | 2.9550 | 0.7777 | 0.0211 | 2.3071 | 76 |
| 0.0017 | 0.0340 | 3.1713 | 0.7831 | 0.0211 | 3.3338 | 77 |
| 0.0015 | 0.0340 | 3.9077 | 0.7852 | 0.0211 | 3.6442 | 78 |
| 0.0014 | 0.0340 | 4.3375 | 0.7900 | 0.0211 | 4.0113 | 79 |
| 0.0013 | 0.0340 | 4.9777 | 0.7946 | 0.0211 | 5.1689 | 80 |
| 0.0011 | 0.0340 | 5.9846 | 0.7968 | 0.0211 | 5.6006 | 81 |
| 0.0010 | 0.0340 | 6.6595 | 0.8033 | 0.0211 | 6.1998 | 82 |
| 0.0009 | 0.0340 | 7.3520 | 0.8058 | 0.0211 | 7.6034 | 83 |
| 0.0008 | 0.0340 | 8.1210 | 0.8138 | 0.0211 | 7.8284 | 84 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_Mixed-train-2
|
ThuyNT03
| 2023-09-04T23:43:15Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T23:39:26Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-train-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-train-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7200
- Accuracy: 0.77
- F1: 0.7634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1141 | 1.0 | 44 | 1.1019 | 0.31 | 0.2344 |
| 1.0868 | 2.0 | 88 | 1.0677 | 0.44 | 0.3501 |
| 0.9464 | 3.0 | 132 | 0.9689 | 0.56 | 0.5371 |
| 0.7829 | 4.0 | 176 | 0.7724 | 0.67 | 0.6278 |
| 0.678 | 5.0 | 220 | 0.8115 | 0.71 | 0.6960 |
| 0.6379 | 6.0 | 264 | 0.6987 | 0.74 | 0.7313 |
| 0.5801 | 7.0 | 308 | 0.6804 | 0.78 | 0.7765 |
| 0.528 | 8.0 | 352 | 0.7200 | 0.77 | 0.7634 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
StudentLLM/Alpagasus-2-13B-QLoRA
|
StudentLLM
| 2023-09-04T23:34:51Z | 3 | 0 |
peft
|
[
"peft",
"en",
"region:us"
] | null | 2023-08-09T13:08:03Z |
---
library_name: peft
language:
- en
---
# Model Details
Please check our [Github Repository](https://github.com/gauss5930/AlpaGasus2-QLoRA/tree/main)
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
jasonxxr666/lora-trained-xl-colab
|
jasonxxr666
| 2023-09-04T23:34:43Z | 3 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-14T02:28:16Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of paige cat girl
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - jasonxxr666/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of paige cat girl using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
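A minimal inference sketch with `diffusers` (assuming a diffusers version with SDXL LoRA support; the step count and other sampler settings are arbitrary defaults):
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Use the same fp16-fix VAE that was used during training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("jasonxxr666/lora-trained-xl-colab")

# Instance prompt from this card's metadata.
image = pipe("a photo of paige cat girl", num_inference_steps=30).images[0]
image.save("sample.png")
```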
|
StudentLLM/Alpagasus-2-7B-QLoRA
|
StudentLLM
| 2023-09-04T23:34:07Z | 7 | 0 |
peft
|
[
"peft",
"en",
"region:us"
] | null | 2023-08-09T13:23:45Z |
---
library_name: peft
language:
- en
---
# Model Details
Please check our [Github Repository](https://github.com/gauss5930/AlpaGasus2-QLoRA/tree/main)
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
matgu23/tst
|
matgu23
| 2023-09-04T23:33:04Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-15T02:02:49Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sflr woman
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - matgu23/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sflr woman using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
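As a rough usage sketch with `diffusers` (assuming SDXL LoRA support; note the weights live in the `matgu23/tst` repo even though the heading above names a Colab training run):
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matgu23/tst")

image = pipe("a photo of sflr woman", num_inference_steps=30).images[0]  # instance prompt from this card
image.save("sample.png")
```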
|
ThuyNT03/xlm-roberta-base-Final_Mixed-aug_swap-2
|
ThuyNT03
| 2023-09-04T23:31:50Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T23:24:41Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_Mixed-aug_swap-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_Mixed-aug_swap-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9261
- Accuracy: 0.76
- F1: 0.7558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0488 | 1.0 | 87 | 0.8904 | 0.59 | 0.5101 |
| 0.8402 | 2.0 | 174 | 0.8465 | 0.64 | 0.6153 |
| 0.6864 | 3.0 | 261 | 0.7985 | 0.7 | 0.6849 |
| 0.5088 | 4.0 | 348 | 0.7521 | 0.72 | 0.6996 |
| 0.3444 | 5.0 | 435 | 0.7432 | 0.76 | 0.7496 |
| 0.262 | 6.0 | 522 | 0.8831 | 0.75 | 0.7463 |
| 0.1787 | 7.0 | 609 | 0.9219 | 0.75 | 0.7452 |
| 0.1361 | 8.0 | 696 | 0.9261 | 0.76 | 0.7558 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
elami/vit-base-patch16-224-finetuned-flower
|
elami
| 2023-09-04T23:24:28Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-04T23:13:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 2.0.1+cu118
- Datasets 2.7.1
- Tokenizers 0.13.3
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0075
|
bigmorning
| 2023-09-04T23:19:21Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T23:19:14Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0075
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0075
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0023
- Train Accuracy: 0.0340
- Train Wermet: 1.7042
- Validation Loss: 0.7719
- Validation Accuracy: 0.0211
- Validation Wermet: 1.7576
- Epoch: 74
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
| 1.3218 | 0.0249 | 0.3581 | 1.4682 | 0.0179 | 0.4510 | 25 |
| 1.1383 | 0.0260 | 0.3211 | 1.3465 | 0.0183 | 0.4226 | 26 |
| 0.9876 | 0.0270 | 0.2920 | 1.2323 | 0.0188 | 0.3966 | 27 |
| 0.8635 | 0.0278 | 0.2651 | 1.1482 | 0.0191 | 0.3749 | 28 |
| 0.7620 | 0.0284 | 0.2435 | 1.0816 | 0.0194 | 0.3565 | 29 |
| 0.6749 | 0.0290 | 0.2234 | 1.0187 | 0.0196 | 0.3433 | 30 |
| 0.5998 | 0.0295 | 0.2025 | 0.9761 | 0.0198 | 0.3319 | 31 |
| 0.5325 | 0.0300 | 0.1827 | 0.9326 | 0.0200 | 0.3213 | 32 |
| 0.4735 | 0.0305 | 0.1665 | 0.8942 | 0.0201 | 0.3110 | 33 |
| 0.4228 | 0.0308 | 0.1466 | 0.8735 | 0.0202 | 0.3026 | 34 |
| 0.3747 | 0.0312 | 0.1293 | 0.8408 | 0.0203 | 0.2931 | 35 |
| 0.3331 | 0.0316 | 0.1111 | 0.8253 | 0.0204 | 0.2891 | 36 |
| 0.2947 | 0.0319 | 0.0962 | 0.8084 | 0.0205 | 0.2849 | 37 |
| 0.2601 | 0.0322 | 0.0817 | 0.7906 | 0.0205 | 0.2783 | 38 |
| 0.2291 | 0.0324 | 0.0706 | 0.7876 | 0.0206 | 0.2755 | 39 |
| 0.2009 | 0.0327 | 0.0596 | 0.7723 | 0.0207 | 0.2712 | 40 |
| 0.1750 | 0.0329 | 0.0504 | 0.7629 | 0.0207 | 0.2692 | 41 |
| 0.1510 | 0.0331 | 0.0410 | 0.7650 | 0.0207 | 0.2684 | 42 |
| 0.1319 | 0.0333 | 0.0367 | 0.7533 | 0.0207 | 0.2655 | 43 |
| 0.1121 | 0.0335 | 0.0292 | 0.7589 | 0.0207 | 0.2647 | 44 |
| 0.0956 | 0.0336 | 0.0253 | 0.7579 | 0.0208 | 0.2642 | 45 |
| 0.0812 | 0.0337 | 0.0254 | 0.7584 | 0.0208 | 0.2625 | 46 |
| 0.0694 | 0.0338 | 0.0332 | 0.7555 | 0.0208 | 0.2693 | 47 |
| 0.0592 | 0.0339 | 0.0319 | 0.7534 | 0.0208 | 0.2629 | 48 |
| 0.0499 | 0.0339 | 0.0487 | 0.7587 | 0.0208 | 0.3030 | 49 |
| 0.0409 | 0.0339 | 0.0615 | 0.7577 | 0.0208 | 0.2810 | 50 |
| 0.0347 | 0.0340 | 0.0859 | 0.7603 | 0.0208 | 0.3534 | 51 |
| 0.0286 | 0.0340 | 0.1928 | 0.7554 | 0.0209 | 0.5822 | 52 |
| 0.0267 | 0.0340 | 0.3131 | 0.7664 | 0.0208 | 1.7372 | 53 |
| 0.0243 | 0.0340 | 1.3154 | 0.7525 | 0.0209 | 0.7770 | 54 |
| 0.0206 | 0.0340 | 0.8121 | 0.7532 | 0.0209 | 0.9253 | 55 |
| 0.0174 | 0.0340 | 0.9253 | 0.7574 | 0.0209 | 1.4865 | 56 |
| 0.0135 | 0.0340 | 1.1761 | 0.7592 | 0.0209 | 1.5813 | 57 |
| 0.0111 | 0.0340 | 1.7125 | 0.7631 | 0.0209 | 1.8950 | 58 |
| 0.0096 | 0.0340 | 1.9230 | 0.7664 | 0.0209 | 2.4432 | 59 |
| 0.0082 | 0.0340 | 2.5718 | 0.7693 | 0.0209 | 3.3565 | 60 |
| 0.0073 | 0.0340 | 3.5489 | 0.7747 | 0.0209 | 3.7191 | 61 |
| 0.0063 | 0.0340 | 3.7801 | 0.7756 | 0.0209 | 4.4728 | 62 |
| 0.0054 | 0.0340 | 4.0145 | 0.7795 | 0.0209 | 5.0058 | 63 |
| 0.0048 | 0.0340 | 4.9652 | 0.7821 | 0.0210 | 4.9937 | 64 |
| 0.0042 | 0.0340 | 5.5984 | 0.7914 | 0.0209 | 8.3869 | 65 |
| 0.0205 | 0.0339 | 9.9212 | 0.7811 | 0.0209 | 21.1156 | 66 |
| 0.0184 | 0.0339 | 8.3175 | 0.7619 | 0.0210 | 0.5360 | 67 |
| 0.0080 | 0.0340 | 0.6373 | 0.7554 | 0.0211 | 0.4090 | 68 |
| 0.0052 | 0.0340 | 0.5550 | 0.7528 | 0.0211 | 0.3938 | 69 |
| 0.0038 | 0.0340 | 0.4678 | 0.7551 | 0.0211 | 0.7911 | 70 |
| 0.0032 | 0.0340 | 1.1632 | 0.7617 | 0.0211 | 0.5495 | 71 |
| 0.0028 | 0.0340 | 0.7869 | 0.7643 | 0.0211 | 1.4089 | 72 |
| 0.0025 | 0.0340 | 1.5997 | 0.7681 | 0.0211 | 1.1413 | 73 |
| 0.0023 | 0.0340 | 1.7042 | 0.7719 | 0.0211 | 1.7576 | 74 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
matsuo-lab/weblab-10b
|
matsuo-lab
| 2023-09-04T23:17:28Z | 1,883 | 63 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-04T04:55:47Z |
---
license: cc-by-nc-4.0
---
# weblab-10b
# Overview
This repository provides a Japanese-centric multilingual GPT-NeoX model of 10 billion parameters.
* **Library**
The model was trained using code based on [EleutherAI/gpt-neox](https://github.com/EleutherAI/gpt-neox).
* **Model architecture**
A 36-layer, 4864-hidden-size transformer-based language model.
* **Pre-training**
The model was trained on around **600B** tokens from a mixture of the following corpora.
- [Japanese C4](https://huggingface.co/datasets/mc4)
- [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
* **Model Series**
| Variant | Link |
| :-- | :--|
| weblab-10b-instruction-sft | https://huggingface.co/matsuo-lab/weblab-10b-instruction-sft |
| weblab-10b | https://huggingface.co/matsuo-lab/weblab-10b |
* **Authors**
Takeshi Kojima
---
# Benchmarking
* **Japanese benchmark: JGLUE 8-task (2023-08-27)**
  - *We used the [Stability-AI/lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness/tree/2f1583c0735eacdfdfa5b7d656074b69577b6774) library for evaluation.*
  - *The 8-task average accuracy is based on results of JCommonsenseQA-1.1, JNLI-1.1, MARC-ja-1.1, JSQuAD-1.1, jaqket_v2-0.2, xlsum_ja-1.0, xwinograd_ja, and mgsm-1.0.*
  - *Model loading is performed with float16, and evaluation is performed with template version 0.3 using few-shot in-context learning.*
  - *The number of few-shots is 3,3,3,2,1,1,0,5.*
  - *special_tokens_map.json was modified to avoid errors during evaluation of the second half of the benchmarks; as a result, the first-half results differ slightly from earlier runs.*
| model | average | jcommonsenseqa | jnli | marc_ja | jsquad | jaqket_v2 | xlsum_ja | xwinograd_ja | mgsm |
| :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- |
| weblab-10b-instruction-sft | 59.11 | 74.62 | 66.56 | 95.49 | 78.34 | 63.32 | 20.57 | 71.95 | 2 |
| weblab-10b | 50.74 | 66.58 | 53.74 | 82.07 | 62.94 | 56.19 | 10.03 | 71.95 | 2.4 |
* **Japanese benchmark: JGLUE 4-task (2023-08-18)**
  - *We used the [Stability-AI/lm-evaluation-harness](https://github.com/Stability-AI/lm-evaluation-harness/tree/2f1583c0735eacdfdfa5b7d656074b69577b6774) library for evaluation.*
  - *The 4-task average accuracy is based on results of JCommonsenseQA-1.1, JNLI-1.1, MARC-ja-1.1, and JSQuAD-1.1.*
  - *Model loading is performed with float16, and evaluation is performed with template version 0.3 using few-shot in-context learning.*
  - *The number of few-shots is 3,3,3,2.*
| Model | Average | JCommonsenseQA | JNLI | MARC-ja | JSQuAD |
| :-- | :-- | :-- | :-- | :-- | :-- |
| weblab-10b-instruction-sft | 78.78 | 74.35 | 65.65 | 96.06 | 79.04 |
| weblab-10b | 66.38 | 65.86 | 54.19 | 84.49 | 60.98 |
---
# How to use the model
~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("matsuo-lab/weblab-10b")
model = AutoModelForCausalLM.from_pretrained("matsuo-lab/weblab-10b", torch_dtype=torch.float16)

if torch.cuda.is_available():
    model = model.to("cuda")

text = "吾輩は猫である。"
token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=100,
        do_sample=True,
        temperature=0.7,
        top_p=0.95
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
~~~~
---
# License
[cc-by-nc-4.0](https://creativecommons.org/licenses/by-nc/4.0/)
|
FourthBrainGenAI/marketmail
|
FourthBrainGenAI
| 2023-09-04T23:07:40Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-04T23:07:35Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
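Expressed as a `transformers` quantization config, the 8-bit setup above corresponds roughly to the sketch below. "base-model-name" is a placeholder; the card does not state which model was fine-tuned:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_8bit=True)

# "base-model-name" is a placeholder -- substitute the model this adapter targets.
base_model = AutoModelForCausalLM.from_pretrained("base-model-name", quantization_config=bnb_config)
```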
### Framework versions
- PEFT 0.6.0.dev0
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0070
|
bigmorning
| 2023-09-04T23:06:07Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T23:06:00Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0070
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0070
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0052
- Train Accuracy: 0.0340
- Train Wermet: 0.5550
- Validation Loss: 0.7528
- Validation Accuracy: 0.0211
- Validation Wermet: 0.3938
- Epoch: 69
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
| 1.3218 | 0.0249 | 0.3581 | 1.4682 | 0.0179 | 0.4510 | 25 |
| 1.1383 | 0.0260 | 0.3211 | 1.3465 | 0.0183 | 0.4226 | 26 |
| 0.9876 | 0.0270 | 0.2920 | 1.2323 | 0.0188 | 0.3966 | 27 |
| 0.8635 | 0.0278 | 0.2651 | 1.1482 | 0.0191 | 0.3749 | 28 |
| 0.7620 | 0.0284 | 0.2435 | 1.0816 | 0.0194 | 0.3565 | 29 |
| 0.6749 | 0.0290 | 0.2234 | 1.0187 | 0.0196 | 0.3433 | 30 |
| 0.5998 | 0.0295 | 0.2025 | 0.9761 | 0.0198 | 0.3319 | 31 |
| 0.5325 | 0.0300 | 0.1827 | 0.9326 | 0.0200 | 0.3213 | 32 |
| 0.4735 | 0.0305 | 0.1665 | 0.8942 | 0.0201 | 0.3110 | 33 |
| 0.4228 | 0.0308 | 0.1466 | 0.8735 | 0.0202 | 0.3026 | 34 |
| 0.3747 | 0.0312 | 0.1293 | 0.8408 | 0.0203 | 0.2931 | 35 |
| 0.3331 | 0.0316 | 0.1111 | 0.8253 | 0.0204 | 0.2891 | 36 |
| 0.2947 | 0.0319 | 0.0962 | 0.8084 | 0.0205 | 0.2849 | 37 |
| 0.2601 | 0.0322 | 0.0817 | 0.7906 | 0.0205 | 0.2783 | 38 |
| 0.2291 | 0.0324 | 0.0706 | 0.7876 | 0.0206 | 0.2755 | 39 |
| 0.2009 | 0.0327 | 0.0596 | 0.7723 | 0.0207 | 0.2712 | 40 |
| 0.1750 | 0.0329 | 0.0504 | 0.7629 | 0.0207 | 0.2692 | 41 |
| 0.1510 | 0.0331 | 0.0410 | 0.7650 | 0.0207 | 0.2684 | 42 |
| 0.1319 | 0.0333 | 0.0367 | 0.7533 | 0.0207 | 0.2655 | 43 |
| 0.1121 | 0.0335 | 0.0292 | 0.7589 | 0.0207 | 0.2647 | 44 |
| 0.0956 | 0.0336 | 0.0253 | 0.7579 | 0.0208 | 0.2642 | 45 |
| 0.0812 | 0.0337 | 0.0254 | 0.7584 | 0.0208 | 0.2625 | 46 |
| 0.0694 | 0.0338 | 0.0332 | 0.7555 | 0.0208 | 0.2693 | 47 |
| 0.0592 | 0.0339 | 0.0319 | 0.7534 | 0.0208 | 0.2629 | 48 |
| 0.0499 | 0.0339 | 0.0487 | 0.7587 | 0.0208 | 0.3030 | 49 |
| 0.0409 | 0.0339 | 0.0615 | 0.7577 | 0.0208 | 0.2810 | 50 |
| 0.0347 | 0.0340 | 0.0859 | 0.7603 | 0.0208 | 0.3534 | 51 |
| 0.0286 | 0.0340 | 0.1928 | 0.7554 | 0.0209 | 0.5822 | 52 |
| 0.0267 | 0.0340 | 0.3131 | 0.7664 | 0.0208 | 1.7372 | 53 |
| 0.0243 | 0.0340 | 1.3154 | 0.7525 | 0.0209 | 0.7770 | 54 |
| 0.0206 | 0.0340 | 0.8121 | 0.7532 | 0.0209 | 0.9253 | 55 |
| 0.0174 | 0.0340 | 0.9253 | 0.7574 | 0.0209 | 1.4865 | 56 |
| 0.0135 | 0.0340 | 1.1761 | 0.7592 | 0.0209 | 1.5813 | 57 |
| 0.0111 | 0.0340 | 1.7125 | 0.7631 | 0.0209 | 1.8950 | 58 |
| 0.0096 | 0.0340 | 1.9230 | 0.7664 | 0.0209 | 2.4432 | 59 |
| 0.0082 | 0.0340 | 2.5718 | 0.7693 | 0.0209 | 3.3565 | 60 |
| 0.0073 | 0.0340 | 3.5489 | 0.7747 | 0.0209 | 3.7191 | 61 |
| 0.0063 | 0.0340 | 3.7801 | 0.7756 | 0.0209 | 4.4728 | 62 |
| 0.0054 | 0.0340 | 4.0145 | 0.7795 | 0.0209 | 5.0058 | 63 |
| 0.0048 | 0.0340 | 4.9652 | 0.7821 | 0.0210 | 4.9937 | 64 |
| 0.0042 | 0.0340 | 5.5984 | 0.7914 | 0.0209 | 8.3869 | 65 |
| 0.0205 | 0.0339 | 9.9212 | 0.7811 | 0.0209 | 21.1156 | 66 |
| 0.0184 | 0.0339 | 8.3175 | 0.7619 | 0.0210 | 0.5360 | 67 |
| 0.0080 | 0.0340 | 0.6373 | 0.7554 | 0.0211 | 0.4090 | 68 |
| 0.0052 | 0.0340 | 0.5550 | 0.7528 | 0.0211 | 0.3938 | 69 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
nbogdan/flant5-base-2ex-overall-1epochs
|
nbogdan
| 2023-09-04T22:53:58Z | 2 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-04T22:53:49Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-base-2ex-overall-1epochs` for google/flan-t5-base
An [adapter](https://adapterhub.ml) for the `google/flan-t5-base` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-base")
adapter_name = model.load_adapter("nbogdan/flant5-base-2ex-overall-1epochs", source="hf", set_active=True)
```
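A minimal inference sketch follows; it assumes the adapter is loaded into a seq2seq class (also extended with `load_adapter` by adapter-transformers) and uses a made-up prompt, since the exact self-explanations format isn't given in this card:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
model.load_adapter("nbogdan/flant5-base-2ex-overall-1epochs", source="hf", set_active=True)

# Made-up prompt -- replace with the self-explanations task format used in training.
inputs = tokenizer("Review: The plot dragged, but the acting was great. What is the overall sentiment?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```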
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
Kapiche/twitter-roberta-base-sentiment-latest
|
Kapiche
| 2023-09-04T22:49:50Z | 286 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:tweet_eval",
"arxiv:2202.03829",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-07T13:23:27Z |
---
language: en
widget:
- text: Covid cases are increasing fast!
datasets:
- tweet_eval
---
# Twitter-roBERTa-base for Sentiment Analysis - UPDATED (2022)
This is a RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021, and finetuned for sentiment analysis with the TweetEval benchmark.
The original Twitter-based RoBERTa model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.
- Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829).
- Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms).
<b>Labels</b>:
0 -> Negative;
1 -> Neutral;
2 -> Positive
This sentiment analysis model has been integrated into [TweetNLP](https://github.com/cardiffnlp/tweetnlp). You can access the demo [here](https://tweetnlp.org).
## Example Pipeline
```python
from transformers import pipeline

model_path = "cardiffnlp/twitter-roberta-base-sentiment-latest"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("Covid cases are increasing fast!")
```
```
[{'label': 'Negative', 'score': 0.7236}]
```
## Full classification example
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoConfig
import numpy as np
from scipy.special import softmax
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
MODEL = "cardiffnlp/twitter-roberta-base-sentiment-latest"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
config = AutoConfig.from_pretrained(MODEL)
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
#model.save_pretrained(MODEL)
text = "Covid cases are increasing fast!"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Covid cases are increasing fast!"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
# Print labels and scores
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = config.id2label[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
Output:
```
1) Negative 0.7236
2) Neutral 0.2287
3) Positive 0.0477
```
### References
```
@inproceedings{camacho-collados-etal-2022-tweetnlp,
title = "{T}weet{NLP}: Cutting-Edge Natural Language Processing for Social Media",
author = "Camacho-collados, Jose and
Rezaee, Kiamehr and
Riahi, Talayeh and
Ushio, Asahi and
Loureiro, Daniel and
Antypas, Dimosthenis and
Boisson, Joanne and
Espinosa Anke, Luis and
Liu, Fangyu and
Mart{\'\i}nez C{\'a}mara, Eugenio and others",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2022",
address = "Abu Dhabi, UAE",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-demos.5",
pages = "38--49"
}
```
```
@inproceedings{loureiro-etal-2022-timelms,
title = "{T}ime{LM}s: Diachronic Language Models from {T}witter",
author = "Loureiro, Daniel and
Barbieri, Francesco and
Neves, Leonardo and
Espinosa Anke, Luis and
Camacho-collados, Jose",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-demo.25",
doi = "10.18653/v1/2022.acl-demo.25",
pages = "251--260"
}
```
|
rshei/layoutlmv3-finetuned-cord_100
|
rshei
| 2023-09-04T22:47:01Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord-layoutlmv3",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-29T05:48:50Z |
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord-layoutlmv3
type: cord-layoutlmv3
config: cord
split: test
args: cord
metrics:
- name: Precision
type: precision
value: 0.9243884358784284
- name: Recall
type: recall
value: 0.9333832335329342
- name: F1
type: f1
value: 0.9288640595903166
- name: Accuracy
type: accuracy
value: 0.9363327674023769
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3467
- Precision: 0.9244
- Recall: 0.9334
- F1: 0.9289
- Accuracy: 0.9363
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 4.17 | 250 | 0.5174 | 0.8469 | 0.8735 | 0.8600 | 0.8790 |
| 0.5511 | 8.33 | 500 | 0.3975 | 0.8999 | 0.9147 | 0.9072 | 0.9194 |
| 0.5511 | 12.5 | 750 | 0.3872 | 0.9015 | 0.9184 | 0.9099 | 0.9189 |
| 0.1802 | 16.67 | 1000 | 0.3416 | 0.9180 | 0.9296 | 0.9238 | 0.9338 |
| 0.1802 | 20.83 | 1250 | 0.3311 | 0.9159 | 0.9289 | 0.9223 | 0.9359 |
| 0.0836 | 25.0 | 1500 | 0.3457 | 0.9192 | 0.9281 | 0.9236 | 0.9334 |
| 0.0836 | 29.17 | 1750 | 0.3347 | 0.9202 | 0.9319 | 0.9260 | 0.9291 |
| 0.0473 | 33.33 | 2000 | 0.3677 | 0.9194 | 0.9304 | 0.9249 | 0.9253 |
| 0.0473 | 37.5 | 2250 | 0.3433 | 0.9279 | 0.9341 | 0.9310 | 0.9376 |
| 0.0342 | 41.67 | 2500 | 0.3467 | 0.9244 | 0.9334 | 0.9289 | 0.9363 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_prompt_tuning_500_10_3000_8_e-1_s6789_v4_l4_v20_extra
|
KingKazma
| 2023-09-04T22:40:39Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-04T22:35:51Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
kikinamatata/model_2
|
kikinamatata
| 2023-09-04T22:37:23Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-04T19:56:25Z |
---
license: creativeml-openrail-m
base_model: models/model_1
dataset: None
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - kikinamatata/model_2
This pipeline was finetuned from **models/model_1** on the **None** dataset. Below are some example images generated with the finetuned pipeline using the following prompt: None:
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
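A minimal inference sketch with diffusers, assuming the repository stores the full fine-tuned SDXL pipeline (the prompt is a hypothetical placeholder, since the card does not list one):
```python
import torch
from diffusers import DiffusionPipeline

# Load the fine-tuned pipeline directly from the Hub; the fp16-fix VAE used for
# training is expected to be bundled with the saved pipeline.
pipe = DiffusionPipeline.from_pretrained("kikinamatata/model_2", torch_dtype=torch.float16).to("cuda")
image = pipe(prompt="a scenic mountain landscape at sunset", num_inference_steps=30).images[0]
image.save("sample.png")
```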
|
actionpace/LLaMA2-13B-Holomax
|
actionpace
| 2023-09-04T22:30:15Z | 1 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-04T21:58:05Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* LLaMA2-13B-Holomax_Q5_1_4K.gguf
* LLaMA2-13B-Holomax_Q5_1_8K.gguf
**Source:** [KoboldAI](https://huggingface.co/KoboldAI)
**Source Model:** [LLaMA2-13B-Holomax](https://huggingface.co/KoboldAI/LLaMA2-13B-Holomax)
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0055
|
bigmorning
| 2023-09-04T22:26:20Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T22:26:12Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0055
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0055
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0243
- Train Accuracy: 0.0340
- Train Wermet: 1.3154
- Validation Loss: 0.7525
- Validation Accuracy: 0.0209
- Validation Wermet: 0.7770
- Epoch: 54
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
| 1.3218 | 0.0249 | 0.3581 | 1.4682 | 0.0179 | 0.4510 | 25 |
| 1.1383 | 0.0260 | 0.3211 | 1.3465 | 0.0183 | 0.4226 | 26 |
| 0.9876 | 0.0270 | 0.2920 | 1.2323 | 0.0188 | 0.3966 | 27 |
| 0.8635 | 0.0278 | 0.2651 | 1.1482 | 0.0191 | 0.3749 | 28 |
| 0.7620 | 0.0284 | 0.2435 | 1.0816 | 0.0194 | 0.3565 | 29 |
| 0.6749 | 0.0290 | 0.2234 | 1.0187 | 0.0196 | 0.3433 | 30 |
| 0.5998 | 0.0295 | 0.2025 | 0.9761 | 0.0198 | 0.3319 | 31 |
| 0.5325 | 0.0300 | 0.1827 | 0.9326 | 0.0200 | 0.3213 | 32 |
| 0.4735 | 0.0305 | 0.1665 | 0.8942 | 0.0201 | 0.3110 | 33 |
| 0.4228 | 0.0308 | 0.1466 | 0.8735 | 0.0202 | 0.3026 | 34 |
| 0.3747 | 0.0312 | 0.1293 | 0.8408 | 0.0203 | 0.2931 | 35 |
| 0.3331 | 0.0316 | 0.1111 | 0.8253 | 0.0204 | 0.2891 | 36 |
| 0.2947 | 0.0319 | 0.0962 | 0.8084 | 0.0205 | 0.2849 | 37 |
| 0.2601 | 0.0322 | 0.0817 | 0.7906 | 0.0205 | 0.2783 | 38 |
| 0.2291 | 0.0324 | 0.0706 | 0.7876 | 0.0206 | 0.2755 | 39 |
| 0.2009 | 0.0327 | 0.0596 | 0.7723 | 0.0207 | 0.2712 | 40 |
| 0.1750 | 0.0329 | 0.0504 | 0.7629 | 0.0207 | 0.2692 | 41 |
| 0.1510 | 0.0331 | 0.0410 | 0.7650 | 0.0207 | 0.2684 | 42 |
| 0.1319 | 0.0333 | 0.0367 | 0.7533 | 0.0207 | 0.2655 | 43 |
| 0.1121 | 0.0335 | 0.0292 | 0.7589 | 0.0207 | 0.2647 | 44 |
| 0.0956 | 0.0336 | 0.0253 | 0.7579 | 0.0208 | 0.2642 | 45 |
| 0.0812 | 0.0337 | 0.0254 | 0.7584 | 0.0208 | 0.2625 | 46 |
| 0.0694 | 0.0338 | 0.0332 | 0.7555 | 0.0208 | 0.2693 | 47 |
| 0.0592 | 0.0339 | 0.0319 | 0.7534 | 0.0208 | 0.2629 | 48 |
| 0.0499 | 0.0339 | 0.0487 | 0.7587 | 0.0208 | 0.3030 | 49 |
| 0.0409 | 0.0339 | 0.0615 | 0.7577 | 0.0208 | 0.2810 | 50 |
| 0.0347 | 0.0340 | 0.0859 | 0.7603 | 0.0208 | 0.3534 | 51 |
| 0.0286 | 0.0340 | 0.1928 | 0.7554 | 0.0209 | 0.5822 | 52 |
| 0.0267 | 0.0340 | 0.3131 | 0.7664 | 0.0208 | 1.7372 | 53 |
| 0.0243 | 0.0340 | 1.3154 | 0.7525 | 0.0209 | 0.7770 | 54 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
CiroN2022/awesome-toys
|
CiroN2022
| 2023-09-04T22:16:54Z | 10 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] |
text-to-image
| 2023-09-04T22:16:51Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: awe_toys
widget:
- text: awe_toys
---
# Awesome Toys

<p>Example prompts:</p><ul><li><p>Rocky Vader: A mashup of the iconic Rocky Balboa and Darth Vader, bringing the power of the Force to the boxing ring. With boxing:0.6 and sci-fi:0.4 elements, this action figure packs a punch!</p></li><li><p>SpiderPool: Part Spider-Man, part Deadpool, this acrobatic antihero swings into action with equal parts wit:0.5 and wall-crawling skills:0.5, making it a fan-favorite collectible.</p></li><li><p>WonderFury: A blend of Wonder Woman and Mad Max, this fierce warrior combines superhero:0.7 and post-apocalyptic:0.3 vibes for a truly unique action figure.</p></li><li><p>The Jokernator: A fusion of The Joker and The Terminator, this figure boasts chaos:0.6 and robotic precision:0.4, making it a charismatic yet deadly adversary.</p></li><li><p>Hannibal T-lecter: A crossover between Hannibal Lecter and the T-800, this action figure oozes cannibalistic charm:0.6 and cyborg menace:0.4.</p></li><li><p>Wolverine Ranger: A hybrid of Wolverine and the Power Rangers, this figure combines mutant powers:0.6 with colorful teamwork:0.4 for epic battles against evil.</p></li><li><p>Captain Frodo: Mixing Captain America and Frodo Baggins, this action figure embodies courage:0.7 and hobbit-sized heroics:0.3, perfect for fantasy adventures.</p></li><li><p>Yoda Trooper: A fusion of Yoda and a Stormtrooper, this figure brings wisdom:0.6 and galactic loyalty:0.4 to the forefront of the battle against the dark side.</p></li><li><p>SuperPirate: Combining Superman and Captain Jack Sparrow, this action figure marries superhero strength:0.6 with pirate swagger:0.4 on the high seas.</p></li><li><p>Hellboy Potter: Merging Hellboy with Harry Potter, this figure wields supernatural abilities:0.6 alongside wizardry:0.4, ready to take on any mystical threat.</p></li><li><p>BatThor: A fusion of Batman and Thor, this action figure strikes a balance between vigilante justice:0.5 and godly thunder:0.5.</p></li><li><p>PredaFlash: Mixing Predator and The Flash, this figure races through the jungle:0.6 with lightning speed:0.4, hunting its prey in the blink of an eye.</p></li><li><p>Zorrotrax: A crossover between Zorro and Black Panther, this action figure showcases swashbuckling finesse:0.6 and Wakandan technology:0.4.</p></li><li><p>Hulk Solo: Combining The Hulk and Han Solo, this figure embodies rage-induced strength:0.6 and smuggler charisma:0.4.</p></li><li><p>Iron-Scorpion: Merging Iron Man and Scorpion from Mortal Kombat, this action figure boasts high-tech armor:0.6 and a deadly stinger:0.4.</p></li><li><p>PredaDredd: A fusion of Predator and Judge Dredd, this figure enforces brutal justice:0.6 with alien cunning:0.4 in a dystopian future.</p></li><li><p>Venom-Terminator: Mixing Venom and The Terminator, this action figure embodies symbiotic menace:0.6 and relentless cyborg pursuit:0.4.</p></li><li><p>Deadstroke: A crossover between Deadpool and Deathstroke, this figure is a master of both humor:0.5 and mercenary skills:0.5.</p></li><li><p>Grootpool: Combining Groot and Deadpool, this action figure offers a mix of lovable tree antics:0.6 and chaotic humor:0.4.</p></li><li><p>Robo-Hannibal: Merging RoboCop and Hannibal Lecter, this figure patrols the streets:0.6 while harboring a taste for the macabre:0.4.</p></li><li><p>Black-Widow Trooper: A hybrid of Black Widow and a Stormtrooper, this action figure embodies espionage:0.6 and galactic loyalty:0.4.</p></li><li><p>Spock Vader: Mixing Spock from Star Trek and Darth Vader, this figure is a logical yet formidable force:0.6 in the 
galaxy.</p></li><li><p>Green Arrowwing: A fusion of Green Arrow and Hawkeye, this action figure boasts archery precision:0.6 and vigilante justice:0.4.</p></li><li><p>Hermoine Terminator: Combining Hermione Granger and The Terminator, this figure wields wizardry:0.6 alongside robotic determination:0.4.</p></li><li><p>Thorlock Holmes: A crossover between Thor and Sherlock Holmes, this action figure wields godly powers:0.6 and deductive reasoning:0.4.</p></li><li><p>Aquaman of Steel: Merging Aquaman and Superman, this figure combines underwater strength:0.6 with Kryptonian might:0.4.</p></li><li><p>Dare-Wonder: A hybrid of Daredevil and Wonder Woman, this action figure embodies blind justice:0.6 and Amazonian warrior prowess:0.4.</p></li><li><p>Luke Sky-Bat: Mixing Luke Skywalker and Batman, this figure balances Jedi training:0.6 with dark knight detective skills:0.4.</p></li><li><p>Flashpool: Combining The Flash and Deadpool, this action figure races through battles:0.6 with a comedic edge:0.4.</p></li><li><p>Magneto Panther: A fusion of Magneto and Black Panther, this figure controls magnetic forces:0.6 and Wakandan technology:0.4.</p></li><li><p>Cyborg the Hedgehog: Merging Cyborg from DC and Sonic the Hedgehog, this action figure boasts high-tech enhancements:0.6 and supersonic speed:0.4.</p></li><li><p>Wonderpool Woman: A crossover between Wonder Woman and Deadpool, this figure is a warrior with an irreverent twist:0.5 and a lasso of humor:0.5.</p></li><li><p>Venom-Matrix: Combining Venom and Neo from The Matrix, this action figure embodies symbiotic chaos:0.6 and digital rebellion:0.4.</p></li><li><p>Thor-Pirate: Merging Thor and Captain Jack Sparrow, this figure wields Mjölnir:0.6 with a pirate's charm:0.4 on the high seas.</p></li><li><p>Super-Bond: A hybrid of Superman and James Bond, this action figure combines superhuman abilities:0.6 with spy gadgets:0.4.</p></li><li><p>Loki Ranger: Mixing Loki and the Power Rangers, this figure embodies trickster magic:0.6 and colorful teamwork:0.4.</p></li><li><p>Groot of the Galaxy: A fusion of Groot and Guardians of the Galaxy, this action figure offers a mix of tree heroics:0.6 and cosmic adventures:0.4.</p></li><li><p>HawkTrek: Combining Hawkeye and Star Trek, this figure boasts marksmanship:0.6 and interstellar exploration:0.4.</p></li><li><p>BatPirate: Merging Batman and Captain Jack Sparrow, this action figure patrols Gotham's waters:0.6 with swashbuckling flair:0.4.</p></li><li><p>Preda-Wonder: A crossover between Predator and Wonder Woman, this figure is a fierce warrior with extraterrestrial charm:0.5 and Amazonian strength:0.5.</p></li><li><p>Iron Khan: A fusion of Iron Man and Genghis Khan, this action figure combines high-tech armor:0.6 with conquering leadership:0.4, ready to lead any battle.</p></li><li><p>Aquawick: Mixing Aquaman and John Wick, this figure wields aquatic powers:0.6 alongside deadly assassin skills:0.4.</p></li><li><p>Black Widow-Strange: A hybrid of Black Widow and Doctor Strange, this action figure embodies espionage:0.5 and mystic mastery:0.5.</p></li><li><p>Deadthor: Combining Deadpool and Thor, this figure brings humor:0.5 and thunderous might:0.5 to any battle.</p></li><li><p>Harley-Witch: A crossover between Harley Quinn and the Witch from Left 4 Dead, this action figure offers chaos:0.6 with a touch of the supernatural:0.4.</p></li><li><p>Green-Alien Arrow: Merging Green Arrow and an Alien Xenomorph, this figure boasts archery precision:0.6 and extraterrestrial menace:0.4.</p></li><li><p>Robo-Hulk: A 
fusion of RoboCop and The Hulk, this action figure patrols the streets:0.6 while unleashing unstoppable rage:0.4.</p></li><li><p>ZorroVader: Mixing Zorro and Darth Vader, this figure is a swashbuckling Sith Lord with a penchant for dueling:0.5 and tyranny:0.5.</p></li><li><p>Preda-Pirate: A hybrid of Predator and a classic Pirate, this action figure hunts its prey with alien cunning:0.6 and swashbuckling flair:0.4.</p></li><li><p>Wolverine-Samurai: Combining Wolverine and a Samurai, this figure embodies mutant ferocity:0.6 with disciplined swordsmanship:0.4.</p></li><li><p>Flash-Matrix: A fusion of The Flash and Neo from The Matrix, this action figure races through the digital world:0.6 with incredible speed:0.4.</p></li><li><p>Hannibal-Joker: Merging Hannibal Lecter and The Joker, this figure combines culinary skills:0.6 with chaotic madness:0.4.</p></li><li><p>Super-Ranger: Combining Superman and a Power Ranger, this action figure wields superhuman strength:0.6 alongside colorful teamwork:0.4.</p></li><li><p>Wonder-Scorpion: A crossover between Wonder Woman and Scorpion from Mortal Kombat, this figure is a warrior with a stinger:0.5 and an Amazonian spirit:0.5.</p></li><li><p>Venom-Trek: Mixing Venom and Star Trek, this action figure embodies symbiotic exploration:0.6 and interstellar chaos:0.4.</p></li><li><p>Predator-Pool: A blend of Predator and Deadpool, this figure hunts with humor:0.5 and extraterrestrial cunning:0.5.</p></li><li><p>Bat-Hannibal: Combining Batman and Hannibal Lecter, this action figure patrols Gotham City:0.6 while savoring the macabre:0.4.</p></li><li><p>Gandalf-Ranger: Merging Gandalf and a Power Ranger, this figure wields wizardry:0.6 with colorful teamwork:0.4 in the fight against evil.</p></li><li><p>Thor-Sherlock: A fusion of Thor and Sherlock Holmes, this action figure balances godly strength:0.5 with deductive reasoning:0.5.</p></li><li><p>Cyber-Amazon: Combining Cyborg and Wonder Woman, this figure embodies technological prowess:0.6 with Amazonian warrior spirit:0.4.</p></li><li><p>Dare-Sparrow: A crossover between Daredevil and Captain Jack Sparrow, this action figure combines blind justice:0.6 with pirate swagger:0.4.</p></li><li><p>Luke-Witcher: Mixing Luke Skywalker and Geralt from The Witcher, this figure wields a lightsaber:0.6 alongside monster hunting skills:0.4.</p></li><li><p>Flash-Groot: A hybrid of The Flash and Groot, this action figure races through adventures:0.6 with a lovable tree's charm:0.4.</p></li><li><p>Magneto-Matrix: Combining Magneto and Neo from The Matrix, this figure controls magnetic forces:0.6 and challenges the digital world:0.4.</p></li><li><p>Hulk-Ranger: Merging The Hulk and a Power Ranger, this action figure embodies gamma-powered teamwork:0.6 and heroic strength:0.4.</p></li><li><p>Super-Spock: A fusion of Superman and Spock from Star Trek, this figure combines Kryptonian might:0.6 with logical precision:0.4.</p></li><li><p>Bat-Matrix: Combining Batman and Neo from The Matrix, this action figure patrols Gotham's digital streets:0.6 with martial arts mastery:0.4.</p></li><li><p>Preda-Bond: A crossover between Predator and James Bond, this figure hunts with alien technology:0.6 and spy gadgets:0.4.</p></li><li><p>Zorro-Hannibal: Mixing Zorro and Hannibal Lecter, this action figure is a swashbuckling gourmet:0.5 with a taste for the theatrical:0.5.</p></li><li><p>Wonder-Groot: A hybrid of Wonder Woman and Groot, this figure wields an Amazonian lasso:0.6 with a tree's gentle strength:0.4.</p></li><li><p>Venom-Elf: Combining 
Venom and Legolas from Lord of the Rings, this action figure embodies symbiotic archery:0.6 and elven agility:0.4.</p></li><li><p>Hulk-Witch: Merging The Hulk and a Witch from a fairy tale, this figure smashes with gamma-powered fury:0.6 while wielding mystical powers:0.4.</p></li><li><p>Green-Predator Arrow: A fusion of Green Arrow and Predator, this action figure boasts archery precision:0.6 and extraterrestrial hunting skills:0.4.</p></li><li><p>Robo-Hannibal Bond: Combining RoboCop, Hannibal Lecter, and James Bond, this figure patrols the streets:0.4 while savoring the macabre:0.3 and using spy gadgets:0.3.</p></li><li><p>Zorro-Trek: Mixing Zorro and Star Trek, this action figure is a swashbuckling space explorer:0.5 with a flair for diplomacy:0.5.</p></li><li><p>Preda-Ranger: A crossover between Predator and a Power Ranger, this figure hunts with extraterrestrial cunning:0.6 and colorful teamwork:0.4.</p></li><li><p>Wolverine-Pirate: Combining Wolverine and Captain Jack Sparrow, this action figure embodies mutant ferocity:0.6 with swashbuckling charm:0.4.</p></li><li><p>Flash-Matrix Assassin: Merging The Flash, Neo from The Matrix, and John Wick, this figure races through digital worlds:0.3 with speed:0.3 while wielding martial arts:0.2 and gun-fu skills:0.2.</p></li><li><p>Hannibal-Witch: A blend of Hannibal Lecter and a Witch from a fairy tale, this figure savors the macabre:0.5 while wielding mystical powers:0.5.</p></li><li><p>Super-Robo Bond: Combining Superman, RoboCop, and James Bond, this action figure possesses superhuman strength:0.3, patrols the streets:0.3 with robotic precision:0.2, and uses spy gadgets:0.2.</p></li><li><p>Thor-Samurai: Merging Thor and a Samurai, this figure wields godly strength:0.4 and disciplined swordsmanship:0.4.</p></li><li><p>Cyber-Hawkeye: A fusion of Cyborg and Hawkeye, this action figure embodies technological prowess:0.4 and archery precision:0.4.</p></li><li><p>Flash-Spock: Combining The Flash and Spock from Star Trek, this figure races through adventures:0.4 with logical precision:0.4.</p></li><li><p>Wolverine-Matrix: A hybrid of Wolverine and Neo from The Matrix, this action figure embodies mutant ferocity:0.4 and challenges the digital world:0.4.</p></li><li><p>Venom-Witcher: Mixing Venom and Geralt from The Witcher, this figure embodies symbiotic chaos:0.4 and monster hunting skills:0.4.</p></li><li><p>Green-Hannibal Arrow: A crossover between Green Arrow, Hannibal Lecter, and the Witch from Left 4 Dead, this action figure boasts archery precision:0.3, savoring the macabre:0.3, and supernatural power:0.3.</p></li><li><p>Hulk-Zorro: Merging The Hulk and Zorro, this figure smashes with gamma-powered fury:0.4 while showcasing swashbuckling finesse:0.4.</p></li><li><p>Super-Predator: A fusion of Superman and Predator, this action figure possesses superhuman strength:0.4 and extraterrestrial hunting skills:0.4.</p></li><li><p>Preda-Witcher: Combining Predator and Geralt from The Witcher, this figure hunts with extraterrestrial cunning:0.4 and monster slaying skills:0.4.</p></li><li><p>Robo-Gandalf: A hybrid of RoboCop and Gandalf, this action figure patrols the streets:0.4 with magical wisdom:0.4.</p></li><li><p>Zorro-Elf Arrow: Mixing Zorro, Legolas from Lord of the Rings, and Green Arrow, this figure is a swashbuckling archer:0.3 with elven agility:0.3 and archery precision:0.3.</p></li><li><p>Hannibal-Trek: A crossover between Hannibal Lecter and Star Trek, this action figure savors the macabre:0.4 while exploring the 
cosmos:0.4.</p></li><li><p>Wonder-Pirate Woman: Combining Wonder Woman and Captain Jack Sparrow, this figure wields Amazonian strength:0.4 and pirate swagger:0.4.</p></li><li><p>Flash-Zorro: Merging The Flash and Zorro, this action figure races through adventures:0.4 while showcasing swashbuckling finesse:0.4.</p></li><li><p>Preda-Hawkeye: A fusion of Predator and Hawkeye, this figure hunts with extraterrestrial cunning:0.4 and archery precision:0.4.</p></li><li><p>Iron-Matrix Man: Combining Iron Man and Neo from The Matrix, this action figure possesses high-tech armor:0.4 and challenges the digital world:0.4.</p></li><li><p>Thor-Bond: A blend of Thor and James Bond, this figure wields godly strength:0.4 and uses spy gadgets:0.4.</p></li><li><p>Bat-Elf Holmes: Mixing Batman, Legolas from Lord of the Rings, and Sherlock Holmes, this action figure patrols Gotham:0.3 with elven agility:0.3 and deductive reasoning:0.3.</p></li><li><p>Green-Gandalf Arrow: A crossover between Green Arrow, Gandalf, and Legolas from Lord of the Rings, this figure boasts archery precision:0.3, magical wisdom:0.3, and elven agility:0.3.</p></li><li><p>Robo-Pirate Cop: Merging RoboCop and Captain Jack Sparrow, this action figure patrols the high seas:0.4 with robotic precision:0.3 and pirate swagger:0.3.</p></li><li><p>Zorro-Groot Arrow: Combining Zorro, Groot, and Green Arrow, this figure is a swashbuckling tree archer:0.3 with elven agility:0.3 and archery precision:0.3.</p></li><li><p>Hannibal-Gandalf Holmes: A fusion of Hannibal Lecter, Gandalf, and Sherlock Holmes, this action figure savors the macabre:0.3, wields magical wisdom:0.3, and employs deductive reasoning:0.3.</p></li><li><p>Flash-Pirate Holmes: Mixing The Flash, Captain Jack Sparrow, and Sherlock Holmes, this figure races through adventures:0.3 with pirate swagger:0.3 and deductive reasoning:0.3.</p></li><li><p>Preda-Elf Arrow: A hybrid of Predator, Legolas from Lord of the Rings, and Green Arrow, this action figure hunts with extraterrestrial cunning:0.3, elven agility:0.3, and archery precision:0.3.</p></li><li><p>Iron-Matrix Pirate: Combining Iron Man, Neo from The Matrix, and Captain Jack Sparrow, this figure possesses high-tech armor:0.3, challenges the digital world:0.3, and sails the high seas:0.3.</p></li><li><p>Thor-Hannibal Cop: A blend of Thor, Hannibal Lecter, and RoboCop, this action figure wields godly strength:0.3, savors the macabre:0.3, and patrols the streets:0.3.</p></li><li><p>Wonder-Zorro Woman: Mixing Wonder Woman, Zorro, and Sherlock Holmes, this figure wields Amazonian strength:0.3, showcases swashbuckling finesse:0.3, and employs deductive reasoning:0.3.</p></li><li><p>Venom-Pirate Matrix: A crossover between Venom, Captain Jack Sparrow, and Neo from The Matrix, this action figure embodies symbiotic chaos:0.3, pirate swagger:0.3, and digital rebellion:0.3.</p></li><li><p>Flash-Groot Arrow: Combining The Flash, Groot, and Green Arrow, this figure races through adventures:0.3, offers a lovable tree's charm:0.3, and boasts archery precision:0.3.</p></li><li><p>Preda-Hulk Arrow: Merging Predator, The Hulk, and Green Arrow, this action figure hunts with extraterrestrial cunning:0.3, unleashes unstoppable rage:0.3, and showcases archery precision:0.3.</p></li></ul>
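A minimal usage sketch with diffusers, assuming the LoRA weights are stored under this repo's default safetensors filename (the prompt simply combines the `awe_toys` trigger word with one of the example ideas above):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("CiroN2022/awesome-toys")  # assumes the default LoRA weight file in the repo
image = pipe("awe_toys, Rocky Vader action figure, studio lighting", num_inference_steps=30).images[0]
image.save("awe_toys_sample.png")
```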
## Image examples for the model:









|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0050
|
bigmorning
| 2023-09-04T22:13:05Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T22:12:56Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0050
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0050
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0499
- Train Accuracy: 0.0339
- Train Wermet: 0.0487
- Validation Loss: 0.7587
- Validation Accuracy: 0.0208
- Validation Wermet: 0.3030
- Epoch: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
| 1.3218 | 0.0249 | 0.3581 | 1.4682 | 0.0179 | 0.4510 | 25 |
| 1.1383 | 0.0260 | 0.3211 | 1.3465 | 0.0183 | 0.4226 | 26 |
| 0.9876 | 0.0270 | 0.2920 | 1.2323 | 0.0188 | 0.3966 | 27 |
| 0.8635 | 0.0278 | 0.2651 | 1.1482 | 0.0191 | 0.3749 | 28 |
| 0.7620 | 0.0284 | 0.2435 | 1.0816 | 0.0194 | 0.3565 | 29 |
| 0.6749 | 0.0290 | 0.2234 | 1.0187 | 0.0196 | 0.3433 | 30 |
| 0.5998 | 0.0295 | 0.2025 | 0.9761 | 0.0198 | 0.3319 | 31 |
| 0.5325 | 0.0300 | 0.1827 | 0.9326 | 0.0200 | 0.3213 | 32 |
| 0.4735 | 0.0305 | 0.1665 | 0.8942 | 0.0201 | 0.3110 | 33 |
| 0.4228 | 0.0308 | 0.1466 | 0.8735 | 0.0202 | 0.3026 | 34 |
| 0.3747 | 0.0312 | 0.1293 | 0.8408 | 0.0203 | 0.2931 | 35 |
| 0.3331 | 0.0316 | 0.1111 | 0.8253 | 0.0204 | 0.2891 | 36 |
| 0.2947 | 0.0319 | 0.0962 | 0.8084 | 0.0205 | 0.2849 | 37 |
| 0.2601 | 0.0322 | 0.0817 | 0.7906 | 0.0205 | 0.2783 | 38 |
| 0.2291 | 0.0324 | 0.0706 | 0.7876 | 0.0206 | 0.2755 | 39 |
| 0.2009 | 0.0327 | 0.0596 | 0.7723 | 0.0207 | 0.2712 | 40 |
| 0.1750 | 0.0329 | 0.0504 | 0.7629 | 0.0207 | 0.2692 | 41 |
| 0.1510 | 0.0331 | 0.0410 | 0.7650 | 0.0207 | 0.2684 | 42 |
| 0.1319 | 0.0333 | 0.0367 | 0.7533 | 0.0207 | 0.2655 | 43 |
| 0.1121 | 0.0335 | 0.0292 | 0.7589 | 0.0207 | 0.2647 | 44 |
| 0.0956 | 0.0336 | 0.0253 | 0.7579 | 0.0208 | 0.2642 | 45 |
| 0.0812 | 0.0337 | 0.0254 | 0.7584 | 0.0208 | 0.2625 | 46 |
| 0.0694 | 0.0338 | 0.0332 | 0.7555 | 0.0208 | 0.2693 | 47 |
| 0.0592 | 0.0339 | 0.0319 | 0.7534 | 0.0208 | 0.2629 | 48 |
| 0.0499 | 0.0339 | 0.0487 | 0.7587 | 0.0208 | 0.3030 | 49 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
acdg1214/a2c-PandaReachDense-v3
|
acdg1214
| 2023-09-04T22:01:09Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-04T21:55:50Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.15 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual `algo-env.zip` convention and is an assumption, not taken from this card):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename below is assumed from the common sb3 naming convention.
checkpoint = load_from_hub(repo_id="acdg1214/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0045
|
bigmorning
| 2023-09-04T21:59:45Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T21:59:38Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0045
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0045
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1121
- Train Accuracy: 0.0335
- Train Wermet: 0.0292
- Validation Loss: 0.7589
- Validation Accuracy: 0.0207
- Validation Wermet: 0.2647
- Epoch: 44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
| 1.3218 | 0.0249 | 0.3581 | 1.4682 | 0.0179 | 0.4510 | 25 |
| 1.1383 | 0.0260 | 0.3211 | 1.3465 | 0.0183 | 0.4226 | 26 |
| 0.9876 | 0.0270 | 0.2920 | 1.2323 | 0.0188 | 0.3966 | 27 |
| 0.8635 | 0.0278 | 0.2651 | 1.1482 | 0.0191 | 0.3749 | 28 |
| 0.7620 | 0.0284 | 0.2435 | 1.0816 | 0.0194 | 0.3565 | 29 |
| 0.6749 | 0.0290 | 0.2234 | 1.0187 | 0.0196 | 0.3433 | 30 |
| 0.5998 | 0.0295 | 0.2025 | 0.9761 | 0.0198 | 0.3319 | 31 |
| 0.5325 | 0.0300 | 0.1827 | 0.9326 | 0.0200 | 0.3213 | 32 |
| 0.4735 | 0.0305 | 0.1665 | 0.8942 | 0.0201 | 0.3110 | 33 |
| 0.4228 | 0.0308 | 0.1466 | 0.8735 | 0.0202 | 0.3026 | 34 |
| 0.3747 | 0.0312 | 0.1293 | 0.8408 | 0.0203 | 0.2931 | 35 |
| 0.3331 | 0.0316 | 0.1111 | 0.8253 | 0.0204 | 0.2891 | 36 |
| 0.2947 | 0.0319 | 0.0962 | 0.8084 | 0.0205 | 0.2849 | 37 |
| 0.2601 | 0.0322 | 0.0817 | 0.7906 | 0.0205 | 0.2783 | 38 |
| 0.2291 | 0.0324 | 0.0706 | 0.7876 | 0.0206 | 0.2755 | 39 |
| 0.2009 | 0.0327 | 0.0596 | 0.7723 | 0.0207 | 0.2712 | 40 |
| 0.1750 | 0.0329 | 0.0504 | 0.7629 | 0.0207 | 0.2692 | 41 |
| 0.1510 | 0.0331 | 0.0410 | 0.7650 | 0.0207 | 0.2684 | 42 |
| 0.1319 | 0.0333 | 0.0367 | 0.7533 | 0.0207 | 0.2655 | 43 |
| 0.1121 | 0.0335 | 0.0292 | 0.7589 | 0.0207 | 0.2647 | 44 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
rohanbalkondekar/rohan_dreambooth
|
rohanbalkondekar
| 2023-09-04T21:52:29Z | 2 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-09-04T21:52:28Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of rohan balkondekar
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
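A minimal inference sketch, assuming the AutoTrain run saved SDXL LoRA weights to this repository (if it saved a full pipeline instead, load the repo id directly with `DiffusionPipeline.from_pretrained`):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rohanbalkondekar/rohan_dreambooth")  # assumes LoRA weights in this repo
image = pipe("photo of rohan balkondekar", num_inference_steps=30).images[0]
image.save("dreambooth_sample.png")
```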
|
nbogdan/flant5-base-1ex-bridging-1epochs
|
nbogdan
| 2023-09-04T21:50:03Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-04T21:49:53Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-base-1ex-bridging-1epochs` for google/flan-t5-base
An [adapter](https://adapterhub.ml) for the `google/flan-t5-base` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-base")
adapter_name = model.load_adapter("nbogdan/flant5-base-1ex-bridging-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0040
|
bigmorning
| 2023-09-04T21:46:29Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T21:46:21Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0040
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0040
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2291
- Train Accuracy: 0.0324
- Train Wermet: 0.0706
- Validation Loss: 0.7876
- Validation Accuracy: 0.0206
- Validation Wermet: 0.2755
- Epoch: 39
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
| 1.3218 | 0.0249 | 0.3581 | 1.4682 | 0.0179 | 0.4510 | 25 |
| 1.1383 | 0.0260 | 0.3211 | 1.3465 | 0.0183 | 0.4226 | 26 |
| 0.9876 | 0.0270 | 0.2920 | 1.2323 | 0.0188 | 0.3966 | 27 |
| 0.8635 | 0.0278 | 0.2651 | 1.1482 | 0.0191 | 0.3749 | 28 |
| 0.7620 | 0.0284 | 0.2435 | 1.0816 | 0.0194 | 0.3565 | 29 |
| 0.6749 | 0.0290 | 0.2234 | 1.0187 | 0.0196 | 0.3433 | 30 |
| 0.5998 | 0.0295 | 0.2025 | 0.9761 | 0.0198 | 0.3319 | 31 |
| 0.5325 | 0.0300 | 0.1827 | 0.9326 | 0.0200 | 0.3213 | 32 |
| 0.4735 | 0.0305 | 0.1665 | 0.8942 | 0.0201 | 0.3110 | 33 |
| 0.4228 | 0.0308 | 0.1466 | 0.8735 | 0.0202 | 0.3026 | 34 |
| 0.3747 | 0.0312 | 0.1293 | 0.8408 | 0.0203 | 0.2931 | 35 |
| 0.3331 | 0.0316 | 0.1111 | 0.8253 | 0.0204 | 0.2891 | 36 |
| 0.2947 | 0.0319 | 0.0962 | 0.8084 | 0.0205 | 0.2849 | 37 |
| 0.2601 | 0.0322 | 0.0817 | 0.7906 | 0.0205 | 0.2783 | 38 |
| 0.2291 | 0.0324 | 0.0706 | 0.7876 | 0.0206 | 0.2755 | 39 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
active-learning/mnist_classifier
|
active-learning
| 2023-09-04T21:46:17Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2023-02-03T13:18:22Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
|
nbogdan/flant5-base-1ex-elaboration-1epochs
|
nbogdan
| 2023-09-04T21:34:54Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-04T21:34:46Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-base-1ex-elaboration-1epochs` for google/flan-t5-base
An [adapter](https://adapterhub.ml) for the `google/flan-t5-base` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-base")
adapter_name = model.load_adapter("nbogdan/flant5-base-1ex-elaboration-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
JanSt/gbert-base-finetuned-twitter_
|
JanSt
| 2023-09-04T21:31:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:deepset/gbert-base",
"base_model:finetune:deepset/gbert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-04T08:13:18Z |
---
license: mit
base_model: deepset/gbert-base
tags:
- generated_from_trainer
model-index:
- name: gbert-base-finetuned-twitter_
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gbert-base-finetuned-twitter_
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6651
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 192
- eval_batch_size: 192
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1933 | 1.0 | 4180 | 1.9612 |
| 2.0051 | 2.0 | 8360 | 1.8795 |
| 1.939 | 3.0 | 12540 | 1.8310 |
| 1.8928 | 4.0 | 16720 | 1.8013 |
| 1.8594 | 5.0 | 20900 | 1.7730 |
| 1.8336 | 6.0 | 25080 | 1.7702 |
| 1.8145 | 7.0 | 29260 | 1.7449 |
| 1.7963 | 8.0 | 33440 | 1.7277 |
| 1.7806 | 9.0 | 37620 | 1.7105 |
| 1.7682 | 10.0 | 41800 | 1.7061 |
| 1.7584 | 11.0 | 45980 | 1.7041 |
| 1.7454 | 12.0 | 50160 | 1.6899 |
| 1.7374 | 13.0 | 54340 | 1.6850 |
| 1.7295 | 14.0 | 58520 | 1.6856 |
| 1.7232 | 15.0 | 62700 | 1.6819 |
| 1.715 | 16.0 | 66880 | 1.6730 |
| 1.7101 | 17.0 | 71060 | 1.6723 |
| 1.7057 | 18.0 | 75240 | 1.6655 |
| 1.7038 | 19.0 | 79420 | 1.6617 |
| 1.702 | 20.0 | 83600 | 1.6625 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
facebook/regnet-x-160
|
facebook
| 2023-09-04T21:27:33Z | 402 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"regnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-18T15:27:57Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("zuppif/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
|
volvoDon/mr-golem
|
volvoDon
| 2023-09-04T21:26:39Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-03T23:15:47Z |
---
library_name: peft
---
## Training procedure
This is a fun causal LM that was trained on the full ~150 pages of the Necronomicon.
## Scope of Use
Absolutely just for fun. *Be advised: it was trained on occult text, so it might say offensive or confusing things.*
### Framework versions
- PEFT 0.5.0
|
volvoDon/petro-daemon
|
volvoDon
| 2023-09-04T21:21:25Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-04T20:11:04Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: volvoDon/petro-daemon
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# volvoDon/petro-daemon
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on a [DataSet of petrologic cross sections](https://huggingface.co/datasets/volvoDon/petrology-sections).
It achieves the following results on the evaluation set:
- Train Loss: 0.8890
- Validation Loss: 1.1803
- Train Accuracy: 0.6
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
Currently it is just a proof of concept and does a great job identifying olivine.
It is not yet ready for a production environment, but the results are promising; with an improved dataset I'm confident better results could be achieved.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 300, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.6519 | 1.7095 | 0.2 | 0 |
| 1.5905 | 1.6747 | 0.2 | 1 |
| 1.5690 | 1.6342 | 0.2 | 2 |
| 1.5170 | 1.5931 | 0.2 | 3 |
| 1.4764 | 1.5528 | 0.6 | 4 |
| 1.3835 | 1.5079 | 0.6 | 5 |
| 1.3420 | 1.4717 | 0.6 | 6 |
| 1.3171 | 1.4232 | 0.6 | 7 |
| 1.2897 | 1.3905 | 0.6 | 8 |
| 1.2702 | 1.3794 | 0.6 | 9 |
| 1.2023 | 1.3351 | 0.6 | 10 |
| 1.1480 | 1.3384 | 0.6 | 11 |
| 1.1434 | 1.3419 | 0.6 | 12 |
| 1.0499 | 1.3226 | 0.6 | 13 |
| 1.0672 | 1.2647 | 0.6 | 14 |
| 1.0526 | 1.1533 | 0.6 | 15 |
| 1.0184 | 1.1546 | 0.6 | 16 |
| 0.9505 | 1.2491 | 0.6 | 17 |
| 0.9578 | 1.2809 | 0.4 | 18 |
| 0.8890 | 1.1803 | 0.6 | 19 |
### Framework versions
- Transformers 4.32.1
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
actionpace/LLAMA2-13B-Holodeck-1
|
actionpace
| 2023-09-04T21:21:09Z | 7 | 0 | null |
[
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-09-04T20:46:13Z |
---
license: other
language:
- en
---
**Some of my own quants:**
* LLAMA2-13B-Holodeck-1_Q5_1_4K.gguf
* LLAMA2-13B-Holodeck-1_Q5_1_8K.gguf
**Source:** [KoboldAI](https://huggingface.co/KoboldAI)
**Source Model:** [LLAMA2-13B-Holodeck-1](https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1)
**Models utilizing KoboldAI/LLAMA2-13B-Holodeck-1**
- [The-Face-Of-Goonery/Huginn-v3-13b](https://huggingface.co/The-Face-Of-Goonery/Huginn-v3-13b) ([Ref](https://huggingface.co/actionpace/Huginn-v3-13b)) (Finetune, kaiokendev/SuperCOT-dataset)
|
nbogdan/flant5-base-1ex-paraphrasing-1epochs
|
nbogdan
| 2023-09-04T21:20:32Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-04T21:20:22Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-base-1ex-paraphrasing-1epochs` for google/flan-t5-base
An [adapter](https://adapterhub.ml) for the `google/flan-t5-base` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-base")
adapter_name = model.load_adapter("nbogdan/flant5-base-1ex-paraphrasing-1epochs", source="hf", set_active=True)
```
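Continuing from the snippet above, the adapter can then be used for generation. A minimal sketch (the prompt is illustrative and it assumes the adapter was saved together with a seq2seq LM head):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
inputs = tokenizer("Paraphrase this sentence: The movie was surprisingly good.", return_tensors="pt")
# Assumes the loaded adapter includes a seq2seq LM head, so generate() is available.
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```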
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
facebook/wav2vec2-large-it-voxpopuli
|
facebook
| 2023-09-04T21:15:34Z | 395 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"wav2vec2",
"pretraining",
"audio",
"automatic-speech-recognition",
"voxpopuli",
"it",
"arxiv:2101.00390",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: it
tags:
- audio
- automatic-speech-recognition
- voxpopuli
license: cc-by-nc-4.0
---
# Wav2Vec2-Large-VoxPopuli
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) large model pretrained on the Italian (`it`) unlabeled subset of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390).
**Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation
Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)*
**Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*
See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/)
# Fine-Tuning
Please refer to [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) on how to fine-tune this model on a specific language. Note that you should replace `"facebook/wav2vec2-large-xlsr-53"` with this checkpoint for fine-tuning.
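Because this checkpoint is pretraining-only (no CTC head or tokenizer), on its own it can only be used to extract speech representations. A minimal sketch, assuming standard 16 kHz feature-extractor settings (the repo may not ship its own preprocessor config):
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Default 16 kHz settings; these are assumptions, not values shipped with this repo.
feature_extractor = Wav2Vec2FeatureExtractor(sampling_rate=16000)
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-it-voxpopuli")

dummy_audio = torch.zeros(16000).numpy()  # one second of silence standing in for Italian speech
inputs = feature_extractor(dummy_audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (1, num_frames, 1024) for the large architecture
```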
|
facebook/convnext-base-224-22k-1k
|
facebook
| 2023-09-04T21:09:35Z | 653 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"convnext",
"image-classification",
"vision",
"dataset:imagenet-21k",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ConvNeXT (base-sized model)
ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
import torch
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-base-224-22k-1k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-224-22k-1k")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
author = {Zhuang Liu and
Hanzi Mao and
Chao{-}Yuan Wu and
Christoph Feichtenhofer and
Trevor Darrell and
Saining Xie},
title = {A ConvNet for the 2020s},
journal = {CoRR},
volume = {abs/2201.03545},
year = {2022},
url = {https://arxiv.org/abs/2201.03545},
eprinttype = {arXiv},
eprint = {2201.03545},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0025
|
bigmorning
| 2023-09-04T21:06:45Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T21:06:37Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0025
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0025
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5547
- Train Accuracy: 0.0235
- Train Wermet: 0.3996
- Validation Loss: 1.6356
- Validation Accuracy: 0.0172
- Validation Wermet: 0.4848
- Epoch: 24
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
| 2.5649 | 0.0182 | 0.5674 | 2.4490 | 0.0145 | 0.6383 | 20 |
| 2.3414 | 0.0193 | 0.5299 | 2.2785 | 0.0150 | 0.6183 | 21 |
| 2.0966 | 0.0206 | 0.4903 | 2.0460 | 0.0158 | 0.5649 | 22 |
| 1.8283 | 0.0220 | 0.4459 | 1.8369 | 0.0165 | 0.5306 | 23 |
| 1.5547 | 0.0235 | 0.3996 | 1.6356 | 0.0172 | 0.4848 | 24 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
nbogdan/flant5-base-1ex-overall-1epochs
|
nbogdan
| 2023-09-04T21:04:30Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-04T21:04:21Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-base-1ex-overall-1epochs` for google/flan-t5-base
An [adapter](https://adapterhub.ml) for the `google/flan-t5-base` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-base")
adapter_name = model.load_adapter("nbogdan/flant5-base-1ex-overall-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
jonc/ybelkada-opt-6.7b-lora
|
jonc
| 2023-09-04T20:59:50Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-04T20:59:47Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
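A minimal loading sketch, assuming (from the adapter name) that the base model is `facebook/opt-6.7b` and that it should be loaded in 8-bit as in the config above:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "facebook/opt-6.7b"  # assumed base model, inferred from the adapter name
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(base_model, "jonc/ybelkada-opt-6.7b-lora")
model.eval()
```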
### Framework versions
- PEFT 0.6.0.dev0
|
SandraDee/ppo-LunarLander-v2
|
SandraDee
| 2023-09-04T20:57:50Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-04T20:57:31Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.48 +/- 13.92
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; adjust it to the .zip actually stored in this repo.
checkpoint = load_from_hub("SandraDee/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
darthlordvictor/generative-bloom-marketing-002
|
darthlordvictor
| 2023-09-04T20:56:22Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-29T02:38:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0020
|
bigmorning
| 2023-09-04T20:53:30Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T20:53:22Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0020
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0020
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.7559
- Train Accuracy: 0.0173
- Train Wermet: 0.5940
- Validation Loss: 2.6337
- Validation Accuracy: 0.0139
- Validation Wermet: 0.6673
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
| 3.2229 | 0.0152 | 0.6781 | 3.0542 | 0.0125 | 0.7532 | 15 |
| 3.1334 | 0.0156 | 0.6614 | 2.9840 | 0.0127 | 0.7448 | 16 |
| 3.0313 | 0.0160 | 0.6425 | 2.9032 | 0.0130 | 0.7123 | 17 |
| 2.9122 | 0.0166 | 0.6202 | 2.7986 | 0.0134 | 0.6930 | 18 |
| 2.7559 | 0.0173 | 0.5940 | 2.6337 | 0.0139 | 0.6673 | 19 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0015
|
bigmorning
| 2023-09-04T20:40:15Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T20:40:08Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0015
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.3069
- Train Accuracy: 0.0148
- Train Wermet: 0.6961
- Validation Loss: 3.1102
- Validation Accuracy: 0.0124
- Validation Wermet: 0.7609
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
| 3.6023 | 0.0138 | 0.7454 | 3.2711 | 0.0117 | 0.8006 | 10 |
| 3.5261 | 0.0140 | 0.7348 | 3.2391 | 0.0119 | 0.8101 | 11 |
| 3.4534 | 0.0143 | 0.7212 | 3.2070 | 0.0120 | 0.7870 | 12 |
| 3.3814 | 0.0146 | 0.7080 | 3.1505 | 0.0122 | 0.7826 | 13 |
| 3.3069 | 0.0148 | 0.6961 | 3.1102 | 0.0124 | 0.7609 | 14 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
reginaboateng/BERT_pubmedqa_adapter_with_maybes_to_yes_updated
|
reginaboateng
| 2023-09-04T20:31:22Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"adapterhub:pubmedqa",
"dataset:pubmedqa",
"region:us"
] | null | 2023-09-04T20:31:19Z |
---
tags:
- bert
- adapter-transformers
- adapterhub:pubmedqa
datasets:
- pubmedqa
---
# Adapter `reginaboateng/BERT_pubmedqa_adapter_with_maybes_to_yes_updated` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pubmedqa](https://adapterhub.ml/explore/pubmedqa/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("reginaboateng/BERT_pubmedqa_adapter_with_maybes_to_yes_updated", source="hf", set_active=True)
```
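Continuing from the snippet above, a minimal classification sketch (the question/context pair is illustrative and it assumes the adapter was saved together with a classification head):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
question = "Does aspirin reduce the risk of heart attack?"
context = "Aspirin has been shown to reduce cardiovascular events in several trials."
inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # assumes the loaded adapter includes a classification head
print(logits.argmax(dim=-1).item())
```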
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
bigmorning/whisper_input_decoder_shift_r_labels_no_force__0010
|
bigmorning
| 2023-09-04T20:27:01Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T20:26:52Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_no_force__0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_no_force__0010
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.6757
- Train Accuracy: 0.0136
- Train Wermet: 0.7548
- Validation Loss: 3.3141
- Validation Accuracy: 0.0116
- Validation Wermet: 0.8400
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6348 | 0.0091 | 1.5865 | 4.2935 | 0.0093 | 0.9579 | 0 |
| 4.9212 | 0.0099 | 0.9054 | 4.1262 | 0.0097 | 0.9390 | 1 |
| 4.6819 | 0.0107 | 0.8319 | 3.9071 | 0.0103 | 0.8966 | 2 |
| 4.4443 | 0.0114 | 0.8310 | 3.7367 | 0.0106 | 0.8939 | 3 |
| 4.2479 | 0.0119 | 0.8226 | 3.6101 | 0.0109 | 0.8696 | 4 |
| 4.0911 | 0.0124 | 0.8103 | 3.5364 | 0.0110 | 0.8946 | 5 |
| 3.9590 | 0.0127 | 0.7913 | 3.4556 | 0.0113 | 0.8388 | 6 |
| 3.8513 | 0.0130 | 0.7794 | 3.4106 | 0.0114 | 0.8515 | 7 |
| 3.7607 | 0.0133 | 0.7657 | 3.3507 | 0.0115 | 0.8261 | 8 |
| 3.6757 | 0.0136 | 0.7548 | 3.3141 | 0.0116 | 0.8400 | 9 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
96abhishekarora/lt-kn-en_familyname-linkage
|
96abhishekarora
| 2023-09-04T20:22:33Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"linktransformer",
"sentence-similarity",
"tabular-classification",
"kn",
"en",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-09-01T02:58:31Z |
---
pipeline_tag: sentence-similarity
language:
- kn
- en
tags:
- linktransformer
- sentence-transformers
- sentence-similarity
- tabular-classification
---
# 96abhishekarora/lt-kn-en_familyname-linkage
This is a [LinkTransformer](https://github.com/dell-research-harvard/linktransformer) model. At its core it is a [sentence-transformers](https://www.SBERT.net) model; the LinkTransformer class simply wraps around it.
It is designed for quick and easy record linkage (entity-matching) through the LinkTransformer package. The tasks include clustering, deduplication, linking, aggregation and more.
Notwithstanding that, it can be used for any sentence similarity task within the sentence-transformers framework as well.
It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
Take a look at the documentation of [sentence-transformers](https://www.sbert.net/index.html) if you want to use this model for more than what we support in our applications.
This model was fine-tuned from bert-base-multilingual-cased and covers the languages kn and en.
It was trained on a dataset consisting of 12,105,132 people and their family IDs; 50% of the names are also transliterated.
It was trained for 6 epochs using other defaults that can be found in the repo's LinkTransformer config file - LT_training_config.json
## Usage (LinkTransformer)
Using this model becomes easy when you have [LinkTransformer](https://github.com/dell-research-harvard/linktransformer) installed:
```
pip install -U linktransformer
```
Then you can use the model like this:
```python
import linktransformer as lt
import pandas as pd
##Load the two dataframes that you want to link. For example, 2 dataframes with company names that are written differently
df1=pd.read_csv("data/df1.csv") ###This is the left dataframe with key CompanyName for instance
df2=pd.read_csv("data/df2.csv") ###This is the right dataframe with key CompanyName for instance
###Merge the two dataframes on the key column!
df_merged = lt.merge(df1, df2, on="CompanyName", how="inner")
##Done! The merged dataframe has a column called "score" that contains the similarity score between the two company names
```
## Training your own LinkTransformer model
Any Sentence Transformers model can be used as a backbone by simply adding a pooling layer. Any other transformer on HuggingFace can also be used by specifying the option `add_pooling_layer=True`.
The model was trained using SupCon loss.
Usage can be found in the package docs.
The training config can be found in the repo with the name LT_training_config.json
To replicate the training, you can download the file and specify the path in the config_path argument of the training function. You can also override the config by specifying the training_args argument.
Here is an example.
```python
from linktransformer import train_model

##Consider the example in the paper that has a dataset of Mexican products and their tariff codes from 1947 and 1948 and we want to train a model to link the two tariff codes.
saved_model_path = train_model(
model_path="hiiamsid/sentence_similarity_spanish_es",
dataset_path=dataset_path,
left_col_names=["description47"],
right_col_names=['description48'],
left_id_name=['tariffcode47'],
right_id_name=['tariffcode48'],
log_wandb=False,
config_path=LINKAGE_CONFIG_PATH,
training_args={"num_epochs": 1}
)
```
You can also use this package for deduplication (clusters a df on the supplied key column). Merging a fine class (like product) to a coarse class (like HS code) is also possible.
Read our paper and the documentation for more!
## Evaluation Results
<!--- Describe how your model was evaluated -->
You can evaluate the model using the [LinkTransformer](https://github.com/dell-research-harvard/linktransformer) package's inference functions.
We have provided a few datasets in the package for you to try out. We plan to host more datasets on Huggingface and our website (Coming soon) that you can take a look at.
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 186000 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.dataloader._InfiniteConstantSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`linktransformer.modified_sbert.losses.SupConLoss_wandb`
Parameters of the fit()-Method:
```
{
"epochs": 6,
"evaluation_steps": 18600,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-06
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1116000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
LinkTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
adyprat/Reinforce_cpv1
|
adyprat
| 2023-09-04T20:18:49Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-04T20:18:38Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_cpv1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
maysamalfiza/dummy-model
|
maysamalfiza
| 2023-09-04T20:05:27Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-09-04T19:48:38Z |
# Model explanation
Welcome to my page! This is a dummy model based on "camembert-base".
|
IlyaGusev/saiga2_70b_gguf
|
IlyaGusev
| 2023-09-04T19:53:14Z | 97 | 12 | null |
[
"gguf",
"conversational",
"ru",
"dataset:IlyaGusev/ru_turbo_saiga",
"dataset:IlyaGusev/ru_sharegpt_cleaned",
"dataset:IlyaGusev/oasst1_ru_main_branch",
"dataset:IlyaGusev/gpt_roleplay_realm",
"dataset:lksy/ru_instruct_gpt4",
"license:llama2",
"region:us"
] |
text-generation
| 2023-09-04T19:31:40Z |
---
datasets:
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/gpt_roleplay_realm
- lksy/ru_instruct_gpt4
language:
- ru
inference: false
pipeline_tag: conversational
license: llama2
---
Llama.cpp-compatible versions of the original [70B model](https://huggingface.co/IlyaGusev/saiga2_70b_lora).
* Download one of the versions, for example `ggml-model-q4_1.gguf`.
* Download [interact_llamacpp.py](https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py)
How to run:
```
sudo apt-get install git-lfs
pip install llama-cpp-python fire
python3 interact_llamacpp.py ggml-model-q4_1.gguf
```
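Alternatively, the file can be driven directly from Python with `llama-cpp-python`. A minimal sketch that skips the Saiga chat template for brevity (context size and sampling settings are assumptions):
```python
from llama_cpp import Llama

llm = Llama(model_path="ggml-model-q4_1.gguf", n_ctx=4096)
# Russian prompt: "Hello! Tell me about yourself."
output = llm("Привет! Расскажи о себе.", max_tokens=128)
print(output["choices"][0]["text"])
```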
System requirements:
* 45GB RAM for q4_1
|
venetis/distilbert-base-uncased-finetuned-3d-sentiment
|
venetis
| 2023-09-04T19:52:13Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T16:12:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-base-uncased-finetuned-3d-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-3d-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6641
- Accuracy: 0.7366
- Precision: 0.7377
- Recall: 0.7366
- F1: 0.7364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 12762
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.8078 | 1.0 | 3190 | 0.8133 | 0.6628 | 0.6885 | 0.6628 | 0.6607 |
| 0.6227 | 2.0 | 6380 | 0.7637 | 0.6855 | 0.7103 | 0.6855 | 0.6849 |
| 0.5431 | 3.0 | 9570 | 0.6889 | 0.7047 | 0.7201 | 0.7047 | 0.7017 |
| 0.4585 | 4.0 | 12760 | 0.6641 | 0.7366 | 0.7377 | 0.7366 | 0.7364 |
| 0.3455 | 5.0 | 15950 | 0.8322 | 0.7203 | 0.7323 | 0.7203 | 0.7187 |
| 0.223 | 6.0 | 19140 | 0.9541 | 0.7205 | 0.7316 | 0.7205 | 0.7204 |
| 0.145 | 7.0 | 22330 | 1.1726 | 0.7196 | 0.7305 | 0.7196 | 0.7200 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
Jana1994/wav2vec2-large-xls-r-300m-jana-colab
|
Jana1994
| 2023-09-04T19:51:58Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-31T08:26:49Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-jana-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: cy
split: test
args: cy
metrics:
- name: Wer
type: wer
value: 0.6497412901000345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-jana-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8913
- Wer: 0.6497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.6444 | 1.67 | 200 | 2.9379 | 1.0 |
| 2.7964 | 3.33 | 400 | 1.9912 | 0.9927 |
| 1.1945 | 5.0 | 600 | 0.9492 | 0.7889 |
| 0.6065 | 6.67 | 800 | 0.8534 | 0.7137 |
| 0.3859 | 8.33 | 1000 | 0.8933 | 0.6689 |
| 0.2724 | 10.0 | 1200 | 0.8913 | 0.6497 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dmatekenya/wav2vec2-large-xls-r-300m-chichewa
|
dmatekenya
| 2023-09-04T19:47:30Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T17:49:52Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-chichewa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-chichewa
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.9669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.2028 | 3.51 | 400 | inf | 0.9999 |
| 2.5353 | 7.02 | 800 | inf | 0.9743 |
| 1.8464 | 10.53 | 1200 | inf | 0.9777 |
| 1.6672 | 14.04 | 1600 | inf | 0.9669 |
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jorgeortizfuentes/chilean-spanish-incivility
|
jorgeortizfuentes
| 2023-09-04T19:42:39Z | 556 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"es",
"dataset:jorgeortizfuentes/toxicity_spanish_incivility_v3",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T19:35:55Z |
---
language:
- es
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- jorgeortizfuentes/toxicity_spanish_incivility_v3
metrics:
- f1
model-index:
- name: incivility-dv3-patana-chilean-spanish-bert-j63zilm4
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: jorgeortizfuentes/toxicity_spanish_incivility_v3
type: jorgeortizfuentes/toxicity_spanish_incivility_v3
split: validation
metrics:
- name: F1
type: f1
value: 0.9135014363230132
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# incivility-dv3-patana-chilean-spanish-bert-j63zilm4
This model is a fine-tuned version of [dccuchile/patana-chilean-spanish-bert](https://huggingface.co/dccuchile/patana-chilean-spanish-bert) on the jorgeortizfuentes/toxicity_spanish_incivility_v3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5672
- F1: 0.9135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 13
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1351 | 5.0 | 455 | 0.4608 | 0.9119 |
| 0.0114 | 10.0 | 910 | 0.5672 | 0.9135 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aegon-h/Llama2-22B-Daydreamer-v3-GPT
|
aegon-h
| 2023-09-04T19:41:39Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-09-04T19:31:53Z |
---
inference: false
license: llama2
model_creator: Nick Perez
model_link: https://huggingface.co/nkpz/llama2-22b-daydreamer-v3
model_name: Llama2 22B Daydreamer2 v3
model_type: llama
quantized_by: agonh
---
# Llama2 22B Daydreamer2 v3
- Model creator: [Nick Perez](https://huggingface.co/nkpz)
- Original model: [Llama2 22B Daydreamer2 v3](https://huggingface.co/nkpz/llama2-22b-daydreamer-v3)
## Description
This repo contains GPTQ model files for [Nick Perez's Llama2 22B Daydreamer2 v3](https://huggingface.co/nkpz/llama2-22b-daydreamer-v3).
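A minimal loading sketch, assuming a recent `transformers` with `optimum` and `auto-gptq` installed and that the checkpoint carries a GPTQ quantization config:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aegon-h/Llama2-22B-Daydreamer-v3-GPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Tell me a short story about a daydream.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```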
|
mrm8488/idefics-9b-ft-describe-diffusion-bf16-adapter
|
mrm8488
| 2023-09-04T19:39:01Z | 0 | 1 | null |
[
"generated_from_trainer",
"dataset:diffusiondb",
"base_model:HuggingFaceM4/idefics-9b",
"base_model:finetune:HuggingFaceM4/idefics-9b",
"license:other",
"region:us"
] | null | 2023-08-28T10:09:04Z |
---
license: other
base_model: HuggingFaceM4/idefics-9b
tags:
- generated_from_trainer
datasets:
- diffusiondb
model-index:
- name: idefics-9b-ft-describe-diffusion-bf16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# idefics-9b-ft-describe-diffusion-bf16
This model is a fine-tuned version of [HuggingFaceM4/idefics-9b](https://huggingface.co/HuggingFaceM4/idefics-9b) on the diffusiondb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0874 | 0.07 | 50 | 2.1257 |
| 2.0532 | 0.14 | 100 | 1.9973 |
| 1.9417 | 0.21 | 150 | 1.9246 |
| 1.8358 | 0.28 | 200 | 1.8735 |
| 1.8499 | 0.36 | 250 | 1.8305 |
| 1.7695 | 0.43 | 300 | 1.7770 |
| 1.7505 | 0.5 | 350 | 1.7454 |
| 1.713 | 0.57 | 400 | 1.7115 |
| 1.7352 | 0.64 | 450 | 1.6791 |
| 1.6689 | 0.71 | 500 | 1.6526 |
| 1.6183 | 0.78 | 550 | 1.6257 |
| 1.6118 | 0.85 | 600 | 1.6001 |
| 1.6095 | 0.92 | 650 | 1.5800 |
| 1.5598 | 1.0 | 700 | 1.5598 |
| 1.4785 | 1.07 | 750 | 1.5403 |
| 1.4999 | 1.14 | 800 | 1.5219 |
| 1.4589 | 1.21 | 850 | 1.5063 |
| 1.4559 | 1.28 | 900 | 1.4942 |
| 1.4332 | 1.35 | 950 | 1.4792 |
| 1.4859 | 1.42 | 1000 | 1.4658 |
| 1.3888 | 1.49 | 1050 | 1.4537 |
| 1.4032 | 1.56 | 1100 | 1.4445 |
| 1.3702 | 1.64 | 1150 | 1.4352 |
| 1.3625 | 1.71 | 1200 | 1.4276 |
| 1.4067 | 1.78 | 1250 | 1.4199 |
| 1.3829 | 1.85 | 1300 | 1.4149 |
| 1.4251 | 1.92 | 1350 | 1.4103 |
| 1.3619 | 1.99 | 1400 | 1.4081 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
aegon-h/Koala-13B-8K-GPT
|
aegon-h
| 2023-09-04T19:21:09Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-09-04T19:13:10Z |
---
inference: false
license: other
---
# Koala: A Dialogue Model for Academic Research
This repo contains the weights of the Koala 13B model produced at Berkeley. It is the result of combining the diffs from https://huggingface.co/young-geng/koala with the original Llama 13B model.
## License
The model weights are intended for academic research only, subject to the
[model License of LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md),
[Terms of Use of the data generated by OpenAI](https://openai.com/policies/terms-of-use),
and [Privacy Practices of ShareGPT](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb).
Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.
|
alexsherstinsky/llama-2-7b-hf-based-finetuned-using-ludwig-with-alpaca-for-code
|
alexsherstinsky
| 2023-09-04T19:15:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-04T16:27:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
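A minimal loading sketch that mirrors the 4-bit NF4 settings above, assuming (from the repo name) that the base model is `meta-llama/Llama-2-7b-hf`:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model, inferred from the repo name
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base_model, "alexsherstinsky/llama-2-7b-hf-based-finetuned-using-ludwig-with-alpaca-for-code")
```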
### Framework versions
- PEFT 0.5.0
|
onkarsus13/controlnet_stablediffusion_scenetextEraser
|
onkarsus13
| 2023-09-04T19:01:03Z | 36 | 0 |
diffusers
|
[
"diffusers",
"license:mit",
"diffusers:StableDiffusionControlNetInpaintPipeline",
"region:us"
] |
image-to-image
| 2023-08-17T05:36:34Z |
---
license: mit
---
This is the trained ControlNet-StableDiffusion model for the scene text eraser (Diff_SceneTextEraser).
We customize the ControlNet-StableDiffusion inpainting pipeline for this task.
Here is the training and inference code for [Diff_SceneTextEraser](https://github.com/Onkarsus13/Diff_SceneTextEraser)
For direct inference:
Step 1: Clone the GitHub repo to get the customized ControlNet-StableDiffusion-inpaint pipeline implementation
```
git clone https://github.com/Onkarsus13/Diff_SceneTextEraser
```
Step 2: Go into the repository and install it along with its dependencies
```
cd Diff_SceneTextEraser
pip install -e ".[torch]"
pip install -e .[all,dev,notebooks]
```
Step 3: Run `python test_eraser.py`, or run the code given below
```python
from diffusers import (
UniPCMultistepScheduler,
DDIMScheduler,
EulerAncestralDiscreteScheduler,
StableDiffusionControlNetSceneTextErasingPipeline,
)
import torch
import numpy as np
import cv2
from PIL import Image, ImageDraw
import math
import os
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_path = "onkarsus13/controlnet_stablediffusion_scenetextEraser"
pipe = StableDiffusionControlNetSceneTextErasingPipeline.from_pretrained(model_path)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to(device)
# pipe.enable_xformers_memory_efficient_attention()
pipe.enable_model_cpu_offload()
generator = torch.Generator(device).manual_seed(1)
image = Image.open("<path to scene text image>").resize((512, 512))
mask_image = Image.open('<path to the corresponding mask image>').resize((512, 512))
image = pipe(
image,
mask_image,
[mask_image],
num_inference_steps=20,
generator=generator,
controlnet_conditioning_scale=1.0,
guidance_scale=1.0
).images[0]
image.save('test1.png')
```
|
nbogdan/flant5-small-2ex-elaboration-1epochs
|
nbogdan
| 2023-09-04T18:53:39Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-04T18:53:31Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-small-2ex-elaboration-1epochs` for google/flan-t5-small
An [adapter](https://adapterhub.ml) for the `google/flan-t5-small` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-small")
adapter_name = model.load_adapter("nbogdan/flant5-small-2ex-elaboration-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|