| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-11 12:33:28) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 555 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-11 12:33:10) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
ad019el/tamasheq-99-final
|
ad019el
| 2023-08-26T13:34:11Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:ad019el/ar_data",
"dataset:heisenberg1337/tamasheq_data",
"base_model:ad019el/tamasheq-99-final",
"base_model:finetune:ad019el/tamasheq-99-final",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-26T13:24:37Z |
---
base_model: ad019el/tamasheq-99-final
datasets:
- ad019el/ar_data
- heisenberg1337/tamasheq_data
metrics:
- cer
- wer
tags:
- generated_from_trainer
model-index:
- name: tamasheq-99-final
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tamasheq-99-final
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-arabic](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-arabic) on the None dataset.
It achieves the following results on the evaluation set:
- Cer: 16.2959
- Wer: 55.5334
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
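As a rough, hedged sketch (the training script itself is not included in this card), the values listed above would map onto `transformers.TrainingArguments` roughly as follows; anything not listed above (such as `output_dir`) is an assumption:
```python
from transformers import TrainingArguments

# Sketch of the configuration listed above; values not in the list are assumptions.
training_args = TrainingArguments(
    output_dir="tamasheq-99-final",   # assumed
    learning_rate=3e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,    # gives a total train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults
)
```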
### Training results
|step |tamasheq_wer|arabic_wer|tamasheq_cer|arabic_cer|
|------------|------------|----------|------------|----------|
|Before train|104.985 |23.1305 |67.4458 |7.30972 |
|step 300 |99.5513 |23.0544 |49.7078 |7.1043 |
|step 600 |95.1147 |22.5267 |41.4515 |6.0098 |
|step 900 |93.5194 |21.0404 |38.0867 |5.52939 |
|step 1200 |92.5723 |20.6224 |37.0877 |5.39751 |
|step 1500 |92.3009 |20.9238 |36.9915 |5.6718 |
|step 1800 |92.0738 |21.2699 |36.3713 |6.08877 |
|step 2100 |88.7338 |21.9693 |33.3648 |5.9156 |
|step 2400 |87.1884 |21.1333 |31.8379 |5.52939 |
|step 2700 |88.299 |21.0705 |31.4599 |5.5078 |
|step 3000 |87.7866 |21.5021 |30.9039 |6.29239 |
|step 3300 |84.2971 |21.666 |29.7455 |5.97212 |
|step 3600 |83.8983 |21.5732 |28.6145 |6.04748 |
|step 3900 |81.8544 |22.1087 |27.9359 |5.99096 |
|step 4200 |82.9741 |23.392 |27.4288 |6.4013 |
|step 4500 |83.8485 |24.2452 |27.0575 |6.79164 |
|step 4800 |81.6052 |22.666 |26.6918 |6.09457 |
|step 5100 |77.9661 |22.4803 |25.1084 |6.0098 |
|step 5400 |77.2183 |21.83 |24.656 |5.9156 |
|step 5700 |76.672 |22.1078 |24.2606 |6.0802 |
|step 6000 |76.2712 |22.7589 |23.9236 |6.41485 |
|step 6300 |75.7228 |23.8737 |23.7135 |6.78222 |
|step 6600 |71.2363 |23.177 |22.196 |6.39601 |
|step 6900 |69.8405 |22.7125 |21.574 |6.21703 |
|step 7200 |72.9452 |23.6679 |21.0775 |6.6918 |
|step 7500 |75.9222 |24.7097 |20.8999 |7.17784 |
|step 7800 |67.4975 |23.1305 |20.6786 |6.65034 |
|step 8100 |65.2542 |23.1305 |19.7361 |6.49962 |
|step 8400 |61.7149 |22.3874 |18.426 |6.12283 |
|step 8700 |63.8046 |23.6679 |18.2166 |6.2679 |
|step 9000 |64.7059 |24.1059 |17.9952 |6.66918 |
|step 9300 |67.5474 |24.7097 |17.6078 |7.16843 |
|step 9600 |57.1286 |23.3163 |17.2385 |6.66918 |
|step 9900 |58.2752 |22.8054 |17.1065 |6.4431 |
|step 10200 |57.7767 |24.2917 |16.848 |6.68802 |
|step 10500 |55.2841 |25.1277 |16.5033 |7.12133 |
|step 10800 |52.5424 |23.8272 |15.9566 |6.80106 |
|step 11100 |55.5334 |24.6168 |16.2959 |6.94235 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
honzatoegel/Llama-2-7b-chat-hf-gramma-check-de-en
|
honzatoegel
| 2023-08-26T13:32:53Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-17T14:07:24Z |
---
library_name: peft
---
## Model 'quality'
As you can see from the example below, the model almost correctly respects the learned format, but the explanations of the grammatical corrections are wrong; it lacks a sense of correct German grammar.
### Input
'### User: Check the gramma of the following sentence and list all error and relevant corrections. Sentence:Mir es geht gut, danke#### AI:Correct version of the sentence:
### Output
"Mir geht es gut, danke.":
Repairs:
- Instead of "es geht" it should be "geht" - the verb "gehen" is in the third person singular.
- Instead of "Mir es" it should be "Mir geht" - the verb "gehen" is in the third person singular.#### End of the list of corrections.
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
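For reference, the logged config above corresponds roughly to the following `BitsAndBytesConfig`; this is a sketch, not the original training script:
```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```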
### Framework versions
- PEFT 0.4.0
|
zarakiquemparte/zarafusionex-1.1-l2-7b-GGML
|
zarakiquemparte
| 2023-08-26T13:31:39Z | 0 | 0 | null |
[
"llama2",
"license:other",
"region:us"
] | null | 2023-08-25T00:19:49Z |
---
license: other
tags:
- llama2
---
Quantized GGML of [Zarafusionex 1.1 L2 7b](https://huggingface.co/zarakiquemparte/zarafusionex-1.1-l2-7b)
If you need other quantized models use @TheBloke:
- [GGML](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGML)
- [GGUF](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF)
- [GPTQ](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GPTQ)
|
zarakiquemparte/zarafusionex-1.1-l2-7b
|
zarakiquemparte
| 2023-08-26T13:30:28Z | 1,477 | 7 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-25T00:19:12Z |
---
license: other
tags:
- llama2
---
# Model Card: Zarafusionex 1.1 L2 7b
This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) (53%) as a base, merged with [Stable Beluga 7b](https://huggingface.co/stabilityai/StableBeluga-7B) (47%); the result of this merge was then merged with the [LimaRP Llama2 7B LoRA, version of 07/23/2023](https://huggingface.co/lemonilia/limarp-llama2).
The merge of the models (Hermes and Stable Beluga) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py).
The merge of the LoRA with the model was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py).
Quantized Model by @TheBloke:
- [GGML](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGML)
- [GGUF](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF)
- [GPTQ](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GPTQ)
Merge illustration:

## Usage:
Since this is a merge between Nous Hermes, Stable Beluga and LimaRP, the following instruction formats should work:
Alpaca 2:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
LimaRP instruction format:
```
<<SYSTEM>>
<character card and system prompt>
<<USER>>
<prompt>
<<AIBOT>>
<leave a newline blank for model to respond>
```
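As an illustrative sketch (not part of the original card), the Alpaca 2 format above can be used with a plain `transformers` text-generation pipeline; the device settings here are assumptions:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="zarakiquemparte/zarafusionex-1.1-l2-7b",
    device_map="auto",  # assumption; adjust to your hardware
)

prompt = (
    "### Instruction:\n"
    "Write a short haiku about the sea.\n"
    "\n"
    "### Response:\n"
)
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```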
## Bias, Risks, and Limitations
This model is not intended for supplying factual information or advice in any form.
## Training Details
This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
|
Andrei-Alex/Fine-Tune-Adapters
|
Andrei-Alex
| 2023-08-26T13:20:37Z | 13 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:quantized:meta-llama/Llama-2-7b-chat-hf",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-08-22T13:26:47Z |
---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: Fine-Tune-Adapters
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine-Tune-Adapters
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nabos/falcon-7b-finetune
|
nabos
| 2023-08-26T13:17:51Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:finetune:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2023-08-26T12:22:29Z |
---
base_model: ybelkada/falcon-7b-sharded-bf16
tags:
- generated_from_trainer
model-index:
- name: falcon-7b-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-finetune
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 320
### Training results
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Onno/hotels_classifier
|
Onno
| 2023-08-26T13:13:07Z | 64 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-14T15:11:47Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: Onno/hotels_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Onno/hotels_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4492
- Validation Loss: 0.5853
- Train Accuracy: 0.6548
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 5025, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
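The optimizer dictionary above can be reconstructed roughly as follows; this is a sketch, since the original Keras training code is not part of this card:
```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Reconstruction of the logged AdamWeightDecay + PolynomialDecay config above
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-05,
    decay_steps=5025,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```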
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6757 | 0.6910 | 0.5119 | 0 |
| 0.6569 | 0.6739 | 0.5357 | 1 |
| 0.6395 | 0.6663 | 0.5357 | 2 |
| 0.6161 | 0.6465 | 0.6071 | 3 |
| 0.5919 | 0.6299 | 0.6548 | 4 |
| 0.5801 | 0.6173 | 0.6429 | 5 |
| 0.5518 | 0.6039 | 0.6310 | 6 |
| 0.5414 | 0.6205 | 0.6905 | 7 |
| 0.5181 | 0.6138 | 0.6548 | 8 |
| 0.4902 | 0.6300 | 0.6667 | 9 |
| 0.4824 | 0.6672 | 0.6667 | 10 |
| 0.4493 | 0.6038 | 0.6071 | 11 |
| 0.4287 | 0.6329 | 0.6667 | 12 |
| 0.4668 | 0.6371 | 0.6548 | 13 |
| 0.4492 | 0.5853 | 0.6548 | 14 |
### Framework versions
- Transformers 4.32.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ad019el/tamasheq-99-new-data
|
ad019el
| 2023-08-26T12:52:36Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:ad019el/tamasheq-99-final",
"base_model:finetune:ad019el/tamasheq-99-final",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-26T02:00:01Z |
---
base_model: ad019el/tamasheq-99-final
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: tamasheq-99-new-data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tamasheq-99-new-data
This model is a fine-tuned version of [ad019el/tamasheq-99-final](https://huggingface.co/ad019el/tamasheq-99-final) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4502
- Wer: 0.5910
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.4543 | 9.8 | 500 | 0.8448 | 0.7354 |
| 0.3588 | 19.61 | 1000 | 0.4527 | 0.6020 |
| 0.2012 | 29.41 | 1500 | 0.4490 | 0.5950 |
| 0.1739 | 39.22 | 2000 | 0.4547 | 0.5950 |
| 0.1634 | 49.02 | 2500 | 0.4502 | 0.5910 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dt-and-vanilla-ardt/ardt-vanilla-arrl_sgld_train_walker2d_high-2608_1258-99
|
dt-and-vanilla-ardt
| 2023-08-26T12:50:17Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T11:59:38Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-arrl_sgld_train_walker2d_high-2608_1258-99
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-arrl_sgld_train_walker2d_high-2608_1258-99
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
FredericProtat/ppo-PyramidsTraining
|
FredericProtat
| 2023-08-26T12:49:50Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-23T13:16:16Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: FredericProtat/ppo-PyramidsTraining
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DigitalUmuganda/quantized_finetuned_edu_en_kin
|
DigitalUmuganda
| 2023-08-26T12:29:40Z | 2 | 0 |
transformers
|
[
"transformers",
"translation",
"education",
"rw",
"en",
"license:cc",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-25T10:50:21Z |
---
license: cc
language:
- rw
- en
metrics:
- bleu
pipeline_tag: translation
tags:
- translation
- education
---
|
lighttransport/japanese-scoring-model
|
lighttransport
| 2023-08-26T12:22:00Z | 0 | 1 | null |
[
"scoring",
"ja",
"license:odc-by",
"region:us"
] | null | 2023-08-09T09:46:09Z |
---
language:
- ja
tags:
- scoring
license: odc-by
---
## Japanese quality scoring model
Currently only KenLM models are provided.
## KenLM model
- kenlm_model-wiki-nfkc-char.bin
Trained at the character level on the Wikipedia dataset after NFKC normalization.
- kenlm_model-wiki-nfkc-wakachi.bin
Trained on the Wikipedia dataset after NFKC normalization, word-segmented with Fugashi.
It is about 9 GB.
### Usage examples
Character-level model.
If needed, NFKC-normalize the input text beforehand, e.g. with `unicodedata.normalize`.
```py
import kenlm
import os

MODEL_BIN = 'kenlm_model-wiki-nfkc-char.bin'

if __name__ == '__main__':
    if not os.path.exists(MODEL_BIN):
        raise Exception("model file not found: {}".format(MODEL_BIN))

    model = kenlm.LanguageModel(MODEL_BIN)

    for txt in [
        "脱字が存在する文章です。",
        "脱字が存在する文章す。",
        '東京はッ晴れ。',
        '東京は元気です。',
        '吾輩は猫である。 名前はまだない。',
        '吾輩は猫である。 名前はまだな。',
        '東京は晴れ',
        '東京は晴れ。'
    ]:
        sentence = " ".join(txt.strip())
        prob = model.score(sentence, bos=True, eos=True)
        perplexity = model.perplexity(sentence)
        print(perplexity, prob, txt)
```
```
43.35517516360913 -21.281532287597656 脱字が存在する文章です。
97.87160125641132 -23.887880325317383 脱字が存在する文章す。
436.3376833313477 -21.118581771850586 東京はッ晴れ。
28.211570751481222 -13.053845405578613 東京は元気です。
10.25990652099858 -17.189437866210938 吾輩は猫である。 名前はまだない。
18.742658903324944 -20.365299224853516 吾輩は猫である。 名前はまだな。
1707.9430028946922 -19.394840240478516 東京は晴れ
62.91522904283418 -12.591290473937988 東京は晴れ。
```
Word-segmented model. For the segmentation step, a tokenizer such as SudachiPy may also be used.
If needed, NFKC-normalize the input text beforehand, e.g. with `unicodedata.normalize`.
```py
import kenlm
import os
from fugashi import Tagger

MODEL_BIN = 'kenlm_model-wiki-nfkc-wakachi.bin'

tagger = Tagger('-Owakati')

if __name__ == '__main__':
    if not os.path.exists(MODEL_BIN):
        raise Exception("model file not found: {}".format(MODEL_BIN))

    model = kenlm.LanguageModel(MODEL_BIN)

    # Ideally, scores are computed per sentence (split on sentence-final punctuation)
    for txt in [
        "脱字が存在する文章です。",
        "脱字が存在する文章す。",
        '東京はッ晴れ。',
        '東京は元気です。',
        '吾輩は猫である。 名前はまだない。',
        '吾輩は猫である。 名前はまだな。',
        '東京は晴れ',
        '東京は晴れ。'
    ]:
        sentence = tagger.parse(txt.strip())
        prob = model.score(sentence, bos=True, eos=True)
        perplexity = model.perplexity(sentence)
        print(perplexity, prob, txt)
```
```
799.5157517342569 -23.22261619567871 脱字が存在する文章です。
1427.360337285063 -25.236268997192383 脱字が存在する文章す。
3103.9820393600435 -20.951515197753906 東京はッ晴れ。
186.32902872137998 -13.621683120727539 東京は元気です。
25.350235809904472 -16.8477840423584 吾輩は猫である。 名前はまだない。
113.43313945517427 -24.656879425048828 吾輩は猫である。 名前はまだな。
17985.3170652363 -17.019672393798828 東京は晴れ
354.6946680891273 -12.749273300170898 東京は晴れ。
```
## License
odc-by
|
kejolong/newol
|
kejolong
| 2023-08-26T12:19:24Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-26T12:08:13Z |
---
license: creativeml-openrail-m
---
|
bigmorning/whisper_char_cv12_pad_lob100_low__0090
|
bigmorning
| 2023-08-26T12:09:58Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-26T12:09:50Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_char_cv12_pad_lob100_low__0090
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_char_cv12_pad_lob100_low__0090
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0004
- Train Accuracy: 0.1115
- Train Wermet: 3.3972
- Validation Loss: 0.5582
- Validation Accuracy: 0.0640
- Validation Wermet: 8.5953
- Epoch: 89
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.3330 | 0.0999 | 1.7359 | 0.3779 | 0.0615 | 4.7471 | 0 |
| 0.3093 | 0.1007 | 2.0563 | 0.3652 | 0.0618 | 7.2181 | 1 |
| 0.2869 | 0.1015 | 2.0654 | 0.3539 | 0.0620 | 8.6857 | 2 |
| 0.2672 | 0.1022 | 2.1925 | 0.3443 | 0.0623 | 8.0906 | 3 |
| 0.2488 | 0.1028 | 2.3286 | 0.3305 | 0.0626 | 9.1756 | 4 |
| 0.2316 | 0.1034 | 2.4212 | 0.3300 | 0.0626 | 8.1427 | 5 |
| 0.2163 | 0.1039 | 2.5012 | 0.3183 | 0.0629 | 8.3043 | 6 |
| 0.2018 | 0.1045 | 2.7267 | 0.3109 | 0.0631 | 9.5329 | 7 |
| 0.1878 | 0.1050 | 2.7034 | 0.3053 | 0.0632 | 7.9014 | 8 |
| 0.1749 | 0.1054 | 2.8719 | 0.3063 | 0.0632 | 9.0257 | 9 |
| 0.1628 | 0.1058 | 2.8764 | 0.3033 | 0.0634 | 9.1336 | 10 |
| 0.1510 | 0.1063 | 2.8441 | 0.3046 | 0.0634 | 8.6064 | 11 |
| 0.1391 | 0.1067 | 2.9377 | 0.3030 | 0.0635 | 9.1326 | 12 |
| 0.1280 | 0.1071 | 2.9433 | 0.3025 | 0.0636 | 9.4533 | 13 |
| 0.1182 | 0.1075 | 3.1399 | 0.3076 | 0.0636 | 9.9836 | 14 |
| 0.1086 | 0.1078 | 3.2411 | 0.3096 | 0.0636 | 8.8470 | 15 |
| 0.0983 | 0.1082 | 3.2622 | 0.3125 | 0.0636 | 9.1506 | 16 |
| 0.0889 | 0.1086 | 3.3368 | 0.3184 | 0.0636 | 8.9635 | 17 |
| 0.0803 | 0.1089 | 3.2742 | 0.3204 | 0.0637 | 9.3550 | 18 |
| 0.0720 | 0.1092 | 3.4052 | 0.3258 | 0.0637 | 10.1082 | 19 |
| 0.0637 | 0.1096 | 3.4287 | 0.3342 | 0.0637 | 10.3977 | 20 |
| 0.0566 | 0.1098 | 3.4708 | 0.3411 | 0.0636 | 10.6479 | 21 |
| 0.0498 | 0.1101 | 3.4462 | 0.3463 | 0.0637 | 10.1602 | 22 |
| 0.0429 | 0.1104 | 3.4056 | 0.3588 | 0.0636 | 9.7172 | 23 |
| 0.0374 | 0.1106 | 3.4477 | 0.3656 | 0.0636 | 9.4476 | 24 |
| 0.0325 | 0.1108 | 3.4474 | 0.3712 | 0.0637 | 9.6926 | 25 |
| 0.0279 | 0.1109 | 3.4263 | 0.3836 | 0.0636 | 10.0768 | 26 |
| 0.0233 | 0.1111 | 3.4779 | 0.3873 | 0.0637 | 9.8123 | 27 |
| 0.0196 | 0.1112 | 3.5329 | 0.4015 | 0.0636 | 10.0477 | 28 |
| 0.0160 | 0.1113 | 3.5049 | 0.4097 | 0.0636 | 10.4027 | 29 |
| 0.0139 | 0.1114 | 3.6185 | 0.4201 | 0.0636 | 10.9904 | 30 |
| 0.0112 | 0.1114 | 3.5812 | 0.4300 | 0.0636 | 10.4501 | 31 |
| 0.0096 | 0.1115 | 3.7493 | 0.4409 | 0.0636 | 10.3964 | 32 |
| 0.0089 | 0.1115 | 3.6912 | 0.4499 | 0.0636 | 10.8345 | 33 |
| 0.0082 | 0.1115 | 3.7577 | 0.4583 | 0.0636 | 10.2883 | 34 |
| 0.0090 | 0.1114 | 3.8468 | 0.4755 | 0.0635 | 11.8086 | 35 |
| 0.0168 | 0.1111 | 3.6340 | 0.4592 | 0.0636 | 10.6373 | 36 |
| 0.0072 | 0.1115 | 3.8163 | 0.4644 | 0.0637 | 10.2448 | 37 |
| 0.0040 | 0.1115 | 3.8376 | 0.4728 | 0.0637 | 10.9074 | 38 |
| 0.0029 | 0.1115 | 3.8274 | 0.4814 | 0.0637 | 10.5440 | 39 |
| 0.0025 | 0.1115 | 3.8022 | 0.4891 | 0.0637 | 10.8606 | 40 |
| 0.0021 | 0.1115 | 3.8940 | 0.4937 | 0.0637 | 10.9388 | 41 |
| 0.0018 | 0.1115 | 3.8026 | 0.5030 | 0.0637 | 10.6511 | 42 |
| 0.0014 | 0.1115 | 3.8260 | 0.5092 | 0.0637 | 10.5743 | 43 |
| 0.0173 | 0.1110 | 3.6223 | 0.5066 | 0.0635 | 9.9370 | 44 |
| 0.0073 | 0.1114 | 3.6868 | 0.4972 | 0.0637 | 10.6775 | 45 |
| 0.0027 | 0.1115 | 3.6742 | 0.5025 | 0.0638 | 10.3476 | 46 |
| 0.0016 | 0.1115 | 3.7677 | 0.5078 | 0.0638 | 10.2277 | 47 |
| 0.0013 | 0.1115 | 3.7721 | 0.5131 | 0.0638 | 10.4473 | 48 |
| 0.0011 | 0.1115 | 3.8394 | 0.5189 | 0.0638 | 10.4344 | 49 |
| 0.0009 | 0.1116 | 3.8666 | 0.5245 | 0.0638 | 10.4933 | 50 |
| 0.0008 | 0.1116 | 3.8432 | 0.5307 | 0.0638 | 10.5118 | 51 |
| 0.0008 | 0.1115 | 3.8808 | 0.5391 | 0.0637 | 10.7086 | 52 |
| 0.0207 | 0.1108 | 3.8324 | 0.5204 | 0.0636 | 9.3724 | 53 |
| 0.0074 | 0.1113 | 3.4605 | 0.5254 | 0.0637 | 10.1335 | 54 |
| 0.0023 | 0.1115 | 3.6304 | 0.5164 | 0.0639 | 10.2554 | 55 |
| 0.0012 | 0.1115 | 3.7309 | 0.5202 | 0.0639 | 10.3892 | 56 |
| 0.0009 | 0.1115 | 3.6945 | 0.5260 | 0.0639 | 10.0808 | 57 |
| 0.0007 | 0.1116 | 3.6804 | 0.5308 | 0.0639 | 10.2385 | 58 |
| 0.0006 | 0.1116 | 3.6696 | 0.5350 | 0.0639 | 10.1248 | 59 |
| 0.0005 | 0.1116 | 3.7425 | 0.5394 | 0.0639 | 10.1711 | 60 |
| 0.0005 | 0.1116 | 3.7317 | 0.5442 | 0.0639 | 10.1407 | 61 |
| 0.0004 | 0.1116 | 3.7010 | 0.5490 | 0.0639 | 10.0544 | 62 |
| 0.0004 | 0.1116 | 3.6921 | 0.5546 | 0.0639 | 10.1746 | 63 |
| 0.0003 | 0.1116 | 3.7494 | 0.5598 | 0.0639 | 10.0562 | 64 |
| 0.0025 | 0.1115 | 3.6924 | 0.6395 | 0.0628 | 8.8622 | 65 |
| 0.0189 | 0.1109 | 3.7101 | 0.5363 | 0.0638 | 11.1245 | 66 |
| 0.0035 | 0.1115 | 3.6989 | 0.5347 | 0.0639 | 11.3329 | 67 |
| 0.0012 | 0.1115 | 3.6723 | 0.5407 | 0.0639 | 11.2559 | 68 |
| 0.0007 | 0.1115 | 3.6834 | 0.5429 | 0.0639 | 11.0248 | 69 |
| 0.0006 | 0.1115 | 3.6848 | 0.5459 | 0.0639 | 10.8372 | 70 |
| 0.0005 | 0.1115 | 3.6407 | 0.5501 | 0.0639 | 10.9252 | 71 |
| 0.0005 | 0.1115 | 3.7172 | 0.5565 | 0.0639 | 10.6965 | 72 |
| 0.0123 | 0.1112 | 3.5604 | 0.5734 | 0.0635 | 10.3309 | 73 |
| 0.0075 | 0.1113 | 3.5938 | 0.5416 | 0.0639 | 10.3651 | 74 |
| 0.0015 | 0.1115 | 3.4921 | 0.5406 | 0.0640 | 10.1754 | 75 |
| 0.0007 | 0.1115 | 3.4911 | 0.5445 | 0.0640 | 10.0699 | 76 |
| 0.0004 | 0.1116 | 3.4728 | 0.5477 | 0.0640 | 10.1247 | 77 |
| 0.0004 | 0.1116 | 3.4452 | 0.5517 | 0.0640 | 9.6791 | 78 |
| 0.0003 | 0.1116 | 3.4331 | 0.5558 | 0.0640 | 9.7928 | 79 |
| 0.0003 | 0.1116 | 3.4313 | 0.5595 | 0.0640 | 9.6406 | 80 |
| 0.0003 | 0.1116 | 3.4541 | 0.5627 | 0.0640 | 9.7750 | 81 |
| 0.0002 | 0.1116 | 3.4371 | 0.5666 | 0.0640 | 9.5143 | 82 |
| 0.0002 | 0.1116 | 3.4361 | 0.5705 | 0.0640 | 9.8916 | 83 |
| 0.0002 | 0.1116 | 3.4777 | 0.5732 | 0.0640 | 9.6047 | 84 |
| 0.0153 | 0.1110 | 3.6428 | 0.5509 | 0.0638 | 8.4998 | 85 |
| 0.0038 | 0.1115 | 3.4999 | 0.5538 | 0.0639 | 9.5196 | 86 |
| 0.0015 | 0.1115 | 3.5174 | 0.5506 | 0.0640 | 9.0468 | 87 |
| 0.0006 | 0.1115 | 3.5053 | 0.5561 | 0.0640 | 8.6693 | 88 |
| 0.0004 | 0.1115 | 3.3972 | 0.5582 | 0.0640 | 8.5953 | 89 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
JoyboyXoXo/Taxi-V3
|
JoyboyXoXo
| 2023-08-26T12:09:32Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-26T12:09:30Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="JoyboyXoXo/Taxi-V3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
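The `load_from_hub` helper is not defined in this card; a minimal sketch of what it could look like, assuming the model dict was uploaded as a pickle file (as in the usual deep RL course workflow):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a saved model dict (Q-table, env_id, ...) from the Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```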
|
Dmitriy/whisper-small-hi
|
Dmitriy
| 2023-08-26T12:03:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-01T12:55:33Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
dt-and-vanilla-ardt/ardt-vanilla-arrl_sgld_train_walker2d_high-2608_1206-66
|
dt-and-vanilla-ardt
| 2023-08-26T11:57:56Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T11:07:43Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-arrl_sgld_train_walker2d_high-2608_1206-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-arrl_sgld_train_walker2d_high-2608_1206-66
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Ajohe/vit-base-patch16-224-in21k-finetuned-lora-food101
|
Ajohe
| 2023-08-26T11:57:12Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"region:us"
] | null | 2023-08-26T11:45:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned-lora-food101
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-lora-food101
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0403
- Accuracy: 0.9937
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5326 | 0.99 | 44 | 0.1454 | 0.9716 |
| 0.4211 | 2.0 | 89 | 0.0694 | 0.9811 |
| 0.3062 | 2.99 | 133 | 0.0403 | 0.9937 |
| 0.2785 | 4.0 | 178 | 0.0374 | 0.9937 |
| 0.206 | 4.94 | 220 | 0.0336 | 0.9937 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
LawChat-tw/llama2-SFT
|
LawChat-tw
| 2023-08-26T11:54:55Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-26T11:53:34Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
LawChat-tw/llama2-PT
|
LawChat-tw
| 2023-08-26T11:53:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-26T11:50:59Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
### Framework versions
- PEFT 0.5.0
|
bigmorning/whisper_char_cv12_pad_lob100_low__0080
|
bigmorning
| 2023-08-26T11:43:37Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-26T11:43:29Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_char_cv12_pad_lob100_low__0080
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_char_cv12_pad_lob100_low__0080
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0003
- Train Accuracy: 0.1116
- Train Wermet: 3.4331
- Validation Loss: 0.5558
- Validation Accuracy: 0.0640
- Validation Wermet: 9.7928
- Epoch: 79
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.3330 | 0.0999 | 1.7359 | 0.3779 | 0.0615 | 4.7471 | 0 |
| 0.3093 | 0.1007 | 2.0563 | 0.3652 | 0.0618 | 7.2181 | 1 |
| 0.2869 | 0.1015 | 2.0654 | 0.3539 | 0.0620 | 8.6857 | 2 |
| 0.2672 | 0.1022 | 2.1925 | 0.3443 | 0.0623 | 8.0906 | 3 |
| 0.2488 | 0.1028 | 2.3286 | 0.3305 | 0.0626 | 9.1756 | 4 |
| 0.2316 | 0.1034 | 2.4212 | 0.3300 | 0.0626 | 8.1427 | 5 |
| 0.2163 | 0.1039 | 2.5012 | 0.3183 | 0.0629 | 8.3043 | 6 |
| 0.2018 | 0.1045 | 2.7267 | 0.3109 | 0.0631 | 9.5329 | 7 |
| 0.1878 | 0.1050 | 2.7034 | 0.3053 | 0.0632 | 7.9014 | 8 |
| 0.1749 | 0.1054 | 2.8719 | 0.3063 | 0.0632 | 9.0257 | 9 |
| 0.1628 | 0.1058 | 2.8764 | 0.3033 | 0.0634 | 9.1336 | 10 |
| 0.1510 | 0.1063 | 2.8441 | 0.3046 | 0.0634 | 8.6064 | 11 |
| 0.1391 | 0.1067 | 2.9377 | 0.3030 | 0.0635 | 9.1326 | 12 |
| 0.1280 | 0.1071 | 2.9433 | 0.3025 | 0.0636 | 9.4533 | 13 |
| 0.1182 | 0.1075 | 3.1399 | 0.3076 | 0.0636 | 9.9836 | 14 |
| 0.1086 | 0.1078 | 3.2411 | 0.3096 | 0.0636 | 8.8470 | 15 |
| 0.0983 | 0.1082 | 3.2622 | 0.3125 | 0.0636 | 9.1506 | 16 |
| 0.0889 | 0.1086 | 3.3368 | 0.3184 | 0.0636 | 8.9635 | 17 |
| 0.0803 | 0.1089 | 3.2742 | 0.3204 | 0.0637 | 9.3550 | 18 |
| 0.0720 | 0.1092 | 3.4052 | 0.3258 | 0.0637 | 10.1082 | 19 |
| 0.0637 | 0.1096 | 3.4287 | 0.3342 | 0.0637 | 10.3977 | 20 |
| 0.0566 | 0.1098 | 3.4708 | 0.3411 | 0.0636 | 10.6479 | 21 |
| 0.0498 | 0.1101 | 3.4462 | 0.3463 | 0.0637 | 10.1602 | 22 |
| 0.0429 | 0.1104 | 3.4056 | 0.3588 | 0.0636 | 9.7172 | 23 |
| 0.0374 | 0.1106 | 3.4477 | 0.3656 | 0.0636 | 9.4476 | 24 |
| 0.0325 | 0.1108 | 3.4474 | 0.3712 | 0.0637 | 9.6926 | 25 |
| 0.0279 | 0.1109 | 3.4263 | 0.3836 | 0.0636 | 10.0768 | 26 |
| 0.0233 | 0.1111 | 3.4779 | 0.3873 | 0.0637 | 9.8123 | 27 |
| 0.0196 | 0.1112 | 3.5329 | 0.4015 | 0.0636 | 10.0477 | 28 |
| 0.0160 | 0.1113 | 3.5049 | 0.4097 | 0.0636 | 10.4027 | 29 |
| 0.0139 | 0.1114 | 3.6185 | 0.4201 | 0.0636 | 10.9904 | 30 |
| 0.0112 | 0.1114 | 3.5812 | 0.4300 | 0.0636 | 10.4501 | 31 |
| 0.0096 | 0.1115 | 3.7493 | 0.4409 | 0.0636 | 10.3964 | 32 |
| 0.0089 | 0.1115 | 3.6912 | 0.4499 | 0.0636 | 10.8345 | 33 |
| 0.0082 | 0.1115 | 3.7577 | 0.4583 | 0.0636 | 10.2883 | 34 |
| 0.0090 | 0.1114 | 3.8468 | 0.4755 | 0.0635 | 11.8086 | 35 |
| 0.0168 | 0.1111 | 3.6340 | 0.4592 | 0.0636 | 10.6373 | 36 |
| 0.0072 | 0.1115 | 3.8163 | 0.4644 | 0.0637 | 10.2448 | 37 |
| 0.0040 | 0.1115 | 3.8376 | 0.4728 | 0.0637 | 10.9074 | 38 |
| 0.0029 | 0.1115 | 3.8274 | 0.4814 | 0.0637 | 10.5440 | 39 |
| 0.0025 | 0.1115 | 3.8022 | 0.4891 | 0.0637 | 10.8606 | 40 |
| 0.0021 | 0.1115 | 3.8940 | 0.4937 | 0.0637 | 10.9388 | 41 |
| 0.0018 | 0.1115 | 3.8026 | 0.5030 | 0.0637 | 10.6511 | 42 |
| 0.0014 | 0.1115 | 3.8260 | 0.5092 | 0.0637 | 10.5743 | 43 |
| 0.0173 | 0.1110 | 3.6223 | 0.5066 | 0.0635 | 9.9370 | 44 |
| 0.0073 | 0.1114 | 3.6868 | 0.4972 | 0.0637 | 10.6775 | 45 |
| 0.0027 | 0.1115 | 3.6742 | 0.5025 | 0.0638 | 10.3476 | 46 |
| 0.0016 | 0.1115 | 3.7677 | 0.5078 | 0.0638 | 10.2277 | 47 |
| 0.0013 | 0.1115 | 3.7721 | 0.5131 | 0.0638 | 10.4473 | 48 |
| 0.0011 | 0.1115 | 3.8394 | 0.5189 | 0.0638 | 10.4344 | 49 |
| 0.0009 | 0.1116 | 3.8666 | 0.5245 | 0.0638 | 10.4933 | 50 |
| 0.0008 | 0.1116 | 3.8432 | 0.5307 | 0.0638 | 10.5118 | 51 |
| 0.0008 | 0.1115 | 3.8808 | 0.5391 | 0.0637 | 10.7086 | 52 |
| 0.0207 | 0.1108 | 3.8324 | 0.5204 | 0.0636 | 9.3724 | 53 |
| 0.0074 | 0.1113 | 3.4605 | 0.5254 | 0.0637 | 10.1335 | 54 |
| 0.0023 | 0.1115 | 3.6304 | 0.5164 | 0.0639 | 10.2554 | 55 |
| 0.0012 | 0.1115 | 3.7309 | 0.5202 | 0.0639 | 10.3892 | 56 |
| 0.0009 | 0.1115 | 3.6945 | 0.5260 | 0.0639 | 10.0808 | 57 |
| 0.0007 | 0.1116 | 3.6804 | 0.5308 | 0.0639 | 10.2385 | 58 |
| 0.0006 | 0.1116 | 3.6696 | 0.5350 | 0.0639 | 10.1248 | 59 |
| 0.0005 | 0.1116 | 3.7425 | 0.5394 | 0.0639 | 10.1711 | 60 |
| 0.0005 | 0.1116 | 3.7317 | 0.5442 | 0.0639 | 10.1407 | 61 |
| 0.0004 | 0.1116 | 3.7010 | 0.5490 | 0.0639 | 10.0544 | 62 |
| 0.0004 | 0.1116 | 3.6921 | 0.5546 | 0.0639 | 10.1746 | 63 |
| 0.0003 | 0.1116 | 3.7494 | 0.5598 | 0.0639 | 10.0562 | 64 |
| 0.0025 | 0.1115 | 3.6924 | 0.6395 | 0.0628 | 8.8622 | 65 |
| 0.0189 | 0.1109 | 3.7101 | 0.5363 | 0.0638 | 11.1245 | 66 |
| 0.0035 | 0.1115 | 3.6989 | 0.5347 | 0.0639 | 11.3329 | 67 |
| 0.0012 | 0.1115 | 3.6723 | 0.5407 | 0.0639 | 11.2559 | 68 |
| 0.0007 | 0.1115 | 3.6834 | 0.5429 | 0.0639 | 11.0248 | 69 |
| 0.0006 | 0.1115 | 3.6848 | 0.5459 | 0.0639 | 10.8372 | 70 |
| 0.0005 | 0.1115 | 3.6407 | 0.5501 | 0.0639 | 10.9252 | 71 |
| 0.0005 | 0.1115 | 3.7172 | 0.5565 | 0.0639 | 10.6965 | 72 |
| 0.0123 | 0.1112 | 3.5604 | 0.5734 | 0.0635 | 10.3309 | 73 |
| 0.0075 | 0.1113 | 3.5938 | 0.5416 | 0.0639 | 10.3651 | 74 |
| 0.0015 | 0.1115 | 3.4921 | 0.5406 | 0.0640 | 10.1754 | 75 |
| 0.0007 | 0.1115 | 3.4911 | 0.5445 | 0.0640 | 10.0699 | 76 |
| 0.0004 | 0.1116 | 3.4728 | 0.5477 | 0.0640 | 10.1247 | 77 |
| 0.0004 | 0.1116 | 3.4452 | 0.5517 | 0.0640 | 9.6791 | 78 |
| 0.0003 | 0.1116 | 3.4331 | 0.5558 | 0.0640 | 9.7928 | 79 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
aigrils2/beautifulv6-32fp-with-ema
|
aigrils2
| 2023-08-26T11:30:29Z | 18 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-13T02:55:39Z |
---
license: openrail
pipeline_tag: text-to-image
---
An attempt to convert the model with EMA weights included.
Give it a like if you find it useful.
|
bigmorning/whisper_char_cv12_pad_lob100_low__0075
|
bigmorning
| 2023-08-26T11:30:26Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-26T11:30:18Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_char_cv12_pad_lob100_low__0075
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_char_cv12_pad_lob100_low__0075
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0075
- Train Accuracy: 0.1113
- Train Wermet: 3.5938
- Validation Loss: 0.5416
- Validation Accuracy: 0.0639
- Validation Wermet: 10.3651
- Epoch: 74
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.3330 | 0.0999 | 1.7359 | 0.3779 | 0.0615 | 4.7471 | 0 |
| 0.3093 | 0.1007 | 2.0563 | 0.3652 | 0.0618 | 7.2181 | 1 |
| 0.2869 | 0.1015 | 2.0654 | 0.3539 | 0.0620 | 8.6857 | 2 |
| 0.2672 | 0.1022 | 2.1925 | 0.3443 | 0.0623 | 8.0906 | 3 |
| 0.2488 | 0.1028 | 2.3286 | 0.3305 | 0.0626 | 9.1756 | 4 |
| 0.2316 | 0.1034 | 2.4212 | 0.3300 | 0.0626 | 8.1427 | 5 |
| 0.2163 | 0.1039 | 2.5012 | 0.3183 | 0.0629 | 8.3043 | 6 |
| 0.2018 | 0.1045 | 2.7267 | 0.3109 | 0.0631 | 9.5329 | 7 |
| 0.1878 | 0.1050 | 2.7034 | 0.3053 | 0.0632 | 7.9014 | 8 |
| 0.1749 | 0.1054 | 2.8719 | 0.3063 | 0.0632 | 9.0257 | 9 |
| 0.1628 | 0.1058 | 2.8764 | 0.3033 | 0.0634 | 9.1336 | 10 |
| 0.1510 | 0.1063 | 2.8441 | 0.3046 | 0.0634 | 8.6064 | 11 |
| 0.1391 | 0.1067 | 2.9377 | 0.3030 | 0.0635 | 9.1326 | 12 |
| 0.1280 | 0.1071 | 2.9433 | 0.3025 | 0.0636 | 9.4533 | 13 |
| 0.1182 | 0.1075 | 3.1399 | 0.3076 | 0.0636 | 9.9836 | 14 |
| 0.1086 | 0.1078 | 3.2411 | 0.3096 | 0.0636 | 8.8470 | 15 |
| 0.0983 | 0.1082 | 3.2622 | 0.3125 | 0.0636 | 9.1506 | 16 |
| 0.0889 | 0.1086 | 3.3368 | 0.3184 | 0.0636 | 8.9635 | 17 |
| 0.0803 | 0.1089 | 3.2742 | 0.3204 | 0.0637 | 9.3550 | 18 |
| 0.0720 | 0.1092 | 3.4052 | 0.3258 | 0.0637 | 10.1082 | 19 |
| 0.0637 | 0.1096 | 3.4287 | 0.3342 | 0.0637 | 10.3977 | 20 |
| 0.0566 | 0.1098 | 3.4708 | 0.3411 | 0.0636 | 10.6479 | 21 |
| 0.0498 | 0.1101 | 3.4462 | 0.3463 | 0.0637 | 10.1602 | 22 |
| 0.0429 | 0.1104 | 3.4056 | 0.3588 | 0.0636 | 9.7172 | 23 |
| 0.0374 | 0.1106 | 3.4477 | 0.3656 | 0.0636 | 9.4476 | 24 |
| 0.0325 | 0.1108 | 3.4474 | 0.3712 | 0.0637 | 9.6926 | 25 |
| 0.0279 | 0.1109 | 3.4263 | 0.3836 | 0.0636 | 10.0768 | 26 |
| 0.0233 | 0.1111 | 3.4779 | 0.3873 | 0.0637 | 9.8123 | 27 |
| 0.0196 | 0.1112 | 3.5329 | 0.4015 | 0.0636 | 10.0477 | 28 |
| 0.0160 | 0.1113 | 3.5049 | 0.4097 | 0.0636 | 10.4027 | 29 |
| 0.0139 | 0.1114 | 3.6185 | 0.4201 | 0.0636 | 10.9904 | 30 |
| 0.0112 | 0.1114 | 3.5812 | 0.4300 | 0.0636 | 10.4501 | 31 |
| 0.0096 | 0.1115 | 3.7493 | 0.4409 | 0.0636 | 10.3964 | 32 |
| 0.0089 | 0.1115 | 3.6912 | 0.4499 | 0.0636 | 10.8345 | 33 |
| 0.0082 | 0.1115 | 3.7577 | 0.4583 | 0.0636 | 10.2883 | 34 |
| 0.0090 | 0.1114 | 3.8468 | 0.4755 | 0.0635 | 11.8086 | 35 |
| 0.0168 | 0.1111 | 3.6340 | 0.4592 | 0.0636 | 10.6373 | 36 |
| 0.0072 | 0.1115 | 3.8163 | 0.4644 | 0.0637 | 10.2448 | 37 |
| 0.0040 | 0.1115 | 3.8376 | 0.4728 | 0.0637 | 10.9074 | 38 |
| 0.0029 | 0.1115 | 3.8274 | 0.4814 | 0.0637 | 10.5440 | 39 |
| 0.0025 | 0.1115 | 3.8022 | 0.4891 | 0.0637 | 10.8606 | 40 |
| 0.0021 | 0.1115 | 3.8940 | 0.4937 | 0.0637 | 10.9388 | 41 |
| 0.0018 | 0.1115 | 3.8026 | 0.5030 | 0.0637 | 10.6511 | 42 |
| 0.0014 | 0.1115 | 3.8260 | 0.5092 | 0.0637 | 10.5743 | 43 |
| 0.0173 | 0.1110 | 3.6223 | 0.5066 | 0.0635 | 9.9370 | 44 |
| 0.0073 | 0.1114 | 3.6868 | 0.4972 | 0.0637 | 10.6775 | 45 |
| 0.0027 | 0.1115 | 3.6742 | 0.5025 | 0.0638 | 10.3476 | 46 |
| 0.0016 | 0.1115 | 3.7677 | 0.5078 | 0.0638 | 10.2277 | 47 |
| 0.0013 | 0.1115 | 3.7721 | 0.5131 | 0.0638 | 10.4473 | 48 |
| 0.0011 | 0.1115 | 3.8394 | 0.5189 | 0.0638 | 10.4344 | 49 |
| 0.0009 | 0.1116 | 3.8666 | 0.5245 | 0.0638 | 10.4933 | 50 |
| 0.0008 | 0.1116 | 3.8432 | 0.5307 | 0.0638 | 10.5118 | 51 |
| 0.0008 | 0.1115 | 3.8808 | 0.5391 | 0.0637 | 10.7086 | 52 |
| 0.0207 | 0.1108 | 3.8324 | 0.5204 | 0.0636 | 9.3724 | 53 |
| 0.0074 | 0.1113 | 3.4605 | 0.5254 | 0.0637 | 10.1335 | 54 |
| 0.0023 | 0.1115 | 3.6304 | 0.5164 | 0.0639 | 10.2554 | 55 |
| 0.0012 | 0.1115 | 3.7309 | 0.5202 | 0.0639 | 10.3892 | 56 |
| 0.0009 | 0.1115 | 3.6945 | 0.5260 | 0.0639 | 10.0808 | 57 |
| 0.0007 | 0.1116 | 3.6804 | 0.5308 | 0.0639 | 10.2385 | 58 |
| 0.0006 | 0.1116 | 3.6696 | 0.5350 | 0.0639 | 10.1248 | 59 |
| 0.0005 | 0.1116 | 3.7425 | 0.5394 | 0.0639 | 10.1711 | 60 |
| 0.0005 | 0.1116 | 3.7317 | 0.5442 | 0.0639 | 10.1407 | 61 |
| 0.0004 | 0.1116 | 3.7010 | 0.5490 | 0.0639 | 10.0544 | 62 |
| 0.0004 | 0.1116 | 3.6921 | 0.5546 | 0.0639 | 10.1746 | 63 |
| 0.0003 | 0.1116 | 3.7494 | 0.5598 | 0.0639 | 10.0562 | 64 |
| 0.0025 | 0.1115 | 3.6924 | 0.6395 | 0.0628 | 8.8622 | 65 |
| 0.0189 | 0.1109 | 3.7101 | 0.5363 | 0.0638 | 11.1245 | 66 |
| 0.0035 | 0.1115 | 3.6989 | 0.5347 | 0.0639 | 11.3329 | 67 |
| 0.0012 | 0.1115 | 3.6723 | 0.5407 | 0.0639 | 11.2559 | 68 |
| 0.0007 | 0.1115 | 3.6834 | 0.5429 | 0.0639 | 11.0248 | 69 |
| 0.0006 | 0.1115 | 3.6848 | 0.5459 | 0.0639 | 10.8372 | 70 |
| 0.0005 | 0.1115 | 3.6407 | 0.5501 | 0.0639 | 10.9252 | 71 |
| 0.0005 | 0.1115 | 3.7172 | 0.5565 | 0.0639 | 10.6965 | 72 |
| 0.0123 | 0.1112 | 3.5604 | 0.5734 | 0.0635 | 10.3309 | 73 |
| 0.0075 | 0.1113 | 3.5938 | 0.5416 | 0.0639 | 10.3651 | 74 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
elftsdmr/malware-url-detect
|
elftsdmr
| 2023-08-26T11:09:23Z | 202 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-10T11:37:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: MALWARE-URL-DETECT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MALWARE-URL-DETECT
This model detects harmful links, such as phishing URLs, that are created to target people in Turkey. It classifies URL addresses as malware or benign.
To classify a URL via the API, type its domain name in the text field, like this:
"huggingface.com"
To test the model, visit [USOM](https://www.usom.gov.tr/adres), where harmful links active in Turkey are shared and kept up to date.
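For example, a quick check with the `transformers` pipeline could look like this (an illustrative snippet, not from the original card; the output labels depend on the model's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="elftsdmr/malware-url-detect")
print(classifier("huggingface.com"))  # e.g. [{'label': ..., 'score': ...}]
```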
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2122
- Accuracy: 0.945
- Precision: 0.9611
- Recall: 0.9287
- F1: 0.9446
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 63 | 0.2153 | 0.921 | 0.9953 | 0.8475 | 0.9155 |
| No log | 2.0 | 126 | 0.1927 | 0.946 | 0.9669 | 0.9248 | 0.9453 |
| No log | 3.0 | 189 | 0.2122 | 0.945 | 0.9611 | 0.9287 | 0.9446 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
rishabh063/lora-trained-xl-pkt
|
rishabh063
| 2023-08-26T10:53:28Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-26T08:16:42Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of pktpkt person
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - rishabh063/lora-trained-xl-pkt
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of pktpkt person using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
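A minimal inference sketch, assuming standard `diffusers` LoRA loading (device and dtype choices here are assumptions, not from the card); the fp16-fix VAE mentioned above is loaded explicitly:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rishabh063/lora-trained-xl-pkt")

image = pipe("a photo of pktpkt person").images[0]
image.save("pktpkt.png")
```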
|
badokorach/bert-finetuned-squad-88
|
badokorach
| 2023-08-26T10:39:44Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:EricPeter/distilbert-base-cased-distilled-squad",
"base_model:finetune:EricPeter/distilbert-base-cased-distilled-squad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-25T23:40:54Z |
---
base_model: EricPeter/distilbert-base-cased-distilled-squad
tags:
- generated_from_keras_callback
model-index:
- name: badokorach/bert-finetuned-squad-88
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# badokorach/bert-finetuned-squad-88
This model is a fine-tuned version of [EricPeter/distilbert-base-cased-distilled-squad](https://huggingface.co/EricPeter/distilbert-base-cased-distilled-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7812
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 570, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.03}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 2.6118 | 0 |
| 1.9671 | 1 |
| 1.8982 | 2 |
| 1.7812 | 3 |
### Framework versions
- Transformers 4.32.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
zarakiquemparte/zarablend-1.1-l2-7b
|
zarakiquemparte
| 2023-08-26T10:30:54Z | 1,478 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-26T02:51:59Z |
---
license: other
tags:
- llama2
---
# Model Card: Zarablend 1.1 L2 7b
This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) (66%) as a base, merged with [Airoboros L2 7B GPT4 2.0](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-2.0) (34%); the result of this merge was then merged with the [LimaRP Llama2 7B LoRA, version of 07/23/2023](https://huggingface.co/lemonilia/limarp-llama2).
The merge of the models (Hermes and Airoboros) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py).
The merge of the LoRA with the model was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py).
Merge illustration:

## Usage:
Since this is a merge between Nous Hermes, Airoboros and LimaRP, the following instruction formats should work:
Alpaca 2:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
LimaRP instruction format:
```
<<SYSTEM>>
<character card and system prompt>
<<USER>>
<prompt>
<<AIBOT>>
<leave a newline blank for model to respond>
```
## Bias, Risks, and Limitations
This model is not intended for supplying factual information or advice in any form.
## Training Details
This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
|
archimedix/sdxl-archi06
|
archimedix
| 2023-08-26T10:17:55Z | 2 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-08-26T10:17:53Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of Archimedix
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
Yorai/yolos-tiny_finetuned_cppe-5
|
Yorai
| 2023-08-26T10:17:19Z | 187 | 0 |
transformers
|
[
"transformers",
"pytorch",
"yolos",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"base_model:hustvl/yolos-tiny",
"base_model:finetune:hustvl/yolos-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-08-26T09:41:34Z |
---
license: apache-2.0
base_model: hustvl/yolos-tiny
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: yolos-tiny_finetuned_cppe-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yolos-tiny_finetuned_cppe-5
This model is a fine-tuned version of [hustvl/yolos-tiny](https://huggingface.co/hustvl/yolos-tiny) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dkqjrm/20230826174342
|
dkqjrm
| 2023-08-26T10:16:21Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T08:44:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826174342'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826174342
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4913
- Accuracy: 0.72
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.02
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
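For reference, a sketch of how the hyperparameters above map onto `transformers.TrainingArguments` (the output directory is a placeholder, and the exact super_glue task/config is not stated in this card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="20230826174342",      # placeholder output directory
    learning_rate=0.02,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=11,
    lr_scheduler_type="linear",
    num_train_epochs=80.0,
)
```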
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.6276 | 0.57 |
| No log | 2.0 | 50 | 0.6136 | 0.63 |
| No log | 3.0 | 75 | 0.6774 | 0.66 |
| No log | 4.0 | 100 | 0.5964 | 0.64 |
| No log | 5.0 | 125 | 0.5316 | 0.62 |
| No log | 6.0 | 150 | 0.5231 | 0.62 |
| No log | 7.0 | 175 | 0.5156 | 0.63 |
| No log | 8.0 | 200 | 0.6216 | 0.64 |
| No log | 9.0 | 225 | 0.5013 | 0.71 |
| No log | 10.0 | 250 | 0.5734 | 0.7 |
| No log | 11.0 | 275 | 0.4683 | 0.66 |
| No log | 12.0 | 300 | 0.5333 | 0.73 |
| No log | 13.0 | 325 | 0.6740 | 0.69 |
| No log | 14.0 | 350 | 0.5185 | 0.71 |
| No log | 15.0 | 375 | 0.5031 | 0.71 |
| No log | 16.0 | 400 | 0.5398 | 0.71 |
| No log | 17.0 | 425 | 0.5246 | 0.73 |
| No log | 18.0 | 450 | 0.7414 | 0.69 |
| No log | 19.0 | 475 | 0.6817 | 0.72 |
| 0.7352 | 20.0 | 500 | 0.6656 | 0.71 |
| 0.7352 | 21.0 | 525 | 0.5839 | 0.76 |
| 0.7352 | 22.0 | 550 | 0.6626 | 0.76 |
| 0.7352 | 23.0 | 575 | 0.5017 | 0.75 |
| 0.7352 | 24.0 | 600 | 0.5168 | 0.74 |
| 0.7352 | 25.0 | 625 | 0.5912 | 0.78 |
| 0.7352 | 26.0 | 650 | 0.5596 | 0.77 |
| 0.7352 | 27.0 | 675 | 0.4884 | 0.77 |
| 0.7352 | 28.0 | 700 | 0.4738 | 0.73 |
| 0.7352 | 29.0 | 725 | 0.5052 | 0.76 |
| 0.7352 | 30.0 | 750 | 0.6163 | 0.74 |
| 0.7352 | 31.0 | 775 | 0.5824 | 0.74 |
| 0.7352 | 32.0 | 800 | 0.4995 | 0.72 |
| 0.7352 | 33.0 | 825 | 0.4936 | 0.71 |
| 0.7352 | 34.0 | 850 | 0.5464 | 0.72 |
| 0.7352 | 35.0 | 875 | 0.5164 | 0.74 |
| 0.7352 | 36.0 | 900 | 0.5088 | 0.75 |
| 0.7352 | 37.0 | 925 | 0.5991 | 0.75 |
| 0.7352 | 38.0 | 950 | 0.4963 | 0.73 |
| 0.7352 | 39.0 | 975 | 0.5086 | 0.72 |
| 0.411 | 40.0 | 1000 | 0.5203 | 0.73 |
| 0.411 | 41.0 | 1025 | 0.5844 | 0.74 |
| 0.411 | 42.0 | 1050 | 0.5285 | 0.74 |
| 0.411 | 43.0 | 1075 | 0.5553 | 0.74 |
| 0.411 | 44.0 | 1100 | 0.5588 | 0.71 |
| 0.411 | 45.0 | 1125 | 0.5392 | 0.72 |
| 0.411 | 46.0 | 1150 | 0.5494 | 0.72 |
| 0.411 | 47.0 | 1175 | 0.4982 | 0.76 |
| 0.411 | 48.0 | 1200 | 0.5374 | 0.72 |
| 0.411 | 49.0 | 1225 | 0.5730 | 0.73 |
| 0.411 | 50.0 | 1250 | 0.5149 | 0.72 |
| 0.411 | 51.0 | 1275 | 0.4949 | 0.72 |
| 0.411 | 52.0 | 1300 | 0.5295 | 0.73 |
| 0.411 | 53.0 | 1325 | 0.5223 | 0.72 |
| 0.411 | 54.0 | 1350 | 0.5617 | 0.71 |
| 0.411 | 55.0 | 1375 | 0.5373 | 0.72 |
| 0.411 | 56.0 | 1400 | 0.4857 | 0.73 |
| 0.411 | 57.0 | 1425 | 0.4954 | 0.72 |
| 0.411 | 58.0 | 1450 | 0.5024 | 0.72 |
| 0.411 | 59.0 | 1475 | 0.4971 | 0.74 |
| 0.318 | 60.0 | 1500 | 0.5265 | 0.73 |
| 0.318 | 61.0 | 1525 | 0.4967 | 0.71 |
| 0.318 | 62.0 | 1550 | 0.4972 | 0.73 |
| 0.318 | 63.0 | 1575 | 0.4908 | 0.72 |
| 0.318 | 64.0 | 1600 | 0.5056 | 0.74 |
| 0.318 | 65.0 | 1625 | 0.5231 | 0.74 |
| 0.318 | 66.0 | 1650 | 0.4737 | 0.75 |
| 0.318 | 67.0 | 1675 | 0.5016 | 0.72 |
| 0.318 | 68.0 | 1700 | 0.4988 | 0.73 |
| 0.318 | 69.0 | 1725 | 0.5276 | 0.74 |
| 0.318 | 70.0 | 1750 | 0.4912 | 0.73 |
| 0.318 | 71.0 | 1775 | 0.4865 | 0.72 |
| 0.318 | 72.0 | 1800 | 0.4754 | 0.73 |
| 0.318 | 73.0 | 1825 | 0.4922 | 0.73 |
| 0.318 | 74.0 | 1850 | 0.4884 | 0.74 |
| 0.318 | 75.0 | 1875 | 0.4868 | 0.73 |
| 0.318 | 76.0 | 1900 | 0.4872 | 0.73 |
| 0.318 | 77.0 | 1925 | 0.4848 | 0.72 |
| 0.318 | 78.0 | 1950 | 0.4923 | 0.72 |
| 0.318 | 79.0 | 1975 | 0.4888 | 0.73 |
| 0.287 | 80.0 | 2000 | 0.4913 | 0.72 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230826173415
|
dkqjrm
| 2023-08-26T10:15:51Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T08:34:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826173415'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826173415
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4136
- Accuracy: 0.71
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.02
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.6183 | 0.53 |
| No log | 2.0 | 50 | 0.4189 | 0.62 |
| No log | 3.0 | 75 | 0.4351 | 0.6 |
| No log | 4.0 | 100 | 0.4181 | 0.6 |
| No log | 5.0 | 125 | 0.4105 | 0.62 |
| No log | 6.0 | 150 | 0.4140 | 0.63 |
| No log | 7.0 | 175 | 0.4052 | 0.66 |
| No log | 8.0 | 200 | 0.4322 | 0.66 |
| No log | 9.0 | 225 | 0.4364 | 0.41 |
| No log | 10.0 | 250 | 0.4247 | 0.55 |
| No log | 11.0 | 275 | 0.4261 | 0.53 |
| No log | 12.0 | 300 | 0.4176 | 0.6 |
| No log | 13.0 | 325 | 0.4108 | 0.58 |
| No log | 14.0 | 350 | 0.4305 | 0.51 |
| No log | 15.0 | 375 | 0.4064 | 0.61 |
| No log | 16.0 | 400 | 0.4032 | 0.59 |
| No log | 17.0 | 425 | 0.4098 | 0.63 |
| No log | 18.0 | 450 | 0.4132 | 0.61 |
| No log | 19.0 | 475 | 0.3925 | 0.65 |
| 0.7171 | 20.0 | 500 | 0.3957 | 0.69 |
| 0.7171 | 21.0 | 525 | 0.4292 | 0.64 |
| 0.7171 | 22.0 | 550 | 0.4025 | 0.63 |
| 0.7171 | 23.0 | 575 | 0.3997 | 0.69 |
| 0.7171 | 24.0 | 600 | 0.4115 | 0.62 |
| 0.7171 | 25.0 | 625 | 0.4044 | 0.67 |
| 0.7171 | 26.0 | 650 | 0.4098 | 0.69 |
| 0.7171 | 27.0 | 675 | 0.4051 | 0.65 |
| 0.7171 | 28.0 | 700 | 0.4244 | 0.72 |
| 0.7171 | 29.0 | 725 | 0.4032 | 0.64 |
| 0.7171 | 30.0 | 750 | 0.4136 | 0.7 |
| 0.7171 | 31.0 | 775 | 0.3993 | 0.68 |
| 0.7171 | 32.0 | 800 | 0.4170 | 0.72 |
| 0.7171 | 33.0 | 825 | 0.4038 | 0.71 |
| 0.7171 | 34.0 | 850 | 0.4251 | 0.72 |
| 0.7171 | 35.0 | 875 | 0.4079 | 0.66 |
| 0.7171 | 36.0 | 900 | 0.4119 | 0.71 |
| 0.7171 | 37.0 | 925 | 0.4075 | 0.67 |
| 0.7171 | 38.0 | 950 | 0.4406 | 0.73 |
| 0.7171 | 39.0 | 975 | 0.4081 | 0.72 |
| 0.4731 | 40.0 | 1000 | 0.4191 | 0.67 |
| 0.4731 | 41.0 | 1025 | 0.4217 | 0.68 |
| 0.4731 | 42.0 | 1050 | 0.3983 | 0.73 |
| 0.4731 | 43.0 | 1075 | 0.4092 | 0.66 |
| 0.4731 | 44.0 | 1100 | 0.4248 | 0.69 |
| 0.4731 | 45.0 | 1125 | 0.4218 | 0.68 |
| 0.4731 | 46.0 | 1150 | 0.4371 | 0.7 |
| 0.4731 | 47.0 | 1175 | 0.4099 | 0.69 |
| 0.4731 | 48.0 | 1200 | 0.4300 | 0.69 |
| 0.4731 | 49.0 | 1225 | 0.4094 | 0.72 |
| 0.4731 | 50.0 | 1250 | 0.4206 | 0.71 |
| 0.4731 | 51.0 | 1275 | 0.4241 | 0.72 |
| 0.4731 | 52.0 | 1300 | 0.4253 | 0.66 |
| 0.4731 | 53.0 | 1325 | 0.4117 | 0.66 |
| 0.4731 | 54.0 | 1350 | 0.4174 | 0.67 |
| 0.4731 | 55.0 | 1375 | 0.4131 | 0.67 |
| 0.4731 | 56.0 | 1400 | 0.4231 | 0.67 |
| 0.4731 | 57.0 | 1425 | 0.4059 | 0.7 |
| 0.4731 | 58.0 | 1450 | 0.4168 | 0.72 |
| 0.4731 | 59.0 | 1475 | 0.4236 | 0.68 |
| 0.4204 | 60.0 | 1500 | 0.4001 | 0.68 |
| 0.4204 | 61.0 | 1525 | 0.4158 | 0.71 |
| 0.4204 | 62.0 | 1550 | 0.4303 | 0.68 |
| 0.4204 | 63.0 | 1575 | 0.4155 | 0.65 |
| 0.4204 | 64.0 | 1600 | 0.4195 | 0.66 |
| 0.4204 | 65.0 | 1625 | 0.4315 | 0.67 |
| 0.4204 | 66.0 | 1650 | 0.4240 | 0.71 |
| 0.4204 | 67.0 | 1675 | 0.4191 | 0.68 |
| 0.4204 | 68.0 | 1700 | 0.4214 | 0.71 |
| 0.4204 | 69.0 | 1725 | 0.4170 | 0.71 |
| 0.4204 | 70.0 | 1750 | 0.4158 | 0.68 |
| 0.4204 | 71.0 | 1775 | 0.4230 | 0.69 |
| 0.4204 | 72.0 | 1800 | 0.4106 | 0.69 |
| 0.4204 | 73.0 | 1825 | 0.4255 | 0.68 |
| 0.4204 | 74.0 | 1850 | 0.4223 | 0.67 |
| 0.4204 | 75.0 | 1875 | 0.4124 | 0.7 |
| 0.4204 | 76.0 | 1900 | 0.4114 | 0.7 |
| 0.4204 | 77.0 | 1925 | 0.4115 | 0.71 |
| 0.4204 | 78.0 | 1950 | 0.4136 | 0.71 |
| 0.4204 | 79.0 | 1975 | 0.4150 | 0.71 |
| 0.3939 | 80.0 | 2000 | 0.4136 | 0.71 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
TenAI/stable-diffusion-webui
|
TenAI
| 2023-08-26T10:11:37Z | 0 | 4 | null |
[
"arxiv:2211.06679",
"region:us"
] | null | 2023-08-25T10:34:29Z |
# Stable Diffusion web UI
A browser interface based on Gradio library for Stable Diffusion.

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a `((tuxedo))` - will pay more attention to tuxedo
- a man in a `(tuxedo:1.21)` - alternative syntax
- select text and press `Ctrl+Up` or `Ctrl+Down` (or `Command+Up` or `Command+Down` if you're on macOS) to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
- Styles, a way to save part of prompt and easily apply them via dropdown later
- Variations, a way to generate same image but with tiny differences
- Seed resizing, a way to generate same image but at slightly different resolution
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image's dimensions must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)
### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/release/python-3106/) (newer versions of Python do not support torch), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as a normal, non-administrator user.
### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed in and execute the following command:
```bash
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
### Installation on Apple Silicon
Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).
## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
|
dkqjrm/20230826172956
|
dkqjrm
| 2023-08-26T09:56:28Z | 62 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T08:30:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826172956'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826172956
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1602
- Accuracy: 0.54
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.02
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.2150 | 0.6 |
| No log | 2.0 | 50 | 0.1887 | 0.59 |
| No log | 3.0 | 75 | 0.1839 | 0.58 |
| No log | 4.0 | 100 | 0.1657 | 0.45 |
| No log | 5.0 | 125 | 0.1619 | 0.58 |
| No log | 6.0 | 150 | 0.1615 | 0.52 |
| No log | 7.0 | 175 | 0.1579 | 0.57 |
| No log | 8.0 | 200 | 0.1583 | 0.62 |
| No log | 9.0 | 225 | 0.1615 | 0.52 |
| No log | 10.0 | 250 | 0.1586 | 0.64 |
| No log | 11.0 | 275 | 0.1599 | 0.63 |
| No log | 12.0 | 300 | 0.1615 | 0.5 |
| No log | 13.0 | 325 | 0.1588 | 0.55 |
| No log | 14.0 | 350 | 0.1611 | 0.44 |
| No log | 15.0 | 375 | 0.1587 | 0.54 |
| No log | 16.0 | 400 | 0.1585 | 0.6 |
| No log | 17.0 | 425 | 0.1574 | 0.54 |
| No log | 18.0 | 450 | 0.1599 | 0.51 |
| No log | 19.0 | 475 | 0.1580 | 0.56 |
| 0.6147 | 20.0 | 500 | 0.1593 | 0.51 |
| 0.6147 | 21.0 | 525 | 0.1612 | 0.39 |
| 0.6147 | 22.0 | 550 | 0.1588 | 0.57 |
| 0.6147 | 23.0 | 575 | 0.1583 | 0.6 |
| 0.6147 | 24.0 | 600 | 0.1588 | 0.61 |
| 0.6147 | 25.0 | 625 | 0.1585 | 0.55 |
| 0.6147 | 26.0 | 650 | 0.1582 | 0.52 |
| 0.6147 | 27.0 | 675 | 0.1625 | 0.48 |
| 0.6147 | 28.0 | 700 | 0.1617 | 0.48 |
| 0.6147 | 29.0 | 725 | 0.1607 | 0.57 |
| 0.6147 | 30.0 | 750 | 0.1589 | 0.55 |
| 0.6147 | 31.0 | 775 | 0.1584 | 0.58 |
| 0.6147 | 32.0 | 800 | 0.1593 | 0.57 |
| 0.6147 | 33.0 | 825 | 0.1608 | 0.49 |
| 0.6147 | 34.0 | 850 | 0.1605 | 0.5 |
| 0.6147 | 35.0 | 875 | 0.1601 | 0.54 |
| 0.6147 | 36.0 | 900 | 0.1590 | 0.54 |
| 0.6147 | 37.0 | 925 | 0.1651 | 0.45 |
| 0.6147 | 38.0 | 950 | 0.1613 | 0.44 |
| 0.6147 | 39.0 | 975 | 0.1630 | 0.5 |
| 0.5279 | 40.0 | 1000 | 0.1598 | 0.48 |
| 0.5279 | 41.0 | 1025 | 0.1605 | 0.52 |
| 0.5279 | 42.0 | 1050 | 0.1598 | 0.46 |
| 0.5279 | 43.0 | 1075 | 0.1599 | 0.51 |
| 0.5279 | 44.0 | 1100 | 0.1611 | 0.5 |
| 0.5279 | 45.0 | 1125 | 0.1611 | 0.49 |
| 0.5279 | 46.0 | 1150 | 0.1602 | 0.56 |
| 0.5279 | 47.0 | 1175 | 0.1596 | 0.5 |
| 0.5279 | 48.0 | 1200 | 0.1605 | 0.59 |
| 0.5279 | 49.0 | 1225 | 0.1593 | 0.53 |
| 0.5279 | 50.0 | 1250 | 0.1584 | 0.51 |
| 0.5279 | 51.0 | 1275 | 0.1592 | 0.52 |
| 0.5279 | 52.0 | 1300 | 0.1588 | 0.49 |
| 0.5279 | 53.0 | 1325 | 0.1610 | 0.55 |
| 0.5279 | 54.0 | 1350 | 0.1591 | 0.53 |
| 0.5279 | 55.0 | 1375 | 0.1585 | 0.49 |
| 0.5279 | 56.0 | 1400 | 0.1591 | 0.46 |
| 0.5279 | 57.0 | 1425 | 0.1584 | 0.44 |
| 0.5279 | 58.0 | 1450 | 0.1612 | 0.47 |
| 0.5279 | 59.0 | 1475 | 0.1626 | 0.43 |
| 0.4515 | 60.0 | 1500 | 0.1607 | 0.46 |
| 0.4515 | 61.0 | 1525 | 0.1599 | 0.49 |
| 0.4515 | 62.0 | 1550 | 0.1590 | 0.49 |
| 0.4515 | 63.0 | 1575 | 0.1601 | 0.54 |
| 0.4515 | 64.0 | 1600 | 0.1606 | 0.49 |
| 0.4515 | 65.0 | 1625 | 0.1592 | 0.5 |
| 0.4515 | 66.0 | 1650 | 0.1605 | 0.52 |
| 0.4515 | 67.0 | 1675 | 0.1605 | 0.51 |
| 0.4515 | 68.0 | 1700 | 0.1603 | 0.54 |
| 0.4515 | 69.0 | 1725 | 0.1603 | 0.55 |
| 0.4515 | 70.0 | 1750 | 0.1604 | 0.56 |
| 0.4515 | 71.0 | 1775 | 0.1615 | 0.54 |
| 0.4515 | 72.0 | 1800 | 0.1593 | 0.5 |
| 0.4515 | 73.0 | 1825 | 0.1601 | 0.54 |
| 0.4515 | 74.0 | 1850 | 0.1603 | 0.57 |
| 0.4515 | 75.0 | 1875 | 0.1596 | 0.51 |
| 0.4515 | 76.0 | 1900 | 0.1608 | 0.54 |
| 0.4515 | 77.0 | 1925 | 0.1603 | 0.56 |
| 0.4515 | 78.0 | 1950 | 0.1600 | 0.55 |
| 0.4515 | 79.0 | 1975 | 0.1602 | 0.55 |
| 0.4114 | 80.0 | 2000 | 0.1602 | 0.54 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_char_cv12_pad_lob100_low__0030
|
bigmorning
| 2023-08-26T09:32:10Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-26T09:32:03Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_char_cv12_pad_lob100_low__0030
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_char_cv12_pad_lob100_low__0030
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0160
- Train Accuracy: 0.1113
- Train Wermet: 3.5049
- Validation Loss: 0.4097
- Validation Accuracy: 0.0636
- Validation Wermet: 10.4027
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
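As a rough usage illustration, a hedged transcription sketch with the TensorFlow Whisper classes (it assumes the checkpoint ships a compatible processor config, and uses a small public test clip):
```python
from datasets import load_dataset
from transformers import TFWhisperForConditionalGeneration, WhisperProcessor

repo = "bigmorning/whisper_char_cv12_pad_lob100_low__0030"
processor = WhisperProcessor.from_pretrained(repo)  # assumption: processor files are in the repo
model = TFWhisperForConditionalGeneration.from_pretrained(repo)

sample = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")[0]["audio"]
inputs = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="tf")
generated_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```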
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.3330 | 0.0999 | 1.7359 | 0.3779 | 0.0615 | 4.7471 | 0 |
| 0.3093 | 0.1007 | 2.0563 | 0.3652 | 0.0618 | 7.2181 | 1 |
| 0.2869 | 0.1015 | 2.0654 | 0.3539 | 0.0620 | 8.6857 | 2 |
| 0.2672 | 0.1022 | 2.1925 | 0.3443 | 0.0623 | 8.0906 | 3 |
| 0.2488 | 0.1028 | 2.3286 | 0.3305 | 0.0626 | 9.1756 | 4 |
| 0.2316 | 0.1034 | 2.4212 | 0.3300 | 0.0626 | 8.1427 | 5 |
| 0.2163 | 0.1039 | 2.5012 | 0.3183 | 0.0629 | 8.3043 | 6 |
| 0.2018 | 0.1045 | 2.7267 | 0.3109 | 0.0631 | 9.5329 | 7 |
| 0.1878 | 0.1050 | 2.7034 | 0.3053 | 0.0632 | 7.9014 | 8 |
| 0.1749 | 0.1054 | 2.8719 | 0.3063 | 0.0632 | 9.0257 | 9 |
| 0.1628 | 0.1058 | 2.8764 | 0.3033 | 0.0634 | 9.1336 | 10 |
| 0.1510 | 0.1063 | 2.8441 | 0.3046 | 0.0634 | 8.6064 | 11 |
| 0.1391 | 0.1067 | 2.9377 | 0.3030 | 0.0635 | 9.1326 | 12 |
| 0.1280 | 0.1071 | 2.9433 | 0.3025 | 0.0636 | 9.4533 | 13 |
| 0.1182 | 0.1075 | 3.1399 | 0.3076 | 0.0636 | 9.9836 | 14 |
| 0.1086 | 0.1078 | 3.2411 | 0.3096 | 0.0636 | 8.8470 | 15 |
| 0.0983 | 0.1082 | 3.2622 | 0.3125 | 0.0636 | 9.1506 | 16 |
| 0.0889 | 0.1086 | 3.3368 | 0.3184 | 0.0636 | 8.9635 | 17 |
| 0.0803 | 0.1089 | 3.2742 | 0.3204 | 0.0637 | 9.3550 | 18 |
| 0.0720 | 0.1092 | 3.4052 | 0.3258 | 0.0637 | 10.1082 | 19 |
| 0.0637 | 0.1096 | 3.4287 | 0.3342 | 0.0637 | 10.3977 | 20 |
| 0.0566 | 0.1098 | 3.4708 | 0.3411 | 0.0636 | 10.6479 | 21 |
| 0.0498 | 0.1101 | 3.4462 | 0.3463 | 0.0637 | 10.1602 | 22 |
| 0.0429 | 0.1104 | 3.4056 | 0.3588 | 0.0636 | 9.7172 | 23 |
| 0.0374 | 0.1106 | 3.4477 | 0.3656 | 0.0636 | 9.4476 | 24 |
| 0.0325 | 0.1108 | 3.4474 | 0.3712 | 0.0637 | 9.6926 | 25 |
| 0.0279 | 0.1109 | 3.4263 | 0.3836 | 0.0636 | 10.0768 | 26 |
| 0.0233 | 0.1111 | 3.4779 | 0.3873 | 0.0637 | 9.8123 | 27 |
| 0.0196 | 0.1112 | 3.5329 | 0.4015 | 0.0636 | 10.0477 | 28 |
| 0.0160 | 0.1113 | 3.5049 | 0.4097 | 0.0636 | 10.4027 | 29 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
dt-and-vanilla-ardt/ardt-vanilla-arrl_train_walker2d_high-2608_0832-66
|
dt-and-vanilla-ardt
| 2023-08-26T08:51:32Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T07:34:14Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-arrl_train_walker2d_high-2608_0832-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-arrl_train_walker2d_high-2608_0832-66
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dilums/ppo-LunarLander-v2
|
dilums
| 2023-08-26T08:44:12Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-26T08:43:47Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.81 +/- 19.32
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading and evaluation sketch (the checkpoint filename follows the usual `huggingface_sb3` naming convention and is an assumption):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Checkpoint filename is assumed from the usual huggingface_sb3 naming convention.
model = PPO.load(load_from_hub(repo_id="dilums/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip"))
mean_reward, std_reward = evaluate_policy(model, gym.make("LunarLander-v2"), n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```
|
dkqjrm/20230826161117
|
dkqjrm
| 2023-08-26T08:43:30Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T07:11:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826161117'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826161117
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5294
- Accuracy: 0.67
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.02
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.6448 | 0.4 |
| No log | 2.0 | 50 | 0.7950 | 0.65 |
| No log | 3.0 | 75 | 0.6181 | 0.54 |
| No log | 4.0 | 100 | 0.5601 | 0.6 |
| No log | 5.0 | 125 | 0.5816 | 0.42 |
| No log | 6.0 | 150 | 0.5957 | 0.43 |
| No log | 7.0 | 175 | 0.5331 | 0.61 |
| No log | 8.0 | 200 | 0.5507 | 0.61 |
| No log | 9.0 | 225 | 0.5438 | 0.62 |
| No log | 10.0 | 250 | 0.5455 | 0.65 |
| No log | 11.0 | 275 | 0.5141 | 0.65 |
| No log | 12.0 | 300 | 0.5019 | 0.71 |
| No log | 13.0 | 325 | 0.6824 | 0.7 |
| No log | 14.0 | 350 | 0.5735 | 0.73 |
| No log | 15.0 | 375 | 0.5578 | 0.69 |
| No log | 16.0 | 400 | 0.5607 | 0.72 |
| No log | 17.0 | 425 | 0.5974 | 0.71 |
| No log | 18.0 | 450 | 0.8102 | 0.71 |
| No log | 19.0 | 475 | 0.6757 | 0.73 |
| 0.7598 | 20.0 | 500 | 0.5266 | 0.74 |
| 0.7598 | 21.0 | 525 | 0.6271 | 0.69 |
| 0.7598 | 22.0 | 550 | 0.6341 | 0.7 |
| 0.7598 | 23.0 | 575 | 0.6874 | 0.7 |
| 0.7598 | 24.0 | 600 | 0.5264 | 0.72 |
| 0.7598 | 25.0 | 625 | 0.5148 | 0.73 |
| 0.7598 | 26.0 | 650 | 0.5760 | 0.77 |
| 0.7598 | 27.0 | 675 | 0.6581 | 0.71 |
| 0.7598 | 28.0 | 700 | 0.6479 | 0.71 |
| 0.7598 | 29.0 | 725 | 0.6960 | 0.69 |
| 0.7598 | 30.0 | 750 | 0.6919 | 0.7 |
| 0.7598 | 31.0 | 775 | 0.6421 | 0.68 |
| 0.7598 | 32.0 | 800 | 0.5681 | 0.68 |
| 0.7598 | 33.0 | 825 | 0.5631 | 0.68 |
| 0.7598 | 34.0 | 850 | 0.5676 | 0.66 |
| 0.7598 | 35.0 | 875 | 0.5389 | 0.68 |
| 0.7598 | 36.0 | 900 | 0.6267 | 0.68 |
| 0.7598 | 37.0 | 925 | 0.6107 | 0.65 |
| 0.7598 | 38.0 | 950 | 0.5359 | 0.66 |
| 0.7598 | 39.0 | 975 | 0.5741 | 0.67 |
| 0.4266 | 40.0 | 1000 | 0.5928 | 0.69 |
| 0.4266 | 41.0 | 1025 | 0.5307 | 0.68 |
| 0.4266 | 42.0 | 1050 | 0.5909 | 0.66 |
| 0.4266 | 43.0 | 1075 | 0.5733 | 0.66 |
| 0.4266 | 44.0 | 1100 | 0.5561 | 0.66 |
| 0.4266 | 45.0 | 1125 | 0.5600 | 0.69 |
| 0.4266 | 46.0 | 1150 | 0.5228 | 0.66 |
| 0.4266 | 47.0 | 1175 | 0.5383 | 0.7 |
| 0.4266 | 48.0 | 1200 | 0.5643 | 0.69 |
| 0.4266 | 49.0 | 1225 | 0.5493 | 0.7 |
| 0.4266 | 50.0 | 1250 | 0.5576 | 0.7 |
| 0.4266 | 51.0 | 1275 | 0.5543 | 0.68 |
| 0.4266 | 52.0 | 1300 | 0.5615 | 0.69 |
| 0.4266 | 53.0 | 1325 | 0.5358 | 0.67 |
| 0.4266 | 54.0 | 1350 | 0.5405 | 0.69 |
| 0.4266 | 55.0 | 1375 | 0.5327 | 0.69 |
| 0.4266 | 56.0 | 1400 | 0.5645 | 0.67 |
| 0.4266 | 57.0 | 1425 | 0.5240 | 0.67 |
| 0.4266 | 58.0 | 1450 | 0.5402 | 0.67 |
| 0.4266 | 59.0 | 1475 | 0.5495 | 0.68 |
| 0.3249 | 60.0 | 1500 | 0.5624 | 0.66 |
| 0.3249 | 61.0 | 1525 | 0.5513 | 0.67 |
| 0.3249 | 62.0 | 1550 | 0.5537 | 0.68 |
| 0.3249 | 63.0 | 1575 | 0.5444 | 0.68 |
| 0.3249 | 64.0 | 1600 | 0.5553 | 0.68 |
| 0.3249 | 65.0 | 1625 | 0.5221 | 0.68 |
| 0.3249 | 66.0 | 1650 | 0.5136 | 0.68 |
| 0.3249 | 67.0 | 1675 | 0.5231 | 0.69 |
| 0.3249 | 68.0 | 1700 | 0.5305 | 0.69 |
| 0.3249 | 69.0 | 1725 | 0.5278 | 0.68 |
| 0.3249 | 70.0 | 1750 | 0.5440 | 0.66 |
| 0.3249 | 71.0 | 1775 | 0.5411 | 0.67 |
| 0.3249 | 72.0 | 1800 | 0.5346 | 0.69 |
| 0.3249 | 73.0 | 1825 | 0.5241 | 0.67 |
| 0.3249 | 74.0 | 1850 | 0.5425 | 0.67 |
| 0.3249 | 75.0 | 1875 | 0.5213 | 0.67 |
| 0.3249 | 76.0 | 1900 | 0.5405 | 0.66 |
| 0.3249 | 77.0 | 1925 | 0.5251 | 0.67 |
| 0.3249 | 78.0 | 1950 | 0.5300 | 0.67 |
| 0.3249 | 79.0 | 1975 | 0.5285 | 0.67 |
| 0.2946 | 80.0 | 2000 | 0.5294 | 0.67 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230826161128
|
dkqjrm
| 2023-08-26T08:42:15Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T07:11:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826161128'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826161128
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2753
- Accuracy: 0.71
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.02
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.3969 | 0.6 |
| No log | 2.0 | 50 | 0.4709 | 0.5 |
| No log | 3.0 | 75 | 0.3341 | 0.42 |
| No log | 4.0 | 100 | 0.3011 | 0.54 |
| No log | 5.0 | 125 | 0.3119 | 0.36 |
| No log | 6.0 | 150 | 0.3297 | 0.37 |
| No log | 7.0 | 175 | 0.2928 | 0.53 |
| No log | 8.0 | 200 | 0.3079 | 0.63 |
| No log | 9.0 | 225 | 0.2875 | 0.61 |
| No log | 10.0 | 250 | 0.2906 | 0.54 |
| No log | 11.0 | 275 | 0.2904 | 0.62 |
| No log | 12.0 | 300 | 0.2946 | 0.52 |
| No log | 13.0 | 325 | 0.2942 | 0.51 |
| No log | 14.0 | 350 | 0.2935 | 0.56 |
| No log | 15.0 | 375 | 0.2913 | 0.58 |
| No log | 16.0 | 400 | 0.2886 | 0.6 |
| No log | 17.0 | 425 | 0.2900 | 0.6 |
| No log | 18.0 | 450 | 0.2874 | 0.59 |
| No log | 19.0 | 475 | 0.2910 | 0.6 |
| 0.6674 | 20.0 | 500 | 0.2931 | 0.47 |
| 0.6674 | 21.0 | 525 | 0.2909 | 0.51 |
| 0.6674 | 22.0 | 550 | 0.2855 | 0.62 |
| 0.6674 | 23.0 | 575 | 0.2881 | 0.61 |
| 0.6674 | 24.0 | 600 | 0.2878 | 0.6 |
| 0.6674 | 25.0 | 625 | 0.2874 | 0.57 |
| 0.6674 | 26.0 | 650 | 0.2857 | 0.54 |
| 0.6674 | 27.0 | 675 | 0.2871 | 0.6 |
| 0.6674 | 28.0 | 700 | 0.2864 | 0.59 |
| 0.6674 | 29.0 | 725 | 0.2862 | 0.62 |
| 0.6674 | 30.0 | 750 | 0.2866 | 0.58 |
| 0.6674 | 31.0 | 775 | 0.2837 | 0.63 |
| 0.6674 | 32.0 | 800 | 0.2859 | 0.58 |
| 0.6674 | 33.0 | 825 | 0.2841 | 0.59 |
| 0.6674 | 34.0 | 850 | 0.2878 | 0.62 |
| 0.6674 | 35.0 | 875 | 0.2889 | 0.61 |
| 0.6674 | 36.0 | 900 | 0.2830 | 0.59 |
| 0.6674 | 37.0 | 925 | 0.2824 | 0.59 |
| 0.6674 | 38.0 | 950 | 0.2801 | 0.63 |
| 0.6674 | 39.0 | 975 | 0.2931 | 0.65 |
| 0.5477 | 40.0 | 1000 | 0.2788 | 0.64 |
| 0.5477 | 41.0 | 1025 | 0.2892 | 0.63 |
| 0.5477 | 42.0 | 1050 | 0.2937 | 0.58 |
| 0.5477 | 43.0 | 1075 | 0.2886 | 0.66 |
| 0.5477 | 44.0 | 1100 | 0.2842 | 0.62 |
| 0.5477 | 45.0 | 1125 | 0.2857 | 0.6 |
| 0.5477 | 46.0 | 1150 | 0.2834 | 0.62 |
| 0.5477 | 47.0 | 1175 | 0.2824 | 0.56 |
| 0.5477 | 48.0 | 1200 | 0.2866 | 0.65 |
| 0.5477 | 49.0 | 1225 | 0.2801 | 0.63 |
| 0.5477 | 50.0 | 1250 | 0.2851 | 0.62 |
| 0.5477 | 51.0 | 1275 | 0.2829 | 0.6 |
| 0.5477 | 52.0 | 1300 | 0.2900 | 0.59 |
| 0.5477 | 53.0 | 1325 | 0.2782 | 0.59 |
| 0.5477 | 54.0 | 1350 | 0.2793 | 0.59 |
| 0.5477 | 55.0 | 1375 | 0.2809 | 0.6 |
| 0.5477 | 56.0 | 1400 | 0.2815 | 0.64 |
| 0.5477 | 57.0 | 1425 | 0.2798 | 0.68 |
| 0.5477 | 58.0 | 1450 | 0.2831 | 0.67 |
| 0.5477 | 59.0 | 1475 | 0.2795 | 0.66 |
| 0.4601 | 60.0 | 1500 | 0.2747 | 0.68 |
| 0.4601 | 61.0 | 1525 | 0.2725 | 0.73 |
| 0.4601 | 62.0 | 1550 | 0.2840 | 0.66 |
| 0.4601 | 63.0 | 1575 | 0.2739 | 0.67 |
| 0.4601 | 64.0 | 1600 | 0.2796 | 0.69 |
| 0.4601 | 65.0 | 1625 | 0.2782 | 0.65 |
| 0.4601 | 66.0 | 1650 | 0.2757 | 0.7 |
| 0.4601 | 67.0 | 1675 | 0.2759 | 0.69 |
| 0.4601 | 68.0 | 1700 | 0.2779 | 0.67 |
| 0.4601 | 69.0 | 1725 | 0.2822 | 0.67 |
| 0.4601 | 70.0 | 1750 | 0.2813 | 0.65 |
| 0.4601 | 71.0 | 1775 | 0.2818 | 0.68 |
| 0.4601 | 72.0 | 1800 | 0.2865 | 0.69 |
| 0.4601 | 73.0 | 1825 | 0.2770 | 0.71 |
| 0.4601 | 74.0 | 1850 | 0.2822 | 0.69 |
| 0.4601 | 75.0 | 1875 | 0.2783 | 0.71 |
| 0.4601 | 76.0 | 1900 | 0.2764 | 0.71 |
| 0.4601 | 77.0 | 1925 | 0.2772 | 0.69 |
| 0.4601 | 78.0 | 1950 | 0.2759 | 0.7 |
| 0.4601 | 79.0 | 1975 | 0.2751 | 0.72 |
| 0.4329 | 80.0 | 2000 | 0.2753 | 0.71 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_char_cv12_pad_lob100_low__0010
|
bigmorning
| 2023-08-26T08:39:44Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-26T08:39:36Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_char_cv12_pad_lob100_low__0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_char_cv12_pad_lob100_low__0010
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1749
- Train Accuracy: 0.1054
- Train Wermet: 2.8719
- Validation Loss: 0.3063
- Validation Accuracy: 0.0632
- Validation Wermet: 9.0257
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 0.3330 | 0.0999 | 1.7359 | 0.3779 | 0.0615 | 4.7471 | 0 |
| 0.3093 | 0.1007 | 2.0563 | 0.3652 | 0.0618 | 7.2181 | 1 |
| 0.2869 | 0.1015 | 2.0654 | 0.3539 | 0.0620 | 8.6857 | 2 |
| 0.2672 | 0.1022 | 2.1925 | 0.3443 | 0.0623 | 8.0906 | 3 |
| 0.2488 | 0.1028 | 2.3286 | 0.3305 | 0.0626 | 9.1756 | 4 |
| 0.2316 | 0.1034 | 2.4212 | 0.3300 | 0.0626 | 8.1427 | 5 |
| 0.2163 | 0.1039 | 2.5012 | 0.3183 | 0.0629 | 8.3043 | 6 |
| 0.2018 | 0.1045 | 2.7267 | 0.3109 | 0.0631 | 9.5329 | 7 |
| 0.1878 | 0.1050 | 2.7034 | 0.3053 | 0.0632 | 7.9014 | 8 |
| 0.1749 | 0.1054 | 2.8719 | 0.3063 | 0.0632 | 9.0257 | 9 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
quoctrungle/llama2-qlora-finetunined-openassistant-guanaco
|
quoctrungle
| 2023-08-26T08:37:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-26T08:37:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the loading sketch after this list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
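A sketch of how this configuration can be reproduced when loading the adapter (the base model id is an assumption inferred from the repository name; the card does not state it):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Base model id is assumed; replace it with the model the adapter was actually trained on.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "quoctrungle/llama2-qlora-finetunined-openassistant-guanaco")
```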
### Framework versions
- PEFT 0.6.0.dev0
|
dkqjrm/20230826161130
|
dkqjrm
| 2023-08-26T08:29:42Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T07:11:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826161130'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826161130
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1582
- Accuracy: 0.39
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.02
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.6392 | 0.43 |
| No log | 2.0 | 50 | 0.1729 | 0.41 |
| No log | 3.0 | 75 | 0.1658 | 0.61 |
| No log | 4.0 | 100 | 0.1579 | 0.57 |
| No log | 5.0 | 125 | 0.1678 | 0.4 |
| No log | 6.0 | 150 | 0.1583 | 0.55 |
| No log | 7.0 | 175 | 0.1650 | 0.6 |
| No log | 8.0 | 200 | 0.1643 | 0.62 |
| No log | 9.0 | 225 | 0.1594 | 0.48 |
| No log | 10.0 | 250 | 0.1572 | 0.61 |
| No log | 11.0 | 275 | 0.1660 | 0.4 |
| No log | 12.0 | 300 | 0.1570 | 0.63 |
| No log | 13.0 | 325 | 0.1589 | 0.51 |
| No log | 14.0 | 350 | 0.1581 | 0.42 |
| No log | 15.0 | 375 | 0.1582 | 0.5 |
| No log | 16.0 | 400 | 0.1576 | 0.53 |
| No log | 17.0 | 425 | 0.1580 | 0.52 |
| No log | 18.0 | 450 | 0.1581 | 0.55 |
| No log | 19.0 | 475 | 0.1583 | 0.45 |
| 0.621 | 20.0 | 500 | 0.1606 | 0.52 |
| 0.621 | 21.0 | 525 | 0.1583 | 0.52 |
| 0.621 | 22.0 | 550 | 0.1573 | 0.49 |
| 0.621 | 23.0 | 575 | 0.1582 | 0.43 |
| 0.621 | 24.0 | 600 | 0.1581 | 0.53 |
| 0.621 | 25.0 | 625 | 0.1582 | 0.49 |
| 0.621 | 26.0 | 650 | 0.1582 | 0.5 |
| 0.621 | 27.0 | 675 | 0.1583 | 0.53 |
| 0.621 | 28.0 | 700 | 0.1586 | 0.47 |
| 0.621 | 29.0 | 725 | 0.1585 | 0.48 |
| 0.621 | 30.0 | 750 | 0.1584 | 0.46 |
| 0.621 | 31.0 | 775 | 0.1582 | 0.55 |
| 0.621 | 32.0 | 800 | 0.1582 | 0.53 |
| 0.621 | 33.0 | 825 | 0.1583 | 0.51 |
| 0.621 | 34.0 | 850 | 0.1585 | 0.39 |
| 0.621 | 35.0 | 875 | 0.1582 | 0.69 |
| 0.621 | 36.0 | 900 | 0.1583 | 0.48 |
| 0.621 | 37.0 | 925 | 0.1582 | 0.61 |
| 0.621 | 38.0 | 950 | 0.1580 | 0.63 |
| 0.621 | 39.0 | 975 | 0.1581 | 0.47 |
| 0.4969 | 40.0 | 1000 | 0.1582 | 0.49 |
| 0.4969 | 41.0 | 1025 | 0.1583 | 0.49 |
| 0.4969 | 42.0 | 1050 | 0.1583 | 0.47 |
| 0.4969 | 43.0 | 1075 | 0.1581 | 0.52 |
| 0.4969 | 44.0 | 1100 | 0.1584 | 0.47 |
| 0.4969 | 45.0 | 1125 | 0.1584 | 0.35 |
| 0.4969 | 46.0 | 1150 | 0.1582 | 0.56 |
| 0.4969 | 47.0 | 1175 | 0.1582 | 0.54 |
| 0.4969 | 48.0 | 1200 | 0.1582 | 0.53 |
| 0.4969 | 49.0 | 1225 | 0.1582 | 0.56 |
| 0.4969 | 50.0 | 1250 | 0.1582 | 0.54 |
| 0.4969 | 51.0 | 1275 | 0.1582 | 0.57 |
| 0.4969 | 52.0 | 1300 | 0.1582 | 0.52 |
| 0.4969 | 53.0 | 1325 | 0.1581 | 0.59 |
| 0.4969 | 54.0 | 1350 | 0.1582 | 0.55 |
| 0.4969 | 55.0 | 1375 | 0.1585 | 0.41 |
| 0.4969 | 56.0 | 1400 | 0.1584 | 0.45 |
| 0.4969 | 57.0 | 1425 | 0.1583 | 0.54 |
| 0.4969 | 58.0 | 1450 | 0.1583 | 0.41 |
| 0.4969 | 59.0 | 1475 | 0.1583 | 0.42 |
| 0.4428 | 60.0 | 1500 | 0.1583 | 0.4 |
| 0.4428 | 61.0 | 1525 | 0.1583 | 0.59 |
| 0.4428 | 62.0 | 1550 | 0.1582 | 0.65 |
| 0.4428 | 63.0 | 1575 | 0.1581 | 0.64 |
| 0.4428 | 64.0 | 1600 | 0.1581 | 0.59 |
| 0.4428 | 65.0 | 1625 | 0.1583 | 0.42 |
| 0.4428 | 66.0 | 1650 | 0.1582 | 0.5 |
| 0.4428 | 67.0 | 1675 | 0.1583 | 0.43 |
| 0.4428 | 68.0 | 1700 | 0.1584 | 0.39 |
| 0.4428 | 69.0 | 1725 | 0.1583 | 0.5 |
| 0.4428 | 70.0 | 1750 | 0.1583 | 0.49 |
| 0.4428 | 71.0 | 1775 | 0.1583 | 0.48 |
| 0.4428 | 72.0 | 1800 | 0.1584 | 0.29 |
| 0.4428 | 73.0 | 1825 | 0.1583 | 0.4 |
| 0.4428 | 74.0 | 1850 | 0.1582 | 0.59 |
| 0.4428 | 75.0 | 1875 | 0.1582 | 0.59 |
| 0.4428 | 76.0 | 1900 | 0.1582 | 0.53 |
| 0.4428 | 77.0 | 1925 | 0.1583 | 0.33 |
| 0.4428 | 78.0 | 1950 | 0.1583 | 0.35 |
| 0.4428 | 79.0 | 1975 | 0.1583 | 0.36 |
| 0.4082 | 80.0 | 2000 | 0.1582 | 0.39 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bedus-creation/eng-limbu-model-001
|
bedus-creation
| 2023-08-26T08:24:12Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-26T08:02:40Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: bedus-creation/eng-limbu-model-001
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bedus-creation/eng-limbu-model-001
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5808
- Validation Loss: 0.4900
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
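A very rough generation sketch; the translation direction (English to Limbu is suggested only by the model name) and the input format are assumptions, since the card does not document them:
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo = "bedus-creation/eng-limbu-model-001"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo)

# Plain text input with no task prefix (an assumption about how the model was trained).
input_ids = tokenizer("Good morning", return_tensors="tf").input_ids
output_ids = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```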
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.7083 | 0.5906 | 0 |
| 0.6328 | 0.5323 | 1 |
| 0.5808 | 0.4900 | 2 |
### Framework versions
- Transformers 4.32.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
rishabh063/lora-trained-xl-monkey2
|
rishabh063
| 2023-08-26T08:14:03Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-26T07:33:58Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of snksnk Monkey
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - rishabh063/lora-trained-xl-monkey2
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of snksnk Monkey using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
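A minimal inference sketch using the base model, the training VAE mentioned above, and these LoRA weights (scheduler and sampling settings are just examples):
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rishabh063/lora-trained-xl-monkey2")

image = pipe("a photo of snksnk Monkey", num_inference_steps=30).images[0]
image.save("snksnk_monkey.png")
```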
|
rks33/bert-finetuned-squad
|
rks33
| 2023-08-26T08:11:57Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-25T17:05:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
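As a usage illustration, a minimal extractive question-answering sketch with the `transformers` pipeline (question and context are placeholder examples):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="rks33/bert-finetuned-squad")
result = qa(
    question="What was the model fine-tuned on?",
    context="This checkpoint is a bert-base-cased model fine-tuned on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```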
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jlpan/starcoder-js2py-snippet2
|
jlpan
| 2023-08-26T07:33:30Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:bigcode/starcoder",
"base_model:finetune:bigcode/starcoder",
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-08-26T06:09:31Z |
---
license: bigcode-openrail-m
base_model: bigcode/starcoder
tags:
- generated_from_trainer
model-index:
- name: starcoder-js2py-snippet2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# starcoder-js2py-snippet2
This model is a fine-tuned version of [bigcode/starcoder](https://huggingface.co/bigcode/starcoder) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1941
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2152 | 0.17 | 25 | 0.2018 |
| 0.2156 | 0.33 | 50 | 0.1978 |
| 0.2093 | 0.5 | 75 | 0.1960 |
| 0.2013 | 0.67 | 100 | 0.1954 |
| 0.1836 | 1.02 | 125 | 0.1949 |
| 0.2036 | 1.19 | 150 | 0.1941 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
chunwoolee0/mt5_small_wmt16_de_en
|
chunwoolee0
| 2023-08-26T07:33:15Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-23T15:30:29Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- rouge
- sacrebleu
model-index:
- name: mt5_small_wmt16_de_en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
config: de-en
split: validation
args: de-en
metrics:
- name: Rouge1
type: rouge
value: 0.3666
- name: Sacrebleu
type: sacrebleu
value: 6.4622
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5_small_wmt16_de_en
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4612
- Rouge1: 0.3666
- Rouge2: 0.147
- Rougel: 0.3362
- Sacrebleu: 6.4622
## Model description
Multilingual T5 (mT5) is a massively multilingual pretrained text-to-text transformer model,
trained following a similar recipe as T5.
## Intended uses & limitations
This model was trained as an exercise to become familiar with mT5, with the eventual goal of using it for English-to-Korean translation.
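A minimal translation sketch; the direction (German to English) is assumed from the wmt16 `de-en` configuration used for fine-tuning:
```python
from transformers import pipeline

translator = pipeline("translation", model="chunwoolee0/mt5_small_wmt16_de_en")
print(translator("Der Zug kommt um zehn Uhr an.", max_length=64)[0]["translation_text"])
```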
## Training and evaluation data
This work was done as an exercise toward English-Korean translation, so only a very small part of the very large original dataset was selected for training.
The translation quality is therefore not expected to be very good.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Sacrebleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.3059 | 1.6 | 500 | 2.5597 | 0.3398 | 0.1261 | 0.3068 | 5.5524 |
| 2.4093 | 3.2 | 1000 | 2.4996 | 0.3609 | 0.144 | 0.3304 | 6.2002 |
| 2.2322 | 4.8 | 1500 | 2.4612 | 0.3666 | 0.147 | 0.3362 | 6.4622 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dt-and-vanilla-ardt/ardt-vanilla-arrl_train_walker2d_high-2608_0712-33
|
dt-and-vanilla-ardt
| 2023-08-26T07:32:30Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T06:14:29Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-arrl_train_walker2d_high-2608_0712-33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-arrl_train_walker2d_high-2608_0712-33
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
GrantW65/q-taxi-v3
|
GrantW65
| 2023-08-26T07:20:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-26T07:20:52Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# Assumes `gym` is imported and `load_from_hub` (the Deep RL course helper that downloads and unpickles the Q-table) is defined
model = load_from_hub(repo_id="GrantW65/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
geekedits/absoluterealityimpainting
|
geekedits
| 2023-08-26T07:17:15Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-26T06:54:13Z |
---
license: bigscience-openrail-m
---
|
Andyrasika/donut-base-sroie
|
Andyrasika
| 2023-08-26T06:55:46Z | 46 | 1 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:darentang/sroie",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-08-26T05:14:52Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- darentang/sroie
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
Donut 🍩, Document understanding transformer, is a new method of document understanding
that utilizes an OCR-free end-to-end Transformer model. Donut does not require off-the-shelf OCR
engines/APIs, yet it shows state-of-the-art performances on various visual document understanding tasks,
such as visual document classification or information extraction (a.k.a. document parsing).
## Intended uses & limitations
A basic Donut model fine-tuned for key information extraction on the SROIE receipt dataset.
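A minimal inference sketch is shown below; the exact task prompt token depends on how training was configured, so `<s_sroie>` here is only an assumption (check the tokenizer's special tokens for the real value):
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo_id = "Andyrasika/donut-base-sroie"
processor = DonutProcessor.from_pretrained(repo_id)
model = VisionEncoderDecoderModel.from_pretrained(repo_id)

# Any scanned receipt image; "receipt.png" is just a placeholder path
image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Assumed task prompt token for the SROIE fine-tune
task_prompt = "<s_sroie>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs)[0])
```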
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
chunwoolee0/mt5_small_kde4_en_ko
|
chunwoolee0
| 2023-08-26T06:13:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-24T07:06:08Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
datasets:
- kde4
metrics:
- rouge
- sacrebleu
model-index:
- name: mt5_small_kde4_en_ko
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-ko
split: train
args: en-ko
metrics:
- name: Rouge1
type: rouge
value: 0.0832
- name: Sacrebleu
type: sacrebleu
value: 3.3559
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5_small_kde4_en_ko
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1644
- Rouge1: 0.0832
- Rouge2: 0.0195
- Rougel: 0.0826
- Sacrebleu: 3.3559
## Model description
This model attempts English-to-Korean translation using Google's mT5 multilingual model.
## Intended uses & limitations
Translation from English to Korean
## Usage
You can use this model directly with a translation pipeline:
```python
>>> from transformers import pipeline
>>> translator = pipeline('translation', model='chunwoolee0/ke_t5_base_bongsoo_en_ko')
>>> translator("Let us go for a walk after lunch.")
[{'translation_text': '오류를 방문하십시오.'}]
```

The translation fails completely.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Sacrebleu |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 15.8735 | 0.46 | 500 | 6.5322 | 0.0101 | 0.0004 | 0.0102 | 0.464 |
| 7.183 | 0.93 | 1000 | 4.2298 | 0.0203 | 0.0012 | 0.02 | 0.6102 |
| 5.4447 | 1.39 | 1500 | 3.5600 | 0.0399 | 0.005 | 0.0396 | 1.5798 |
| 4.8372 | 1.85 | 2000 | 3.3343 | 0.0537 | 0.0088 | 0.0533 | 3.0115 |
| 4.5579 | 2.32 | 2500 | 3.2131 | 0.0732 | 0.016 | 0.0729 | 3.3743 |
| 4.4532 | 2.78 | 3000 | 3.1644 | 0.0832 | 0.0195 | 0.0826 | 3.3559 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jlpan/starcoder-js2py-snippet1
|
jlpan
| 2023-08-26T06:00:15Z | 3 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:bigcode/starcoder",
"base_model:adapter:bigcode/starcoder",
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-08-26T02:55:19Z |
---
license: bigcode-openrail-m
base_model: bigcode/starcoder
tags:
- generated_from_trainer
model-index:
- name: starcoder-js2py-snippet1
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# starcoder-js2py-snippet1
This model is a fine-tuned version of [bigcode/starcoder](https://huggingface.co/bigcode/starcoder) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2059
## Model description
More information needed
## Intended uses & limitations
More information needed
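No usage example is provided; below is a minimal sketch for loading the adapter with PEFT, assuming this repository holds a LoRA-style adapter for the gated `bigcode/starcoder` base model (the prompt format is likewise an assumption):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigcode/starcoder"                 # gated base model; requires accepting its license
adapter_id = "jlpan/starcoder-js2py-snippet1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

# Attach the fine-tuned adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "# Translate the following JavaScript to Python:\nfunction add(a, b) { return a + b; }\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```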
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.3304 | 0.17 | 25 | 0.5020 |
| 0.3296 | 0.33 | 50 | 0.2289 |
| 0.2341 | 0.5 | 75 | 0.2134 |
| 0.2193 | 0.67 | 100 | 0.2088 |
| 0.1989 | 1.02 | 125 | 0.2066 |
| 0.2187 | 1.19 | 150 | 0.2059 |
### Framework versions
- PEFT 0.5.0.dev0
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
matgu23/rlst
|
matgu23
| 2023-08-26T05:52:56Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-26T05:47:59Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### rlst Dreambooth model trained by matgu23 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
dkqjrm/20230826130711
|
dkqjrm
| 2023-08-26T05:26:14Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T04:07:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826130711'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826130711
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2867
- Accuracy: 0.62
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.2952 | 0.64 |
| No log | 2.0 | 50 | 0.2895 | 0.57 |
| No log | 3.0 | 75 | 0.2922 | 0.61 |
| No log | 4.0 | 100 | 0.2938 | 0.64 |
| No log | 5.0 | 125 | 0.2885 | 0.63 |
| No log | 6.0 | 150 | 0.2945 | 0.48 |
| No log | 7.0 | 175 | 0.2860 | 0.67 |
| No log | 8.0 | 200 | 0.2888 | 0.66 |
| No log | 9.0 | 225 | 0.2894 | 0.51 |
| No log | 10.0 | 250 | 0.2903 | 0.56 |
| No log | 11.0 | 275 | 0.2868 | 0.66 |
| No log | 12.0 | 300 | 0.2880 | 0.66 |
| No log | 13.0 | 325 | 0.2947 | 0.54 |
| No log | 14.0 | 350 | 0.2957 | 0.64 |
| No log | 15.0 | 375 | 0.2877 | 0.66 |
| No log | 16.0 | 400 | 0.2865 | 0.68 |
| No log | 17.0 | 425 | 0.2850 | 0.69 |
| No log | 18.0 | 450 | 0.2846 | 0.66 |
| No log | 19.0 | 475 | 0.2911 | 0.59 |
| 0.4684 | 20.0 | 500 | 0.2961 | 0.64 |
| 0.4684 | 21.0 | 525 | 0.2872 | 0.63 |
| 0.4684 | 22.0 | 550 | 0.2880 | 0.64 |
| 0.4684 | 23.0 | 575 | 0.2951 | 0.51 |
| 0.4684 | 24.0 | 600 | 0.2897 | 0.64 |
| 0.4684 | 25.0 | 625 | 0.2884 | 0.64 |
| 0.4684 | 26.0 | 650 | 0.2895 | 0.64 |
| 0.4684 | 27.0 | 675 | 0.2872 | 0.61 |
| 0.4684 | 28.0 | 700 | 0.2890 | 0.64 |
| 0.4684 | 29.0 | 725 | 0.2887 | 0.66 |
| 0.4684 | 30.0 | 750 | 0.2886 | 0.63 |
| 0.4684 | 31.0 | 775 | 0.2875 | 0.6 |
| 0.4684 | 32.0 | 800 | 0.2882 | 0.65 |
| 0.4684 | 33.0 | 825 | 0.2886 | 0.58 |
| 0.4684 | 34.0 | 850 | 0.2970 | 0.64 |
| 0.4684 | 35.0 | 875 | 0.2875 | 0.59 |
| 0.4684 | 36.0 | 900 | 0.2888 | 0.63 |
| 0.4684 | 37.0 | 925 | 0.2868 | 0.63 |
| 0.4684 | 38.0 | 950 | 0.2863 | 0.64 |
| 0.4684 | 39.0 | 975 | 0.2911 | 0.63 |
| 0.4634 | 40.0 | 1000 | 0.2867 | 0.63 |
| 0.4634 | 41.0 | 1025 | 0.2936 | 0.54 |
| 0.4634 | 42.0 | 1050 | 0.2965 | 0.6 |
| 0.4634 | 43.0 | 1075 | 0.2872 | 0.62 |
| 0.4634 | 44.0 | 1100 | 0.2862 | 0.65 |
| 0.4634 | 45.0 | 1125 | 0.2871 | 0.65 |
| 0.4634 | 46.0 | 1150 | 0.2914 | 0.63 |
| 0.4634 | 47.0 | 1175 | 0.2925 | 0.64 |
| 0.4634 | 48.0 | 1200 | 0.2883 | 0.64 |
| 0.4634 | 49.0 | 1225 | 0.2896 | 0.65 |
| 0.4634 | 50.0 | 1250 | 0.2866 | 0.64 |
| 0.4634 | 51.0 | 1275 | 0.2857 | 0.64 |
| 0.4634 | 52.0 | 1300 | 0.2892 | 0.64 |
| 0.4634 | 53.0 | 1325 | 0.2861 | 0.65 |
| 0.4634 | 54.0 | 1350 | 0.2861 | 0.63 |
| 0.4634 | 55.0 | 1375 | 0.2872 | 0.65 |
| 0.4634 | 56.0 | 1400 | 0.2861 | 0.64 |
| 0.4634 | 57.0 | 1425 | 0.2865 | 0.65 |
| 0.4634 | 58.0 | 1450 | 0.2880 | 0.63 |
| 0.4634 | 59.0 | 1475 | 0.2898 | 0.63 |
| 0.4583 | 60.0 | 1500 | 0.2900 | 0.63 |
| 0.4583 | 61.0 | 1525 | 0.2896 | 0.64 |
| 0.4583 | 62.0 | 1550 | 0.2886 | 0.63 |
| 0.4583 | 63.0 | 1575 | 0.2888 | 0.63 |
| 0.4583 | 64.0 | 1600 | 0.2891 | 0.64 |
| 0.4583 | 65.0 | 1625 | 0.2874 | 0.63 |
| 0.4583 | 66.0 | 1650 | 0.2875 | 0.62 |
| 0.4583 | 67.0 | 1675 | 0.2882 | 0.62 |
| 0.4583 | 68.0 | 1700 | 0.2863 | 0.62 |
| 0.4583 | 69.0 | 1725 | 0.2867 | 0.63 |
| 0.4583 | 70.0 | 1750 | 0.2865 | 0.64 |
| 0.4583 | 71.0 | 1775 | 0.2863 | 0.64 |
| 0.4583 | 72.0 | 1800 | 0.2862 | 0.64 |
| 0.4583 | 73.0 | 1825 | 0.2864 | 0.64 |
| 0.4583 | 74.0 | 1850 | 0.2862 | 0.64 |
| 0.4583 | 75.0 | 1875 | 0.2866 | 0.64 |
| 0.4583 | 76.0 | 1900 | 0.2868 | 0.63 |
| 0.4583 | 77.0 | 1925 | 0.2866 | 0.63 |
| 0.4583 | 78.0 | 1950 | 0.2867 | 0.63 |
| 0.4583 | 79.0 | 1975 | 0.2867 | 0.62 |
| 0.4597 | 80.0 | 2000 | 0.2867 | 0.62 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
leiniscool/minarealvoice
|
leiniscool
| 2023-08-26T05:18:10Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-26T05:16:00Z |
---
license: bigscience-openrail-m
---
|
MStarn/q-FrozenLake-v1-4x4-noSlippery
|
MStarn
| 2023-08-26T05:14:36Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-26T04:46:44Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# Assumes `gym` is imported and `load_from_hub` (the Deep RL course helper that downloads and unpickles the Q-table) is defined
model = load_from_hub(repo_id="MStarn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
kldsnflewf/llama2-7b-qlora-finetunined-openassistant-guanaco
|
kldsnflewf
| 2023-08-26T05:09:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-24T19:32:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
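For reference, the settings above correspond roughly to the following `BitsAndBytesConfig`; the base model name is not recorded in this card, so `meta-llama/Llama-2-7b-hf` below is only a guess based on the repository name:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Assumed base model; the adapter was then trained on top of this quantized model
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
```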
### Framework versions
- PEFT 0.5.0
|
dkqjrm/20230826123019
|
dkqjrm
| 2023-08-26T05:01:06Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T03:30:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826123019'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826123019
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5900
- Accuracy: 0.65
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.6011 | 0.66 |
| No log | 2.0 | 50 | 0.5991 | 0.65 |
| No log | 3.0 | 75 | 0.5983 | 0.65 |
| No log | 4.0 | 100 | 0.6063 | 0.65 |
| No log | 5.0 | 125 | 0.5973 | 0.65 |
| No log | 6.0 | 150 | 0.6049 | 0.65 |
| No log | 7.0 | 175 | 0.6031 | 0.65 |
| No log | 8.0 | 200 | 0.6001 | 0.65 |
| No log | 9.0 | 225 | 0.5969 | 0.64 |
| No log | 10.0 | 250 | 0.6007 | 0.65 |
| No log | 11.0 | 275 | 0.6016 | 0.65 |
| No log | 12.0 | 300 | 0.5992 | 0.65 |
| No log | 13.0 | 325 | 0.5968 | 0.65 |
| No log | 14.0 | 350 | 0.5968 | 0.65 |
| No log | 15.0 | 375 | 0.6000 | 0.65 |
| No log | 16.0 | 400 | 0.6000 | 0.65 |
| No log | 17.0 | 425 | 0.5883 | 0.66 |
| No log | 18.0 | 450 | 0.5920 | 0.65 |
| No log | 19.0 | 475 | 0.6035 | 0.62 |
| 0.6519 | 20.0 | 500 | 0.6075 | 0.64 |
| 0.6519 | 21.0 | 525 | 0.5919 | 0.65 |
| 0.6519 | 22.0 | 550 | 0.5951 | 0.63 |
| 0.6519 | 23.0 | 575 | 0.6037 | 0.61 |
| 0.6519 | 24.0 | 600 | 0.6058 | 0.62 |
| 0.6519 | 25.0 | 625 | 0.5944 | 0.65 |
| 0.6519 | 26.0 | 650 | 0.5938 | 0.65 |
| 0.6519 | 27.0 | 675 | 0.5909 | 0.66 |
| 0.6519 | 28.0 | 700 | 0.5914 | 0.65 |
| 0.6519 | 29.0 | 725 | 0.5902 | 0.66 |
| 0.6519 | 30.0 | 750 | 0.5906 | 0.66 |
| 0.6519 | 31.0 | 775 | 0.5936 | 0.65 |
| 0.6519 | 32.0 | 800 | 0.5960 | 0.66 |
| 0.6519 | 33.0 | 825 | 0.5953 | 0.65 |
| 0.6519 | 34.0 | 850 | 0.5970 | 0.65 |
| 0.6519 | 35.0 | 875 | 0.5937 | 0.65 |
| 0.6519 | 36.0 | 900 | 0.5954 | 0.64 |
| 0.6519 | 37.0 | 925 | 0.5993 | 0.63 |
| 0.6519 | 38.0 | 950 | 0.5905 | 0.65 |
| 0.6519 | 39.0 | 975 | 0.5898 | 0.65 |
| 0.6395 | 40.0 | 1000 | 0.5947 | 0.65 |
| 0.6395 | 41.0 | 1025 | 0.5966 | 0.64 |
| 0.6395 | 42.0 | 1050 | 0.5953 | 0.65 |
| 0.6395 | 43.0 | 1075 | 0.5968 | 0.64 |
| 0.6395 | 44.0 | 1100 | 0.5934 | 0.65 |
| 0.6395 | 45.0 | 1125 | 0.5948 | 0.66 |
| 0.6395 | 46.0 | 1150 | 0.5958 | 0.65 |
| 0.6395 | 47.0 | 1175 | 0.5928 | 0.65 |
| 0.6395 | 48.0 | 1200 | 0.5922 | 0.65 |
| 0.6395 | 49.0 | 1225 | 0.5929 | 0.65 |
| 0.6395 | 50.0 | 1250 | 0.5967 | 0.64 |
| 0.6395 | 51.0 | 1275 | 0.5908 | 0.65 |
| 0.6395 | 52.0 | 1300 | 0.5930 | 0.66 |
| 0.6395 | 53.0 | 1325 | 0.5910 | 0.65 |
| 0.6395 | 54.0 | 1350 | 0.5931 | 0.65 |
| 0.6395 | 55.0 | 1375 | 0.5900 | 0.66 |
| 0.6395 | 56.0 | 1400 | 0.5925 | 0.65 |
| 0.6395 | 57.0 | 1425 | 0.5938 | 0.66 |
| 0.6395 | 58.0 | 1450 | 0.5963 | 0.65 |
| 0.6395 | 59.0 | 1475 | 0.5955 | 0.64 |
| 0.6331 | 60.0 | 1500 | 0.5935 | 0.65 |
| 0.6331 | 61.0 | 1525 | 0.5937 | 0.66 |
| 0.6331 | 62.0 | 1550 | 0.5924 | 0.65 |
| 0.6331 | 63.0 | 1575 | 0.5909 | 0.65 |
| 0.6331 | 64.0 | 1600 | 0.5891 | 0.65 |
| 0.6331 | 65.0 | 1625 | 0.5881 | 0.65 |
| 0.6331 | 66.0 | 1650 | 0.5884 | 0.65 |
| 0.6331 | 67.0 | 1675 | 0.5893 | 0.65 |
| 0.6331 | 68.0 | 1700 | 0.5900 | 0.65 |
| 0.6331 | 69.0 | 1725 | 0.5908 | 0.65 |
| 0.6331 | 70.0 | 1750 | 0.5912 | 0.65 |
| 0.6331 | 71.0 | 1775 | 0.5914 | 0.65 |
| 0.6331 | 72.0 | 1800 | 0.5901 | 0.65 |
| 0.6331 | 73.0 | 1825 | 0.5898 | 0.65 |
| 0.6331 | 74.0 | 1850 | 0.5896 | 0.65 |
| 0.6331 | 75.0 | 1875 | 0.5905 | 0.65 |
| 0.6331 | 76.0 | 1900 | 0.5901 | 0.65 |
| 0.6331 | 77.0 | 1925 | 0.5901 | 0.65 |
| 0.6331 | 78.0 | 1950 | 0.5900 | 0.65 |
| 0.6331 | 79.0 | 1975 | 0.5900 | 0.65 |
| 0.6276 | 80.0 | 2000 | 0.5900 | 0.65 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mangostin2010/KangLuda
|
mangostin2010
| 2023-08-26T04:49:14Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-26T04:49:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
TFMUNIR/distilbert-base-uncased-finetuned-emotion-movies-186k
|
TFMUNIR
| 2023-08-26T04:42:52Z | 106 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-14T22:00:46Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-movies-186k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-movies-186k
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a self-collected dataset of 186k movie reviews with emotion labels, drawn from 1,150 movies on TMDB.
It achieves the following results on the evaluation set:
- Loss: 0.3572
- Accuracy: 0.8635
- F1: 0.8637
## Model description
The model classifies into the following emotions:
- 'LABEL_0': 'sadness'
- 'LABEL_1': 'joy'
- 'LABEL_2': 'love'
- 'LABEL_3': 'anger'
- 'LABEL_4': 'fear'
- 'LABEL_5': 'surprise'
## Intended uses & limitations
Academic
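A quick, hedged sketch of how the classifier can be called (labels map to the emotions listed above):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="TFMUNIR/distilbert-base-uncased-finetuned-emotion-movies-186k",
)

# Returns the top label (LABEL_0..LABEL_5), which maps to the emotion names above
print(classifier("I couldn't stop smiling through the whole movie."))
```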
## Training and evaluation data
The model was trained on a dataset (186k rows) of movie reviews with emotion labels from 1,150 movies on TMDB, with 20% held out for testing.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.4956 | 1.0 | 5828 | 0.3770 | 0.8531 | 0.8513 |
| 0.3035 | 2.0 | 11656 | 0.3572 | 0.8635 | 0.8637 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
gruber/e5-small-v2-ggml
|
gruber
| 2023-08-26T04:41:34Z | 0 | 0 | null |
[
"bert",
"mteb",
"bert.cpp",
"ggml",
"sentence-similarity",
"en",
"arxiv:2212.03533",
"license:mit",
"region:us"
] |
sentence-similarity
| 2023-08-26T03:01:48Z |
---
license: mit
language:
- en
pipeline_tag: sentence-similarity
tags:
- bert
- mteb
- bert.cpp
- ggml
---
# Model details
This repository contains the weights of [intfloat/e5-small-v2](https://huggingface.co/intfloat/e5-small-v2) converted to **GGML** for use with the [bert.cpp backend](https://github.com/skeskinen/bert.cpp).
> - [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
> - Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
> - This model has 12 layers and the embedding size is 384.
---
## FAQ
**1. Do I need to add the prefix "query: " and "passage: " to input texts?**
Yes, this is how the model is trained, otherwise you will see a performance degradation.
Here are some rules of thumb:
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
**2. Why are my reproduced results slightly different from reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why does the cosine similarity scores distribute around 0.7 to 1.0?**
This is a known and expected behavior as we use a low temperature 0.01 for InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens.
|
dkqjrm/20230826121217
|
dkqjrm
| 2023-08-26T04:30:51Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T03:12:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826121217'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826121217
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4150
- Accuracy: 0.63
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.4146 | 0.66 |
| No log | 2.0 | 50 | 0.4116 | 0.66 |
| No log | 3.0 | 75 | 0.4139 | 0.66 |
| No log | 4.0 | 100 | 0.4170 | 0.64 |
| No log | 5.0 | 125 | 0.4182 | 0.65 |
| No log | 6.0 | 150 | 0.4208 | 0.57 |
| No log | 7.0 | 175 | 0.4115 | 0.66 |
| No log | 8.0 | 200 | 0.4157 | 0.66 |
| No log | 9.0 | 225 | 0.4229 | 0.64 |
| No log | 10.0 | 250 | 0.4205 | 0.65 |
| No log | 11.0 | 275 | 0.4178 | 0.64 |
| No log | 12.0 | 300 | 0.4131 | 0.67 |
| No log | 13.0 | 325 | 0.4146 | 0.65 |
| No log | 14.0 | 350 | 0.4202 | 0.63 |
| No log | 15.0 | 375 | 0.4331 | 0.62 |
| No log | 16.0 | 400 | 0.4120 | 0.66 |
| No log | 17.0 | 425 | 0.4144 | 0.63 |
| No log | 18.0 | 450 | 0.4182 | 0.64 |
| No log | 19.0 | 475 | 0.4184 | 0.59 |
| 0.5392 | 20.0 | 500 | 0.4161 | 0.65 |
| 0.5392 | 21.0 | 525 | 0.4185 | 0.64 |
| 0.5392 | 22.0 | 550 | 0.4187 | 0.59 |
| 0.5392 | 23.0 | 575 | 0.4186 | 0.62 |
| 0.5392 | 24.0 | 600 | 0.4159 | 0.65 |
| 0.5392 | 25.0 | 625 | 0.4152 | 0.64 |
| 0.5392 | 26.0 | 650 | 0.4151 | 0.62 |
| 0.5392 | 27.0 | 675 | 0.4136 | 0.63 |
| 0.5392 | 28.0 | 700 | 0.4190 | 0.65 |
| 0.5392 | 29.0 | 725 | 0.4225 | 0.61 |
| 0.5392 | 30.0 | 750 | 0.4209 | 0.57 |
| 0.5392 | 31.0 | 775 | 0.4167 | 0.63 |
| 0.5392 | 32.0 | 800 | 0.4153 | 0.62 |
| 0.5392 | 33.0 | 825 | 0.4236 | 0.6 |
| 0.5392 | 34.0 | 850 | 0.4191 | 0.58 |
| 0.5392 | 35.0 | 875 | 0.4160 | 0.61 |
| 0.5392 | 36.0 | 900 | 0.4163 | 0.62 |
| 0.5392 | 37.0 | 925 | 0.4193 | 0.59 |
| 0.5392 | 38.0 | 950 | 0.4208 | 0.62 |
| 0.5392 | 39.0 | 975 | 0.4163 | 0.6 |
| 0.5359 | 40.0 | 1000 | 0.4159 | 0.6 |
| 0.5359 | 41.0 | 1025 | 0.4146 | 0.62 |
| 0.5359 | 42.0 | 1050 | 0.4158 | 0.6 |
| 0.5359 | 43.0 | 1075 | 0.4211 | 0.59 |
| 0.5359 | 44.0 | 1100 | 0.4203 | 0.59 |
| 0.5359 | 45.0 | 1125 | 0.4217 | 0.57 |
| 0.5359 | 46.0 | 1150 | 0.4183 | 0.6 |
| 0.5359 | 47.0 | 1175 | 0.4138 | 0.63 |
| 0.5359 | 48.0 | 1200 | 0.4124 | 0.63 |
| 0.5359 | 49.0 | 1225 | 0.4140 | 0.63 |
| 0.5359 | 50.0 | 1250 | 0.4118 | 0.64 |
| 0.5359 | 51.0 | 1275 | 0.4137 | 0.62 |
| 0.5359 | 52.0 | 1300 | 0.4113 | 0.63 |
| 0.5359 | 53.0 | 1325 | 0.4112 | 0.62 |
| 0.5359 | 54.0 | 1350 | 0.4140 | 0.63 |
| 0.5359 | 55.0 | 1375 | 0.4129 | 0.64 |
| 0.5359 | 56.0 | 1400 | 0.4151 | 0.64 |
| 0.5359 | 57.0 | 1425 | 0.4155 | 0.63 |
| 0.5359 | 58.0 | 1450 | 0.4140 | 0.63 |
| 0.5359 | 59.0 | 1475 | 0.4145 | 0.64 |
| 0.5347 | 60.0 | 1500 | 0.4158 | 0.63 |
| 0.5347 | 61.0 | 1525 | 0.4148 | 0.62 |
| 0.5347 | 62.0 | 1550 | 0.4147 | 0.6 |
| 0.5347 | 63.0 | 1575 | 0.4153 | 0.64 |
| 0.5347 | 64.0 | 1600 | 0.4156 | 0.63 |
| 0.5347 | 65.0 | 1625 | 0.4152 | 0.64 |
| 0.5347 | 66.0 | 1650 | 0.4146 | 0.64 |
| 0.5347 | 67.0 | 1675 | 0.4151 | 0.64 |
| 0.5347 | 68.0 | 1700 | 0.4145 | 0.61 |
| 0.5347 | 69.0 | 1725 | 0.4153 | 0.61 |
| 0.5347 | 70.0 | 1750 | 0.4147 | 0.64 |
| 0.5347 | 71.0 | 1775 | 0.4146 | 0.64 |
| 0.5347 | 72.0 | 1800 | 0.4134 | 0.62 |
| 0.5347 | 73.0 | 1825 | 0.4140 | 0.63 |
| 0.5347 | 74.0 | 1850 | 0.4141 | 0.64 |
| 0.5347 | 75.0 | 1875 | 0.4151 | 0.63 |
| 0.5347 | 76.0 | 1900 | 0.4150 | 0.62 |
| 0.5347 | 77.0 | 1925 | 0.4148 | 0.61 |
| 0.5347 | 78.0 | 1950 | 0.4149 | 0.62 |
| 0.5347 | 79.0 | 1975 | 0.4150 | 0.63 |
| 0.5285 | 80.0 | 2000 | 0.4150 | 0.63 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Yorai/detr-resnet-50_finetuned_cppe5
|
Yorai
| 2023-08-26T04:28:16Z | 190 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cppe-5",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-08-26T03:26:39Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- cppe-5
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cppe-5 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
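No evaluation metrics or usage snippet are reported; as a rough sketch, the checkpoint can be tried with the object-detection pipeline (CPPE-5 covers coveralls, face shields, gloves, goggles, and masks):
```python
from transformers import pipeline

detector = pipeline("object-detection", model="Yorai/detr-resnet-50_finetuned_cppe5")

# "worker.jpg" is a placeholder path to an image containing personal protective equipment
print(detector("worker.jpg"))
```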
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
tranvancuong2597/q-FrozenLake-v1-4x4-noSlippery
|
tranvancuong2597
| 2023-08-26T04:27:00Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-26T04:26:58Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# Assumes `gym` is imported and `load_from_hub` (the Deep RL course helper that downloads and unpickles the Q-table) is defined
model = load_from_hub(repo_id="tranvancuong2597/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AIBunCho/japanese-novel-gpt-j-6b
|
AIBunCho
| 2023-08-26T04:20:51Z | 78 | 36 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"ja",
"dataset:cc100",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-11T00:52:32Z |
---
license: openrail
datasets:
- cc100
language:
- ja
pipeline_tag: text-generation
---
# AIBunCho/japanese-novel-gpt-j-6b
This is the model used by [AI BunCho](https://bun-cho.work/). It is a language model for novel writing built in 2021.
## Model Details
GPT-J-6B was pre-trained on Japanese data for two weeks on TPUs using a Japanese tokenizer, and then fine-tuned on novel data for another two weeks.
## Uses
Operation has been confirmed on Google Colab with a T4 High-RAM runtime.
```
pip install transformers sentencepiece accelerate
```
```python
from transformers import GPTJForCausalLM, AlbertTokenizer
import torch
tokenizer = AlbertTokenizer.from_pretrained('AIBunCho/japanese-novel-gpt-j-6b', keep_accents=True, remove_space=False)
model = GPTJForCausalLM.from_pretrained("AIBunCho/japanese-novel-gpt-j-6b", torch_dtype=torch.float16, low_cpu_mem_usage=True)
model.half()
model.eval()
if torch.cuda.is_available():
model = model.to("cuda")
prompt = """
わたくしといふ現象は
""".strip()
input_ids = tokenizer.encode(
prompt,
add_special_tokens=False,
return_tensors="pt"
).cuda()
# this is for reproducibility.
# feel free to change to get different result
seed = 27
torch.manual_seed(seed)
tokens = model.generate(
input_ids.to(device=model.device),
max_new_tokens=32,
temperature=0.6,
top_p=0.9,
repetition_penalty=1.2,
do_sample=True,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id
)
out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
"""わたくしといふ現象は、その因果律を断ち切ることができるのです。"""
```
## Bias, Risks, and Limitations
The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters which can be reflected in the model generated text. We recommend users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.
### Training Data
Japanese data from cc100
Wikipedia
Other web data
## Author
X (formerly Twitter): [@OsoneHiroyuki](https://twitter.com/OsoneHiroyuki)
## Acknowledgements
Training was carried out with support from the [Google TPU Research Cloud](https://sites.research.google/trc/about/).
## Appendix
Addendum 2023/08/26:
To celebrate 1,000 downloads of AIBunCho/japanese-novel-gpt-j-6b, a 50%-off coupon for AI BunCho plans is being distributed.
Enter 【HF1000DL】 to get 50% off any plan.
|
dt-and-vanilla-ardt/ardt-vanilla-combo_train_walker2d_v2-2608_0328-66
|
dt-and-vanilla-ardt
| 2023-08-26T04:19:44Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T02:29:56Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-combo_train_walker2d_v2-2608_0328-66
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-combo_train_walker2d_v2-2608_0328-66
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230826114726
|
dkqjrm
| 2023-08-26T04:06:58Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T02:47:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826114726'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826114726
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2883
- Accuracy: 0.59
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.2910 | 0.6 |
| No log | 2.0 | 50 | 0.2911 | 0.64 |
| No log | 3.0 | 75 | 0.2875 | 0.65 |
| No log | 4.0 | 100 | 0.2909 | 0.62 |
| No log | 5.0 | 125 | 0.2935 | 0.62 |
| No log | 6.0 | 150 | 0.2977 | 0.58 |
| No log | 7.0 | 175 | 0.2854 | 0.65 |
| No log | 8.0 | 200 | 0.2900 | 0.65 |
| No log | 9.0 | 225 | 0.2985 | 0.53 |
| No log | 10.0 | 250 | 0.2906 | 0.64 |
| No log | 11.0 | 275 | 0.2979 | 0.63 |
| No log | 12.0 | 300 | 0.2891 | 0.63 |
| No log | 13.0 | 325 | 0.2885 | 0.63 |
| No log | 14.0 | 350 | 0.2904 | 0.64 |
| No log | 15.0 | 375 | 0.3056 | 0.58 |
| No log | 16.0 | 400 | 0.2860 | 0.65 |
| No log | 17.0 | 425 | 0.2887 | 0.62 |
| No log | 18.0 | 450 | 0.2968 | 0.59 |
| No log | 19.0 | 475 | 0.2927 | 0.51 |
| 0.4646 | 20.0 | 500 | 0.2887 | 0.59 |
| 0.4646 | 21.0 | 525 | 0.2917 | 0.62 |
| 0.4646 | 22.0 | 550 | 0.2940 | 0.53 |
| 0.4646 | 23.0 | 575 | 0.2914 | 0.58 |
| 0.4646 | 24.0 | 600 | 0.2875 | 0.61 |
| 0.4646 | 25.0 | 625 | 0.2928 | 0.63 |
| 0.4646 | 26.0 | 650 | 0.2887 | 0.57 |
| 0.4646 | 27.0 | 675 | 0.2871 | 0.58 |
| 0.4646 | 28.0 | 700 | 0.2925 | 0.64 |
| 0.4646 | 29.0 | 725 | 0.2963 | 0.6 |
| 0.4646 | 30.0 | 750 | 0.2922 | 0.56 |
| 0.4646 | 31.0 | 775 | 0.2902 | 0.59 |
| 0.4646 | 32.0 | 800 | 0.2885 | 0.59 |
| 0.4646 | 33.0 | 825 | 0.2940 | 0.57 |
| 0.4646 | 34.0 | 850 | 0.2912 | 0.53 |
| 0.4646 | 35.0 | 875 | 0.2879 | 0.59 |
| 0.4646 | 36.0 | 900 | 0.2880 | 0.59 |
| 0.4646 | 37.0 | 925 | 0.2945 | 0.47 |
| 0.4646 | 38.0 | 950 | 0.2918 | 0.6 |
| 0.4646 | 39.0 | 975 | 0.2887 | 0.58 |
| 0.4656 | 40.0 | 1000 | 0.2874 | 0.59 |
| 0.4656 | 41.0 | 1025 | 0.2898 | 0.56 |
| 0.4656 | 42.0 | 1050 | 0.2897 | 0.59 |
| 0.4656 | 43.0 | 1075 | 0.2924 | 0.5 |
| 0.4656 | 44.0 | 1100 | 0.2898 | 0.58 |
| 0.4656 | 45.0 | 1125 | 0.2921 | 0.58 |
| 0.4656 | 46.0 | 1150 | 0.2895 | 0.56 |
| 0.4656 | 47.0 | 1175 | 0.2862 | 0.59 |
| 0.4656 | 48.0 | 1200 | 0.2869 | 0.57 |
| 0.4656 | 49.0 | 1225 | 0.2855 | 0.61 |
| 0.4656 | 50.0 | 1250 | 0.2859 | 0.59 |
| 0.4656 | 51.0 | 1275 | 0.2899 | 0.58 |
| 0.4656 | 52.0 | 1300 | 0.2851 | 0.59 |
| 0.4656 | 53.0 | 1325 | 0.2852 | 0.61 |
| 0.4656 | 54.0 | 1350 | 0.2887 | 0.6 |
| 0.4656 | 55.0 | 1375 | 0.2870 | 0.59 |
| 0.4656 | 56.0 | 1400 | 0.2895 | 0.63 |
| 0.4656 | 57.0 | 1425 | 0.2893 | 0.62 |
| 0.4656 | 58.0 | 1450 | 0.2891 | 0.63 |
| 0.4656 | 59.0 | 1475 | 0.2890 | 0.62 |
| 0.4637 | 60.0 | 1500 | 0.2890 | 0.62 |
| 0.4637 | 61.0 | 1525 | 0.2883 | 0.59 |
| 0.4637 | 62.0 | 1550 | 0.2882 | 0.58 |
| 0.4637 | 63.0 | 1575 | 0.2883 | 0.63 |
| 0.4637 | 64.0 | 1600 | 0.2884 | 0.59 |
| 0.4637 | 65.0 | 1625 | 0.2876 | 0.63 |
| 0.4637 | 66.0 | 1650 | 0.2871 | 0.62 |
| 0.4637 | 67.0 | 1675 | 0.2879 | 0.6 |
| 0.4637 | 68.0 | 1700 | 0.2879 | 0.58 |
| 0.4637 | 69.0 | 1725 | 0.2877 | 0.59 |
| 0.4637 | 70.0 | 1750 | 0.2871 | 0.6 |
| 0.4637 | 71.0 | 1775 | 0.2875 | 0.6 |
| 0.4637 | 72.0 | 1800 | 0.2870 | 0.59 |
| 0.4637 | 73.0 | 1825 | 0.2875 | 0.59 |
| 0.4637 | 74.0 | 1850 | 0.2879 | 0.59 |
| 0.4637 | 75.0 | 1875 | 0.2887 | 0.59 |
| 0.4637 | 76.0 | 1900 | 0.2883 | 0.59 |
| 0.4637 | 77.0 | 1925 | 0.2882 | 0.58 |
| 0.4637 | 78.0 | 1950 | 0.2883 | 0.59 |
| 0.4637 | 79.0 | 1975 | 0.2884 | 0.59 |
| 0.4587 | 80.0 | 2000 | 0.2883 | 0.59 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230826105641
|
dkqjrm
| 2023-08-26T03:30:05Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T01:56:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826105641'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826105641
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6024
- Accuracy: 0.64
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.6078 | 0.65 |
| No log | 2.0 | 50 | 0.5963 | 0.66 |
| No log | 3.0 | 75 | 0.6125 | 0.65 |
| No log | 4.0 | 100 | 0.6042 | 0.66 |
| No log | 5.0 | 125 | 0.6065 | 0.66 |
| No log | 6.0 | 150 | 0.6020 | 0.65 |
| No log | 7.0 | 175 | 0.5987 | 0.65 |
| No log | 8.0 | 200 | 0.6016 | 0.66 |
| No log | 9.0 | 225 | 0.6066 | 0.66 |
| No log | 10.0 | 250 | 0.6112 | 0.66 |
| No log | 11.0 | 275 | 0.6085 | 0.66 |
| No log | 12.0 | 300 | 0.5976 | 0.66 |
| No log | 13.0 | 325 | 0.6074 | 0.66 |
| No log | 14.0 | 350 | 0.6060 | 0.65 |
| No log | 15.0 | 375 | 0.6254 | 0.65 |
| No log | 16.0 | 400 | 0.6031 | 0.66 |
| No log | 17.0 | 425 | 0.6011 | 0.67 |
| No log | 18.0 | 450 | 0.6063 | 0.66 |
| No log | 19.0 | 475 | 0.6031 | 0.65 |
| 0.6484 | 20.0 | 500 | 0.6013 | 0.65 |
| 0.6484 | 21.0 | 525 | 0.6041 | 0.65 |
| 0.6484 | 22.0 | 550 | 0.6037 | 0.65 |
| 0.6484 | 23.0 | 575 | 0.6046 | 0.65 |
| 0.6484 | 24.0 | 600 | 0.6072 | 0.66 |
| 0.6484 | 25.0 | 625 | 0.5980 | 0.66 |
| 0.6484 | 26.0 | 650 | 0.6039 | 0.64 |
| 0.6484 | 27.0 | 675 | 0.6025 | 0.65 |
| 0.6484 | 28.0 | 700 | 0.6062 | 0.65 |
| 0.6484 | 29.0 | 725 | 0.6056 | 0.64 |
| 0.6484 | 30.0 | 750 | 0.6091 | 0.61 |
| 0.6484 | 31.0 | 775 | 0.6037 | 0.65 |
| 0.6484 | 32.0 | 800 | 0.6037 | 0.63 |
| 0.6484 | 33.0 | 825 | 0.6175 | 0.64 |
| 0.6484 | 34.0 | 850 | 0.6089 | 0.62 |
| 0.6484 | 35.0 | 875 | 0.6076 | 0.64 |
| 0.6484 | 36.0 | 900 | 0.6073 | 0.64 |
| 0.6484 | 37.0 | 925 | 0.6059 | 0.64 |
| 0.6484 | 38.0 | 950 | 0.6109 | 0.63 |
| 0.6484 | 39.0 | 975 | 0.6090 | 0.64 |
| 0.6362 | 40.0 | 1000 | 0.6080 | 0.64 |
| 0.6362 | 41.0 | 1025 | 0.5994 | 0.64 |
| 0.6362 | 42.0 | 1050 | 0.6034 | 0.64 |
| 0.6362 | 43.0 | 1075 | 0.6113 | 0.6 |
| 0.6362 | 44.0 | 1100 | 0.6131 | 0.64 |
| 0.6362 | 45.0 | 1125 | 0.6150 | 0.61 |
| 0.6362 | 46.0 | 1150 | 0.6115 | 0.63 |
| 0.6362 | 47.0 | 1175 | 0.6055 | 0.64 |
| 0.6362 | 48.0 | 1200 | 0.6033 | 0.64 |
| 0.6362 | 49.0 | 1225 | 0.6047 | 0.64 |
| 0.6362 | 50.0 | 1250 | 0.6037 | 0.64 |
| 0.6362 | 51.0 | 1275 | 0.6010 | 0.63 |
| 0.6362 | 52.0 | 1300 | 0.5988 | 0.64 |
| 0.6362 | 53.0 | 1325 | 0.5991 | 0.64 |
| 0.6362 | 54.0 | 1350 | 0.6019 | 0.64 |
| 0.6362 | 55.0 | 1375 | 0.6002 | 0.64 |
| 0.6362 | 56.0 | 1400 | 0.6006 | 0.64 |
| 0.6362 | 57.0 | 1425 | 0.5992 | 0.63 |
| 0.6362 | 58.0 | 1450 | 0.5992 | 0.63 |
| 0.6362 | 59.0 | 1475 | 0.5992 | 0.64 |
| 0.6341 | 60.0 | 1500 | 0.6026 | 0.64 |
| 0.6341 | 61.0 | 1525 | 0.6022 | 0.64 |
| 0.6341 | 62.0 | 1550 | 0.6026 | 0.64 |
| 0.6341 | 63.0 | 1575 | 0.6036 | 0.64 |
| 0.6341 | 64.0 | 1600 | 0.6039 | 0.64 |
| 0.6341 | 65.0 | 1625 | 0.6041 | 0.64 |
| 0.6341 | 66.0 | 1650 | 0.6034 | 0.64 |
| 0.6341 | 67.0 | 1675 | 0.6049 | 0.64 |
| 0.6341 | 68.0 | 1700 | 0.6027 | 0.64 |
| 0.6341 | 69.0 | 1725 | 0.6057 | 0.64 |
| 0.6341 | 70.0 | 1750 | 0.6056 | 0.64 |
| 0.6341 | 71.0 | 1775 | 0.6048 | 0.64 |
| 0.6341 | 72.0 | 1800 | 0.6019 | 0.64 |
| 0.6341 | 73.0 | 1825 | 0.6021 | 0.64 |
| 0.6341 | 74.0 | 1850 | 0.6018 | 0.64 |
| 0.6341 | 75.0 | 1875 | 0.6027 | 0.64 |
| 0.6341 | 76.0 | 1900 | 0.6025 | 0.64 |
| 0.6341 | 77.0 | 1925 | 0.6021 | 0.64 |
| 0.6341 | 78.0 | 1950 | 0.6023 | 0.64 |
| 0.6341 | 79.0 | 1975 | 0.6024 | 0.64 |
| 0.626 | 80.0 | 2000 | 0.6024 | 0.64 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dave-does-data/utsa_dpo_llama2
|
dave-does-data
| 2023-08-26T03:25:38Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-26T03:25:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
sw882882/llama2-7b-molora8-openplatypus-6
|
sw882882
| 2023-08-26T03:25:21Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-26T03:16:28Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
sw882882/llama2-7b-molora8-openplatypus-7
|
sw882882
| 2023-08-26T03:24:58Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-26T03:16:33Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
sw882882/llama2-7b-molora8-openplatypus-4
|
sw882882
| 2023-08-26T03:24:20Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-26T03:16:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
sw882882/llama2-7b-molora8-openplatypus-1
|
sw882882
| 2023-08-26T03:22:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-26T03:15:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
sw882882/llama2-7b-molora8-openplatypus-0
|
sw882882
| 2023-08-26T03:21:41Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-26T03:15:46Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
nikinetrahutama/afx-ai-llama-chat-model-14-1
|
nikinetrahutama
| 2023-08-26T03:12:50Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-26T03:12:44Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
danwein8/Hackathon-Art
|
danwein8
| 2023-08-26T03:04:35Z | 0 | 0 | null |
[
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"dreambooth-hackathon",
"wildcard",
"text-to-image",
"dataset:BirdL/NGA_Art",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-25T16:32:22Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- dreambooth-hackathon
- wildcard
- text-to-image
datasets: BirdL/NGA_Art
inference: true
---
# Hackathon Art Model Card
TL;DR: Hackathon Art is a Dreambooth model trained on public-domain images from the National Gallery of Art. The instance token is `sks`.
# Model Pretraining
This model is trained on top [Stable Diffusion 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
# Data
The data for Hackathon Art is located on [this page](https://huggingface.co/datasets/BirdL/NGA_Art) and was scraped from Wikimedia Commons. The dataset contains 500 images; the dataset page goes into more detail.
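A short generation sketch with Diffusers, assuming the full Stable Diffusion pipeline weights were pushed to this repository:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("danwein8/Hackathon-Art", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "sks" is the instance token this model was trained with
image = pipe("a painting of a lighthouse at dusk in the style of sks").images[0]
image.save("hackathon_art_sample.png")
```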
|
dkqjrm/20230826100309
|
dkqjrm
| 2023-08-26T02:47:14Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T01:03:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826100309'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826100309
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2920
- Accuracy: 0.4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.3608 | 0.44 |
| No log | 2.0 | 50 | 0.2890 | 0.57 |
| No log | 3.0 | 75 | 0.2961 | 0.58 |
| No log | 4.0 | 100 | 0.2865 | 0.65 |
| No log | 5.0 | 125 | 0.2901 | 0.58 |
| No log | 6.0 | 150 | 0.2933 | 0.46 |
| No log | 7.0 | 175 | 0.3291 | 0.64 |
| No log | 8.0 | 200 | 0.2864 | 0.62 |
| No log | 9.0 | 225 | 0.2979 | 0.42 |
| No log | 10.0 | 250 | 0.3035 | 0.63 |
| No log | 11.0 | 275 | 0.2902 | 0.59 |
| No log | 12.0 | 300 | 0.2917 | 0.5 |
| No log | 13.0 | 325 | 0.2935 | 0.44 |
| No log | 14.0 | 350 | 0.3057 | 0.44 |
| No log | 15.0 | 375 | 0.2980 | 0.45 |
| No log | 16.0 | 400 | 0.2947 | 0.47 |
| No log | 17.0 | 425 | 0.2945 | 0.5 |
| No log | 18.0 | 450 | 0.2924 | 0.49 |
| No log | 19.0 | 475 | 0.2922 | 0.55 |
| 1.1902 | 20.0 | 500 | 0.2923 | 0.45 |
| 1.1902 | 21.0 | 525 | 0.2864 | 0.55 |
| 1.1902 | 22.0 | 550 | 0.2925 | 0.42 |
| 1.1902 | 23.0 | 575 | 0.2910 | 0.58 |
| 1.1902 | 24.0 | 600 | 0.2895 | 0.58 |
| 1.1902 | 25.0 | 625 | 0.2918 | 0.62 |
| 1.1902 | 26.0 | 650 | 0.2921 | 0.42 |
| 1.1902 | 27.0 | 675 | 0.2918 | 0.58 |
| 1.1902 | 28.0 | 700 | 0.2910 | 0.6 |
| 1.1902 | 29.0 | 725 | 0.2919 | 0.57 |
| 1.1902 | 30.0 | 750 | 0.2920 | 0.48 |
| 1.1902 | 31.0 | 775 | 0.2922 | 0.41 |
| 1.1902 | 32.0 | 800 | 0.2920 | 0.53 |
| 1.1902 | 33.0 | 825 | 0.2920 | 0.51 |
| 1.1902 | 34.0 | 850 | 0.2919 | 0.54 |
| 1.1902 | 35.0 | 875 | 0.2920 | 0.52 |
| 1.1902 | 36.0 | 900 | 0.2921 | 0.39 |
| 1.1902 | 37.0 | 925 | 0.2920 | 0.53 |
| 1.1902 | 38.0 | 950 | 0.2920 | 0.49 |
| 1.1902 | 39.0 | 975 | 0.2922 | 0.4 |
| 0.8276 | 40.0 | 1000 | 0.2919 | 0.58 |
| 0.8276 | 41.0 | 1025 | 0.2918 | 0.62 |
| 0.8276 | 42.0 | 1050 | 0.2918 | 0.61 |
| 0.8276 | 43.0 | 1075 | 0.2922 | 0.42 |
| 0.8276 | 44.0 | 1100 | 0.2921 | 0.43 |
| 0.8276 | 45.0 | 1125 | 0.2920 | 0.42 |
| 0.8276 | 46.0 | 1150 | 0.2920 | 0.42 |
| 0.8276 | 47.0 | 1175 | 0.2920 | 0.35 |
| 0.8276 | 48.0 | 1200 | 0.2920 | 0.54 |
| 0.8276 | 49.0 | 1225 | 0.2920 | 0.6 |
| 0.8276 | 50.0 | 1250 | 0.2920 | 0.52 |
| 0.8276 | 51.0 | 1275 | 0.2920 | 0.37 |
| 0.8276 | 52.0 | 1300 | 0.2920 | 0.45 |
| 0.8276 | 53.0 | 1325 | 0.2920 | 0.44 |
| 0.8276 | 54.0 | 1350 | 0.2920 | 0.59 |
| 0.8276 | 55.0 | 1375 | 0.2920 | 0.44 |
| 0.8276 | 56.0 | 1400 | 0.2920 | 0.58 |
| 0.8276 | 57.0 | 1425 | 0.2920 | 0.57 |
| 0.8276 | 58.0 | 1450 | 0.2920 | 0.46 |
| 0.8276 | 59.0 | 1475 | 0.2920 | 0.42 |
| 0.6389 | 60.0 | 1500 | 0.2920 | 0.37 |
| 0.6389 | 61.0 | 1525 | 0.2919 | 0.6 |
| 0.6389 | 62.0 | 1550 | 0.2919 | 0.6 |
| 0.6389 | 63.0 | 1575 | 0.2920 | 0.55 |
| 0.6389 | 64.0 | 1600 | 0.2920 | 0.52 |
| 0.6389 | 65.0 | 1625 | 0.2920 | 0.5 |
| 0.6389 | 66.0 | 1650 | 0.2920 | 0.36 |
| 0.6389 | 67.0 | 1675 | 0.2920 | 0.58 |
| 0.6389 | 68.0 | 1700 | 0.2920 | 0.38 |
| 0.6389 | 69.0 | 1725 | 0.2920 | 0.58 |
| 0.6389 | 70.0 | 1750 | 0.2920 | 0.53 |
| 0.6389 | 71.0 | 1775 | 0.2920 | 0.37 |
| 0.6389 | 72.0 | 1800 | 0.2920 | 0.39 |
| 0.6389 | 73.0 | 1825 | 0.2920 | 0.36 |
| 0.6389 | 74.0 | 1850 | 0.2920 | 0.43 |
| 0.6389 | 75.0 | 1875 | 0.2920 | 0.38 |
| 0.6389 | 76.0 | 1900 | 0.2920 | 0.43 |
| 0.6389 | 77.0 | 1925 | 0.2920 | 0.37 |
| 0.6389 | 78.0 | 1950 | 0.2920 | 0.37 |
| 0.6389 | 79.0 | 1975 | 0.2920 | 0.38 |
| 0.5225 | 80.0 | 2000 | 0.2920 | 0.4 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
heegyu/llama-small-randomweights
|
heegyu
| 2023-08-26T02:46:53Z | 172 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-08-26T02:42:55Z |
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta
website](https://ai.meta.com/resources/models-and-libraries/llama-downloads)
and accept our license terms and acceptable use policy before submitting this
form. Requests will be processed in 1-2 days.
extra_gated_prompt: >-
**Your Hugging Face account email address MUST match the email you provide on
the Meta website, or your request will not be approved.**
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
This is an 82M-parameter LLaMA model with random weights. It can be used as a proof of concept. <br/>
The tokenizer is a copy of meta-llama/Llama-2-7b.
```python
# Build a small randomly-initialized LLaMA model and push it to the Hub
from transformers import LlamaConfig, LlamaForCausalLM, LlamaTokenizer
import numpy as np

# ~82M-parameter configuration: 4 layers, 8 heads, hidden size 768
config = LlamaConfig(vocab_size=32000, hidden_size=768, intermediate_size=768*4, num_hidden_layers=4, num_attention_heads=8)
tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b")
model = LlamaForCausalLM(config).half()

# Count trainable parameters (reported in units of 1024*1024)
model_parameters = filter(lambda p: p.requires_grad, model.parameters())
params = sum([np.prod(p.size()) for p in model_parameters])
print(params / 1024 / 1024)  # 82.881591796875

hub_id = "heegyu/llama-small-randomweights"
tokenizer.push_to_hub(hub_id)
model.push_to_hub(hub_id)
```
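For a quick sanity check after pushing, the checkpoint can be loaded back from the Hub; this is a minimal sketch only, and the generated text will be meaningless because the weights are untrained.
```python
# Hedged usage sketch: load the random-weight checkpoint and run a short generation.
# The output is gibberish by construction; this only verifies that loading and
# generation work end to end.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("heegyu/llama-small-randomweights")
model = LlamaForCausalLM.from_pretrained("heegyu/llama-small-randomweights")

inputs = tokenizer("Hello, world", return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```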
|
ashwincv0112/Masters_Course_Application_Email_AI_avp
|
ashwincv0112
| 2023-08-26T02:44:29Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-26T02:44:24Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
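For context, the options above are the fields of a `transformers.BitsAndBytesConfig`; a minimal sketch of loading a base model with an equivalent 8-bit config is below. The base model id is an assumption — this adapter card does not name it.
```python
# Hedged sketch: an equivalent quantization config for loading the base model in 8-bit.
# The base model id is a hypothetical placeholder; this PEFT card does not name it.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "<base-model-id>",                 # hypothetical placeholder
    quantization_config=bnb_config,
    device_map="auto",
)
```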
### Framework versions
- PEFT 0.6.0.dev0
|
dkqjrm/20230826100510
|
dkqjrm
| 2023-08-26T02:44:21Z | 62 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T01:05:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826100510'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826100510
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5641
- Accuracy: 0.76
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.7263 | 0.4 |
| No log | 2.0 | 50 | 0.6115 | 0.6 |
| No log | 3.0 | 75 | 0.5427 | 0.62 |
| No log | 4.0 | 100 | 0.5319 | 0.61 |
| No log | 5.0 | 125 | 0.5818 | 0.55 |
| No log | 6.0 | 150 | 0.5093 | 0.68 |
| No log | 7.0 | 175 | 0.7841 | 0.63 |
| No log | 8.0 | 200 | 0.7629 | 0.68 |
| No log | 9.0 | 225 | 0.5874 | 0.69 |
| No log | 10.0 | 250 | 0.5228 | 0.71 |
| No log | 11.0 | 275 | 0.8439 | 0.74 |
| No log | 12.0 | 300 | 0.8243 | 0.71 |
| No log | 13.0 | 325 | 0.5670 | 0.65 |
| No log | 14.0 | 350 | 0.5601 | 0.61 |
| No log | 15.0 | 375 | 0.6452 | 0.64 |
| No log | 16.0 | 400 | 0.5239 | 0.69 |
| No log | 17.0 | 425 | 0.7315 | 0.66 |
| No log | 18.0 | 450 | 0.6651 | 0.67 |
| No log | 19.0 | 475 | 0.9040 | 0.72 |
| 1.3727 | 20.0 | 500 | 0.5786 | 0.73 |
| 1.3727 | 21.0 | 525 | 0.7333 | 0.69 |
| 1.3727 | 22.0 | 550 | 0.7584 | 0.7 |
| 1.3727 | 23.0 | 575 | 0.9901 | 0.71 |
| 1.3727 | 24.0 | 600 | 0.5711 | 0.7 |
| 1.3727 | 25.0 | 625 | 0.5870 | 0.67 |
| 1.3727 | 26.0 | 650 | 0.5832 | 0.7 |
| 1.3727 | 27.0 | 675 | 0.9777 | 0.72 |
| 1.3727 | 28.0 | 700 | 0.6448 | 0.71 |
| 1.3727 | 29.0 | 725 | 0.8739 | 0.71 |
| 1.3727 | 30.0 | 750 | 0.6710 | 0.68 |
| 1.3727 | 31.0 | 775 | 0.5919 | 0.71 |
| 1.3727 | 32.0 | 800 | 0.7616 | 0.7 |
| 1.3727 | 33.0 | 825 | 0.5837 | 0.72 |
| 1.3727 | 34.0 | 850 | 1.0103 | 0.74 |
| 1.3727 | 35.0 | 875 | 0.7008 | 0.73 |
| 1.3727 | 36.0 | 900 | 1.0161 | 0.72 |
| 1.3727 | 37.0 | 925 | 0.6911 | 0.75 |
| 1.3727 | 38.0 | 950 | 0.6451 | 0.75 |
| 1.3727 | 39.0 | 975 | 0.7190 | 0.74 |
| 0.7534 | 40.0 | 1000 | 0.5164 | 0.74 |
| 0.7534 | 41.0 | 1025 | 0.4995 | 0.72 |
| 0.7534 | 42.0 | 1050 | 0.5840 | 0.75 |
| 0.7534 | 43.0 | 1075 | 0.7395 | 0.75 |
| 0.7534 | 44.0 | 1100 | 0.6374 | 0.72 |
| 0.7534 | 45.0 | 1125 | 0.7467 | 0.73 |
| 0.7534 | 46.0 | 1150 | 0.6876 | 0.74 |
| 0.7534 | 47.0 | 1175 | 0.5959 | 0.74 |
| 0.7534 | 48.0 | 1200 | 0.5625 | 0.74 |
| 0.7534 | 49.0 | 1225 | 0.6837 | 0.75 |
| 0.7534 | 50.0 | 1250 | 0.6766 | 0.76 |
| 0.7534 | 51.0 | 1275 | 0.6266 | 0.75 |
| 0.7534 | 52.0 | 1300 | 0.6642 | 0.74 |
| 0.7534 | 53.0 | 1325 | 0.6202 | 0.74 |
| 0.7534 | 54.0 | 1350 | 0.6398 | 0.75 |
| 0.7534 | 55.0 | 1375 | 0.6689 | 0.75 |
| 0.7534 | 56.0 | 1400 | 0.6629 | 0.76 |
| 0.7534 | 57.0 | 1425 | 0.5903 | 0.76 |
| 0.7534 | 58.0 | 1450 | 0.6133 | 0.77 |
| 0.7534 | 59.0 | 1475 | 0.6885 | 0.76 |
| 0.4477 | 60.0 | 1500 | 0.5950 | 0.76 |
| 0.4477 | 61.0 | 1525 | 0.5715 | 0.75 |
| 0.4477 | 62.0 | 1550 | 0.6111 | 0.76 |
| 0.4477 | 63.0 | 1575 | 0.6023 | 0.76 |
| 0.4477 | 64.0 | 1600 | 0.5793 | 0.76 |
| 0.4477 | 65.0 | 1625 | 0.5727 | 0.74 |
| 0.4477 | 66.0 | 1650 | 0.5606 | 0.76 |
| 0.4477 | 67.0 | 1675 | 0.5970 | 0.76 |
| 0.4477 | 68.0 | 1700 | 0.5602 | 0.76 |
| 0.4477 | 69.0 | 1725 | 0.5781 | 0.75 |
| 0.4477 | 70.0 | 1750 | 0.6142 | 0.76 |
| 0.4477 | 71.0 | 1775 | 0.5758 | 0.76 |
| 0.4477 | 72.0 | 1800 | 0.5650 | 0.75 |
| 0.4477 | 73.0 | 1825 | 0.5823 | 0.76 |
| 0.4477 | 74.0 | 1850 | 0.5547 | 0.76 |
| 0.4477 | 75.0 | 1875 | 0.5637 | 0.76 |
| 0.4477 | 76.0 | 1900 | 0.5806 | 0.76 |
| 0.4477 | 77.0 | 1925 | 0.5602 | 0.76 |
| 0.4477 | 78.0 | 1950 | 0.5708 | 0.76 |
| 0.4477 | 79.0 | 1975 | 0.5624 | 0.76 |
| 0.3287 | 80.0 | 2000 | 0.5641 | 0.76 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LarryAIDraw/Rukia_bankai
|
LarryAIDraw
| 2023-08-26T02:40:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-26T02:34:32Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/133874/rukia-kuchiki-bleach-bankai
|
LarryAIDraw/YuisisKnightC1_17
|
LarryAIDraw
| 2023-08-26T02:39:14Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-26T02:30:30Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/134129/character-yuisismulticostumevers-granblue-fantasy
|
LarryAIDraw/akbreeze-8
|
LarryAIDraw
| 2023-08-26T02:38:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-26T02:31:25Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/134173/breeze-arknights
|
dt-and-vanilla-ardt/ardt-vanilla-combo_train_walker2d_v2-2608_0132-33
|
dt-and-vanilla-ardt
| 2023-08-26T02:28:09Z | 33 | 0 |
transformers
|
[
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T00:34:33Z |
---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-combo_train_walker2d_v2-2608_0132-33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-combo_train_walker2d_v2-2608_0132-33
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a loading sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 10000
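The card does not name a base checkpoint, so the following is only a minimal sketch of loading this `decision_transformer` checkpoint with `transformers`; the state and action dimensions are read from the saved config rather than from this card.
```python
# Hedged sketch: load the trained decision-transformer checkpoint from the Hub.
from transformers import DecisionTransformerModel

model = DecisionTransformerModel.from_pretrained(
    "dt-and-vanilla-ardt/ardt-vanilla-combo_train_walker2d_v2-2608_0132-33"
)
model.eval()
# Walker2d-style dimensions are stored in the config, not stated in this card.
print(model.config.state_dim, model.config.act_dim)
```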
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LarryAIDraw/ChenxingSnowbreakV1_0
|
LarryAIDraw
| 2023-08-26T02:25:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-26T02:14:17Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/134591/chenxing-or-snowbreak-or
|
LarryAIDraw/Olivier
|
LarryAIDraw
| 2023-08-26T02:24:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-26T02:12:48Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/134524/olivier-the-eminence-in-shadow
|
LarryAIDraw/luna_kindred_m8
|
LarryAIDraw
| 2023-08-26T02:24:13Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-26T02:11:45Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/28297/luna-kindred-or-honkai-impact-3rd
|
LarryAIDraw/clemenceau_d8_v2_e6
|
LarryAIDraw
| 2023-08-26T02:23:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-26T02:10:25Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/134714/clemenceau-or-or-azur-lane-lora
|
dkqjrm/20230826093525
|
dkqjrm
| 2023-08-26T01:56:30Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-26T00:35:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826093525'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826093525
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6263
- Accuracy: 0.44
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.8357 | 0.4 |
| No log | 2.0 | 50 | 0.6364 | 0.62 |
| No log | 3.0 | 75 | 0.7513 | 0.62 |
| No log | 4.0 | 100 | 0.5950 | 0.6 |
| No log | 5.0 | 125 | 0.6111 | 0.49 |
| No log | 6.0 | 150 | 0.7314 | 0.59 |
| No log | 7.0 | 175 | 0.6188 | 0.67 |
| No log | 8.0 | 200 | 1.2028 | 0.58 |
| No log | 9.0 | 225 | 0.6303 | 0.71 |
| No log | 10.0 | 250 | 0.8705 | 0.65 |
| No log | 11.0 | 275 | 0.5481 | 0.68 |
| No log | 12.0 | 300 | 0.8700 | 0.7 |
| No log | 13.0 | 325 | 0.7616 | 0.62 |
| No log | 14.0 | 350 | 0.7385 | 0.71 |
| No log | 15.0 | 375 | 0.8501 | 0.55 |
| No log | 16.0 | 400 | 0.6954 | 0.49 |
| No log | 17.0 | 425 | 0.6255 | 0.55 |
| No log | 18.0 | 450 | 0.6264 | 0.38 |
| No log | 19.0 | 475 | 0.6275 | 0.42 |
| 1.5048 | 20.0 | 500 | 0.6259 | 0.61 |
| 1.5048 | 21.0 | 525 | 0.6270 | 0.42 |
| 1.5048 | 22.0 | 550 | 0.6275 | 0.42 |
| 1.5048 | 23.0 | 575 | 0.6249 | 0.59 |
| 1.5048 | 24.0 | 600 | 0.6269 | 0.4 |
| 1.5048 | 25.0 | 625 | 0.6254 | 0.57 |
| 1.5048 | 26.0 | 650 | 0.6265 | 0.45 |
| 1.5048 | 27.0 | 675 | 0.6262 | 0.62 |
| 1.5048 | 28.0 | 700 | 0.6247 | 0.54 |
| 1.5048 | 29.0 | 725 | 0.6241 | 0.59 |
| 1.5048 | 30.0 | 750 | 0.6247 | 0.56 |
| 1.5048 | 31.0 | 775 | 0.6262 | 0.5 |
| 1.5048 | 32.0 | 800 | 0.6261 | 0.6 |
| 1.5048 | 33.0 | 825 | 0.6261 | 0.55 |
| 1.5048 | 34.0 | 850 | 0.6264 | 0.44 |
| 1.5048 | 35.0 | 875 | 0.6266 | 0.43 |
| 1.5048 | 36.0 | 900 | 0.6265 | 0.44 |
| 1.5048 | 37.0 | 925 | 0.6262 | 0.47 |
| 1.5048 | 38.0 | 950 | 0.6264 | 0.48 |
| 1.5048 | 39.0 | 975 | 0.6264 | 0.43 |
| 1.2203 | 40.0 | 1000 | 0.6262 | 0.63 |
| 1.2203 | 41.0 | 1025 | 0.6263 | 0.53 |
| 1.2203 | 42.0 | 1050 | 0.6262 | 0.59 |
| 1.2203 | 43.0 | 1075 | 0.6265 | 0.38 |
| 1.2203 | 44.0 | 1100 | 0.6262 | 0.61 |
| 1.2203 | 45.0 | 1125 | 0.6262 | 0.64 |
| 1.2203 | 46.0 | 1150 | 0.6263 | 0.5 |
| 1.2203 | 47.0 | 1175 | 0.6262 | 0.6 |
| 1.2203 | 48.0 | 1200 | 0.6263 | 0.55 |
| 1.2203 | 49.0 | 1225 | 0.6265 | 0.39 |
| 1.2203 | 50.0 | 1250 | 0.6262 | 0.62 |
| 1.2203 | 51.0 | 1275 | 0.6262 | 0.51 |
| 1.2203 | 52.0 | 1300 | 0.6261 | 0.57 |
| 1.2203 | 53.0 | 1325 | 0.6262 | 0.58 |
| 1.2203 | 54.0 | 1350 | 0.6261 | 0.58 |
| 1.2203 | 55.0 | 1375 | 0.6260 | 0.61 |
| 1.2203 | 56.0 | 1400 | 0.6261 | 0.64 |
| 1.2203 | 57.0 | 1425 | 0.6263 | 0.41 |
| 1.2203 | 58.0 | 1450 | 0.6264 | 0.41 |
| 1.2203 | 59.0 | 1475 | 0.6263 | 0.45 |
| 0.9516 | 60.0 | 1500 | 0.6263 | 0.54 |
| 0.9516 | 61.0 | 1525 | 0.6263 | 0.47 |
| 0.9516 | 62.0 | 1550 | 0.6261 | 0.61 |
| 0.9516 | 63.0 | 1575 | 0.6263 | 0.59 |
| 0.9516 | 64.0 | 1600 | 0.6261 | 0.63 |
| 0.9516 | 65.0 | 1625 | 0.6263 | 0.5 |
| 0.9516 | 66.0 | 1650 | 0.6265 | 0.39 |
| 0.9516 | 67.0 | 1675 | 0.6262 | 0.59 |
| 0.9516 | 68.0 | 1700 | 0.6264 | 0.38 |
| 0.9516 | 69.0 | 1725 | 0.6262 | 0.59 |
| 0.9516 | 70.0 | 1750 | 0.6263 | 0.51 |
| 0.9516 | 71.0 | 1775 | 0.6261 | 0.6 |
| 0.9516 | 72.0 | 1800 | 0.6263 | 0.4 |
| 0.9516 | 73.0 | 1825 | 0.6262 | 0.6 |
| 0.9516 | 74.0 | 1850 | 0.6263 | 0.48 |
| 0.9516 | 75.0 | 1875 | 0.6262 | 0.62 |
| 0.9516 | 76.0 | 1900 | 0.6263 | 0.44 |
| 0.9516 | 77.0 | 1925 | 0.6263 | 0.43 |
| 0.9516 | 78.0 | 1950 | 0.6263 | 0.45 |
| 0.9516 | 79.0 | 1975 | 0.6263 | 0.42 |
| 0.7734 | 80.0 | 2000 | 0.6263 | 0.44 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ad019el/m2m100_418M-finetuned-tq-to-ar-1-2
|
ad019el
| 2023-08-26T01:18:14Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-23T11:43:00Z |
---
base_model: ad019el/m2m100_418M-finetuned-tq-to-ar-only-clean-data
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: m2m100_418M-finetuned-tq-to-ar-1-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# m2m100_418M-finetuned-tq-to-ar-1-2
This model is a fine-tuned version of [ad019el/m2m100_418M-finetuned-tq-to-ar-only-clean-data](https://huggingface.co/ad019el/m2m100_418M-finetuned-tq-to-ar-only-clean-data) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3123
- Bleu: 3.2398
- Gen Len: 39.2562
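Based on the model name, this checkpoint targets Tamasheq-to-Arabic translation; the sketch below is a hedged inference example only. M2M-100 has no dedicated Tamasheq language code, so how the source language was encoded during fine-tuning is an assumption.
```python
# Hedged sketch: Tamasheq -> Arabic translation with the fine-tuned M2M100 checkpoint.
# The source-language handling is an assumption; M2M100 has no Tamasheq code, so the
# original fine-tuning setup may have repurposed another code.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "ad019el/m2m100_418M-finetuned-tq-to-ar-1-2"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

text = "..."  # placeholder for a Tamasheq sentence
inputs = tokenizer(text, return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.get_lang_id("ar"),  # force Arabic as the target
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```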
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.3145 | 0.68 | 500 | 2.2961 | 3.5196 | 39.7015 |
| 2.2469 | 1.36 | 1000 | 2.2789 | 3.1373 | 42.5945 |
| 2.1915 | 2.05 | 1500 | 2.3092 | 3.3981 | 41.4192 |
| 2.1358 | 2.73 | 2000 | 2.3077 | 3.2268 | 41.8321 |
| 2.0879 | 3.41 | 2500 | 2.3123 | 3.2398 | 39.2562 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dkqjrm/20230826083404
|
dkqjrm
| 2023-08-26T01:04:57Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-25T23:34:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826083404'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826083404
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5588
- Accuracy: 0.56
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.6769 | 0.61 |
| No log | 2.0 | 50 | 0.5349 | 0.59 |
| No log | 3.0 | 75 | 0.6615 | 0.58 |
| No log | 4.0 | 100 | 0.6596 | 0.64 |
| No log | 5.0 | 125 | 0.5523 | 0.71 |
| No log | 6.0 | 150 | 0.8447 | 0.67 |
| No log | 7.0 | 175 | 0.7506 | 0.66 |
| No log | 8.0 | 200 | 0.8463 | 0.68 |
| No log | 9.0 | 225 | 0.9064 | 0.56 |
| No log | 10.0 | 250 | 0.5533 | 0.58 |
| No log | 11.0 | 275 | 0.5701 | 0.41 |
| No log | 12.0 | 300 | 0.5593 | 0.51 |
| No log | 13.0 | 325 | 0.5599 | 0.52 |
| No log | 14.0 | 350 | 0.5619 | 0.37 |
| No log | 15.0 | 375 | 0.5591 | 0.56 |
| No log | 16.0 | 400 | 0.5569 | 0.55 |
| No log | 17.0 | 425 | 0.5511 | 0.56 |
| No log | 18.0 | 450 | 0.5599 | 0.52 |
| No log | 19.0 | 475 | 0.5561 | 0.59 |
| 1.4827 | 20.0 | 500 | 0.5577 | 0.57 |
| 1.4827 | 21.0 | 525 | 0.5537 | 0.58 |
| 1.4827 | 22.0 | 550 | 0.5616 | 0.43 |
| 1.4827 | 23.0 | 575 | 0.5607 | 0.34 |
| 1.4827 | 24.0 | 600 | 0.5616 | 0.39 |
| 1.4827 | 25.0 | 625 | 0.5597 | 0.56 |
| 1.4827 | 26.0 | 650 | 0.5623 | 0.41 |
| 1.4827 | 27.0 | 675 | 0.5612 | 0.43 |
| 1.4827 | 28.0 | 700 | 0.5573 | 0.57 |
| 1.4827 | 29.0 | 725 | 0.5631 | 0.42 |
| 1.4827 | 30.0 | 750 | 0.5594 | 0.51 |
| 1.4827 | 31.0 | 775 | 0.5593 | 0.56 |
| 1.4827 | 32.0 | 800 | 0.5646 | 0.43 |
| 1.4827 | 33.0 | 825 | 0.5664 | 0.44 |
| 1.4827 | 34.0 | 850 | 0.5597 | 0.56 |
| 1.4827 | 35.0 | 875 | 0.5629 | 0.41 |
| 1.4827 | 36.0 | 900 | 0.5610 | 0.43 |
| 1.4827 | 37.0 | 925 | 0.5572 | 0.58 |
| 1.4827 | 38.0 | 950 | 0.5592 | 0.6 |
| 1.4827 | 39.0 | 975 | 0.5553 | 0.59 |
| 1.1505 | 40.0 | 1000 | 0.5597 | 0.58 |
| 1.1505 | 41.0 | 1025 | 0.5570 | 0.62 |
| 1.1505 | 42.0 | 1050 | 0.5582 | 0.6 |
| 1.1505 | 43.0 | 1075 | 0.5601 | 0.46 |
| 1.1505 | 44.0 | 1100 | 0.5598 | 0.55 |
| 1.1505 | 45.0 | 1125 | 0.5574 | 0.59 |
| 1.1505 | 46.0 | 1150 | 0.5591 | 0.52 |
| 1.1505 | 47.0 | 1175 | 0.5601 | 0.5 |
| 1.1505 | 48.0 | 1200 | 0.5593 | 0.56 |
| 1.1505 | 49.0 | 1225 | 0.5600 | 0.48 |
| 1.1505 | 50.0 | 1250 | 0.5620 | 0.39 |
| 1.1505 | 51.0 | 1275 | 0.5598 | 0.51 |
| 1.1505 | 52.0 | 1300 | 0.5616 | 0.39 |
| 1.1505 | 53.0 | 1325 | 0.5601 | 0.43 |
| 1.1505 | 54.0 | 1350 | 0.5617 | 0.4 |
| 1.1505 | 55.0 | 1375 | 0.5619 | 0.41 |
| 1.1505 | 56.0 | 1400 | 0.5625 | 0.39 |
| 1.1505 | 57.0 | 1425 | 0.5591 | 0.56 |
| 1.1505 | 58.0 | 1450 | 0.5588 | 0.59 |
| 1.1505 | 59.0 | 1475 | 0.5580 | 0.59 |
| 0.9071 | 60.0 | 1500 | 0.5584 | 0.62 |
| 0.9071 | 61.0 | 1525 | 0.5590 | 0.58 |
| 0.9071 | 62.0 | 1550 | 0.5585 | 0.57 |
| 0.9071 | 63.0 | 1575 | 0.5586 | 0.59 |
| 0.9071 | 64.0 | 1600 | 0.5589 | 0.57 |
| 0.9071 | 65.0 | 1625 | 0.5587 | 0.59 |
| 0.9071 | 66.0 | 1650 | 0.5588 | 0.61 |
| 0.9071 | 67.0 | 1675 | 0.5592 | 0.57 |
| 0.9071 | 68.0 | 1700 | 0.5579 | 0.58 |
| 0.9071 | 69.0 | 1725 | 0.5586 | 0.56 |
| 0.9071 | 70.0 | 1750 | 0.5590 | 0.57 |
| 0.9071 | 71.0 | 1775 | 0.5590 | 0.57 |
| 0.9071 | 72.0 | 1800 | 0.5590 | 0.59 |
| 0.9071 | 73.0 | 1825 | 0.5591 | 0.56 |
| 0.9071 | 74.0 | 1850 | 0.5586 | 0.56 |
| 0.9071 | 75.0 | 1875 | 0.5590 | 0.56 |
| 0.9071 | 76.0 | 1900 | 0.5592 | 0.57 |
| 0.9071 | 77.0 | 1925 | 0.5587 | 0.53 |
| 0.9071 | 78.0 | 1950 | 0.5588 | 0.56 |
| 0.9071 | 79.0 | 1975 | 0.5589 | 0.58 |
| 0.7248 | 80.0 | 2000 | 0.5588 | 0.56 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_syl_cv12_pad_lob100_low__0190
|
bigmorning
| 2023-08-26T00:43:58Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-26T00:43:50Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100_low__0190
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100_low__0190
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0004
- Train Accuracy: 0.0362
- Train Wermet: 0.0035
- Validation Loss: 0.7719
- Validation Accuracy: 0.0237
- Validation Wermet: 0.2214
- Epoch: 189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an `AdamWeightDecay` sketch follows the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
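The optimizer dictionary above matches the `AdamWeightDecay` class that `transformers` provides for TF/Keras training; a minimal sketch of rebuilding it is below. Compiling `openai/whisper-tiny` this way is an assumption, since the actual Keras training loop is not included in this card.
```python
# Hedged sketch: rebuild the reported optimizer for TF/Keras training of tiny Whisper.
from transformers import AdamWeightDecay, TFWhisperForConditionalGeneration

optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)

model = TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model.compile(optimizer=optimizer)  # loss is computed internally from the labels
```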
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.2930 | 0.0113 | 2.0658 | 3.9415 | 0.0117 | 0.9401 | 0 |
| 4.6215 | 0.0121 | 0.8917 | 3.7803 | 0.0120 | 0.9294 | 1 |
| 4.4086 | 0.0128 | 0.8403 | 3.6070 | 0.0124 | 0.9223 | 2 |
| 4.1842 | 0.0135 | 0.8337 | 3.4291 | 0.0128 | 0.8867 | 3 |
| 3.9981 | 0.0141 | 0.8182 | 3.3251 | 0.0131 | 0.8750 | 4 |
| 3.8531 | 0.0145 | 0.8058 | 3.2385 | 0.0133 | 0.8699 | 5 |
| 3.7345 | 0.0149 | 0.7925 | 3.1751 | 0.0134 | 0.8665 | 6 |
| 3.6307 | 0.0152 | 0.7851 | 3.1031 | 0.0136 | 0.8507 | 7 |
| 3.5437 | 0.0155 | 0.7717 | 3.0752 | 0.0138 | 0.8286 | 8 |
| 3.4649 | 0.0157 | 0.7651 | 3.0334 | 0.0139 | 0.8417 | 9 |
| 3.3926 | 0.0159 | 0.7531 | 3.0022 | 0.0139 | 0.8413 | 10 |
| 3.3262 | 0.0162 | 0.7462 | 2.9669 | 0.0140 | 0.8264 | 11 |
| 3.2625 | 0.0164 | 0.7367 | 2.9342 | 0.0141 | 0.8520 | 12 |
| 3.1979 | 0.0166 | 0.7231 | 2.9046 | 0.0144 | 0.8196 | 13 |
| 3.1319 | 0.0169 | 0.7133 | 2.8607 | 0.0145 | 0.8026 | 14 |
| 3.0616 | 0.0172 | 0.7007 | 2.8165 | 0.0146 | 0.7788 | 15 |
| 2.9792 | 0.0176 | 0.6816 | 2.7552 | 0.0149 | 0.7643 | 16 |
| 2.8905 | 0.0180 | 0.6641 | 2.6788 | 0.0151 | 0.7473 | 17 |
| 2.7749 | 0.0186 | 0.6424 | 2.5824 | 0.0155 | 0.7241 | 18 |
| 2.6263 | 0.0193 | 0.6159 | 2.4206 | 0.0161 | 0.7047 | 19 |
| 2.4352 | 0.0203 | 0.5829 | 2.2230 | 0.0168 | 0.6500 | 20 |
| 2.1941 | 0.0216 | 0.5411 | 2.0349 | 0.0175 | 0.5980 | 21 |
| 1.9184 | 0.0231 | 0.4922 | 1.7850 | 0.0184 | 0.5659 | 22 |
| 1.6174 | 0.0249 | 0.4371 | 1.5664 | 0.0192 | 0.5081 | 23 |
| 1.3542 | 0.0265 | 0.3851 | 1.3992 | 0.0199 | 0.4690 | 24 |
| 1.1499 | 0.0278 | 0.3408 | 1.2512 | 0.0205 | 0.4299 | 25 |
| 0.9878 | 0.0288 | 0.3029 | 1.1479 | 0.0209 | 0.4013 | 26 |
| 0.8600 | 0.0297 | 0.2735 | 1.0527 | 0.0213 | 0.3755 | 27 |
| 0.7516 | 0.0305 | 0.2441 | 0.9803 | 0.0216 | 0.3570 | 28 |
| 0.6626 | 0.0311 | 0.2197 | 0.9314 | 0.0219 | 0.3416 | 29 |
| 0.5863 | 0.0316 | 0.1993 | 0.8730 | 0.0221 | 0.3238 | 30 |
| 0.5187 | 0.0321 | 0.1775 | 0.8357 | 0.0223 | 0.3136 | 31 |
| 0.4608 | 0.0326 | 0.1610 | 0.8059 | 0.0224 | 0.3033 | 32 |
| 0.4087 | 0.0330 | 0.1467 | 0.7746 | 0.0226 | 0.2949 | 33 |
| 0.3642 | 0.0334 | 0.1298 | 0.7476 | 0.0227 | 0.2847 | 34 |
| 0.3221 | 0.0337 | 0.1168 | 0.7330 | 0.0228 | 0.2802 | 35 |
| 0.2837 | 0.0340 | 0.1030 | 0.7093 | 0.0229 | 0.2728 | 36 |
| 0.2509 | 0.0343 | 0.0882 | 0.6941 | 0.0229 | 0.2687 | 37 |
| 0.2209 | 0.0346 | 0.0747 | 0.6892 | 0.0230 | 0.2656 | 38 |
| 0.1934 | 0.0349 | 0.0670 | 0.6824 | 0.0230 | 0.2630 | 39 |
| 0.1688 | 0.0351 | 0.0542 | 0.6773 | 0.0230 | 0.2625 | 40 |
| 0.1469 | 0.0353 | 0.0429 | 0.6700 | 0.0231 | 0.2633 | 41 |
| 0.1268 | 0.0355 | 0.0365 | 0.6680 | 0.0231 | 0.2578 | 42 |
| 0.1086 | 0.0357 | 0.0284 | 0.6643 | 0.0231 | 0.2540 | 43 |
| 0.0920 | 0.0358 | 0.0221 | 0.6645 | 0.0231 | 0.2530 | 44 |
| 0.0783 | 0.0359 | 0.0169 | 0.6621 | 0.0232 | 0.2540 | 45 |
| 0.0667 | 0.0360 | 0.0121 | 0.6714 | 0.0232 | 0.2532 | 46 |
| 0.0563 | 0.0361 | 0.0094 | 0.6604 | 0.0232 | 0.2503 | 47 |
| 0.0477 | 0.0361 | 0.0072 | 0.6620 | 0.0232 | 0.2489 | 48 |
| 0.0397 | 0.0362 | 0.0055 | 0.6611 | 0.0232 | 0.2502 | 49 |
| 0.0330 | 0.0362 | 0.0045 | 0.6686 | 0.0232 | 0.2496 | 50 |
| 0.0283 | 0.0362 | 0.0033 | 0.6705 | 0.0232 | 0.2503 | 51 |
| 0.0242 | 0.0362 | 0.0034 | 0.6686 | 0.0232 | 0.2486 | 52 |
| 0.0212 | 0.0362 | 0.0031 | 0.6686 | 0.0232 | 0.2493 | 53 |
| 0.0197 | 0.0362 | 0.0028 | 0.6688 | 0.0232 | 0.2530 | 54 |
| 0.0226 | 0.0362 | 0.0041 | 0.6598 | 0.0233 | 0.2451 | 55 |
| 0.0158 | 0.0362 | 0.0024 | 0.6605 | 0.0233 | 0.2428 | 56 |
| 0.0115 | 0.0362 | 0.0018 | 0.6648 | 0.0233 | 0.2435 | 57 |
| 0.0094 | 0.0362 | 0.0017 | 0.6672 | 0.0233 | 0.2446 | 58 |
| 0.0081 | 0.0362 | 0.0018 | 0.6731 | 0.0233 | 0.2439 | 59 |
| 0.0071 | 0.0362 | 0.0017 | 0.6762 | 0.0233 | 0.2429 | 60 |
| 0.0062 | 0.0362 | 0.0017 | 0.6794 | 0.0233 | 0.2426 | 61 |
| 0.0055 | 0.0362 | 0.0017 | 0.6825 | 0.0233 | 0.2429 | 62 |
| 0.0048 | 0.0362 | 0.0017 | 0.6895 | 0.0233 | 0.2450 | 63 |
| 0.0042 | 0.0362 | 0.0019 | 0.6914 | 0.0233 | 0.2424 | 64 |
| 0.0037 | 0.0362 | 0.0018 | 0.6938 | 0.0233 | 0.2423 | 65 |
| 0.0224 | 0.0361 | 0.0080 | 0.6695 | 0.0234 | 0.2409 | 66 |
| 0.0127 | 0.0362 | 0.0037 | 0.6685 | 0.0234 | 0.2383 | 67 |
| 0.0065 | 0.0362 | 0.0017 | 0.6714 | 0.0234 | 0.2359 | 68 |
| 0.0045 | 0.0362 | 0.0017 | 0.6645 | 0.0234 | 0.2347 | 69 |
| 0.0034 | 0.0362 | 0.0016 | 0.6671 | 0.0234 | 0.2353 | 70 |
| 0.0028 | 0.0362 | 0.0014 | 0.6715 | 0.0234 | 0.2354 | 71 |
| 0.0024 | 0.0362 | 0.0014 | 0.6745 | 0.0234 | 0.2358 | 72 |
| 0.0022 | 0.0362 | 0.0014 | 0.6778 | 0.0234 | 0.2356 | 73 |
| 0.0020 | 0.0362 | 0.0013 | 0.6797 | 0.0234 | 0.2357 | 74 |
| 0.0018 | 0.0362 | 0.0014 | 0.6833 | 0.0234 | 0.2355 | 75 |
| 0.0016 | 0.0362 | 0.0013 | 0.6885 | 0.0234 | 0.2363 | 76 |
| 0.0068 | 0.0362 | 0.0035 | 0.7270 | 0.0232 | 0.2500 | 77 |
| 0.0131 | 0.0362 | 0.0076 | 0.6965 | 0.0234 | 0.2397 | 78 |
| 0.0054 | 0.0362 | 0.0088 | 0.6764 | 0.0235 | 0.2339 | 79 |
| 0.0029 | 0.0362 | 0.0041 | 0.6806 | 0.0235 | 0.2334 | 80 |
| 0.0019 | 0.0362 | 0.0039 | 0.6723 | 0.0235 | 0.2316 | 81 |
| 0.0016 | 0.0362 | 0.0028 | 0.6765 | 0.0235 | 0.2315 | 82 |
| 0.0014 | 0.0362 | 0.0025 | 0.6786 | 0.0235 | 0.2306 | 83 |
| 0.0013 | 0.0362 | 0.0023 | 0.6805 | 0.0235 | 0.2304 | 84 |
| 0.0012 | 0.0362 | 0.0022 | 0.6830 | 0.0235 | 0.2301 | 85 |
| 0.0011 | 0.0362 | 0.0022 | 0.6881 | 0.0235 | 0.2308 | 86 |
| 0.0010 | 0.0362 | 0.0022 | 0.6875 | 0.0235 | 0.2303 | 87 |
| 0.0009 | 0.0362 | 0.0022 | 0.6909 | 0.0235 | 0.2307 | 88 |
| 0.0008 | 0.0362 | 0.0020 | 0.6934 | 0.0235 | 0.2299 | 89 |
| 0.0007 | 0.0362 | 0.0022 | 0.6968 | 0.0235 | 0.2307 | 90 |
| 0.0007 | 0.0362 | 0.0020 | 0.7005 | 0.0235 | 0.2300 | 91 |
| 0.0006 | 0.0362 | 0.0021 | 0.7040 | 0.0235 | 0.2307 | 92 |
| 0.0006 | 0.0362 | 0.0020 | 0.7086 | 0.0235 | 0.2309 | 93 |
| 0.0005 | 0.0362 | 0.0020 | 0.7116 | 0.0235 | 0.2318 | 94 |
| 0.0005 | 0.0362 | 0.0018 | 0.7151 | 0.0235 | 0.2305 | 95 |
| 0.0111 | 0.0362 | 0.2014 | 0.7185 | 0.0234 | 0.2861 | 96 |
| 0.0069 | 0.0362 | 0.0051 | 0.7036 | 0.0235 | 0.2337 | 97 |
| 0.0028 | 0.0362 | 0.0015 | 0.6946 | 0.0235 | 0.2324 | 98 |
| 0.0023 | 0.0362 | 0.0018 | 0.6937 | 0.0235 | 0.2295 | 99 |
| 0.0017 | 0.0362 | 0.0013 | 0.6886 | 0.0235 | 0.2283 | 100 |
| 0.0010 | 0.0362 | 0.0008 | 0.6891 | 0.0236 | 0.2274 | 101 |
| 0.0009 | 0.0362 | 0.0013 | 0.6901 | 0.0236 | 0.2275 | 102 |
| 0.0008 | 0.0362 | 0.0015 | 0.6922 | 0.0236 | 0.2273 | 103 |
| 0.0006 | 0.0362 | 0.0015 | 0.6923 | 0.0236 | 0.2274 | 104 |
| 0.0008 | 0.0362 | 0.0014 | 0.6996 | 0.0235 | 0.2288 | 105 |
| 0.0006 | 0.0362 | 0.0014 | 0.6967 | 0.0236 | 0.2266 | 106 |
| 0.0005 | 0.0362 | 0.0013 | 0.6988 | 0.0236 | 0.2260 | 107 |
| 0.0004 | 0.0362 | 0.0027 | 0.7008 | 0.0236 | 0.2278 | 108 |
| 0.0004 | 0.0362 | 0.0017 | 0.7034 | 0.0236 | 0.2261 | 109 |
| 0.0004 | 0.0362 | 0.0018 | 0.7036 | 0.0236 | 0.2265 | 110 |
| 0.0004 | 0.0362 | 0.0015 | 0.7090 | 0.0236 | 0.2255 | 111 |
| 0.0112 | 0.0362 | 0.0059 | 0.7014 | 0.0235 | 0.2271 | 112 |
| 0.0034 | 0.0362 | 0.0023 | 0.6869 | 0.0236 | 0.2252 | 113 |
| 0.0015 | 0.0362 | 0.0015 | 0.6863 | 0.0236 | 0.2234 | 114 |
| 0.0008 | 0.0362 | 0.0010 | 0.6893 | 0.0236 | 0.2227 | 115 |
| 0.0006 | 0.0362 | 0.0011 | 0.6911 | 0.0236 | 0.2232 | 116 |
| 0.0005 | 0.0362 | 0.0009 | 0.6923 | 0.0236 | 0.2227 | 117 |
| 0.0004 | 0.0362 | 0.0009 | 0.6938 | 0.0236 | 0.2225 | 118 |
| 0.0004 | 0.0362 | 0.0010 | 0.6958 | 0.0236 | 0.2226 | 119 |
| 0.0003 | 0.0362 | 0.0010 | 0.6966 | 0.0236 | 0.2226 | 120 |
| 0.0003 | 0.0362 | 0.0010 | 0.6983 | 0.0236 | 0.2230 | 121 |
| 0.0003 | 0.0362 | 0.0010 | 0.7005 | 0.0236 | 0.2229 | 122 |
| 0.0003 | 0.0362 | 0.0010 | 0.7022 | 0.0236 | 0.2233 | 123 |
| 0.0002 | 0.0362 | 0.0010 | 0.7041 | 0.0236 | 0.2226 | 124 |
| 0.0002 | 0.0362 | 0.0011 | 0.7065 | 0.0236 | 0.2228 | 125 |
| 0.0002 | 0.0362 | 0.0011 | 0.7081 | 0.0236 | 0.2227 | 126 |
| 0.0002 | 0.0362 | 0.0011 | 0.7101 | 0.0236 | 0.2224 | 127 |
| 0.0002 | 0.0362 | 0.0011 | 0.7130 | 0.0236 | 0.2224 | 128 |
| 0.0002 | 0.0362 | 0.0011 | 0.7157 | 0.0236 | 0.2229 | 129 |
| 0.0002 | 0.0362 | 0.0011 | 0.7183 | 0.0236 | 0.2225 | 130 |
| 0.0001 | 0.0362 | 0.0011 | 0.7212 | 0.0236 | 0.2230 | 131 |
| 0.0001 | 0.0362 | 0.0012 | 0.7250 | 0.0236 | 0.2230 | 132 |
| 0.0001 | 0.0362 | 0.0012 | 0.7268 | 0.0236 | 0.2229 | 133 |
| 0.0001 | 0.0362 | 0.0011 | 0.7303 | 0.0236 | 0.2229 | 134 |
| 0.0001 | 0.0362 | 0.0012 | 0.7350 | 0.0236 | 0.2236 | 135 |
| 0.0001 | 0.0362 | 0.0012 | 0.7386 | 0.0236 | 0.2240 | 136 |
| 0.0001 | 0.0362 | 0.0012 | 0.7422 | 0.0236 | 0.2231 | 137 |
| 0.0001 | 0.0362 | 0.0013 | 0.7445 | 0.0236 | 0.2236 | 138 |
| 0.0001 | 0.0362 | 0.0012 | 0.7500 | 0.0236 | 0.2243 | 139 |
| 0.0112 | 0.0361 | 0.0117 | 0.7391 | 0.0235 | 0.2370 | 140 |
| 0.0036 | 0.0362 | 0.0041 | 0.7201 | 0.0236 | 0.2277 | 141 |
| 0.0011 | 0.0362 | 0.0032 | 0.7210 | 0.0236 | 0.2243 | 142 |
| 0.0006 | 0.0362 | 0.0030 | 0.7199 | 0.0236 | 0.2269 | 143 |
| 0.0003 | 0.0362 | 0.0019 | 0.7231 | 0.0236 | 0.2254 | 144 |
| 0.0002 | 0.0362 | 0.0021 | 0.7179 | 0.0236 | 0.2228 | 145 |
| 0.0002 | 0.0362 | 0.0020 | 0.7236 | 0.0236 | 0.2234 | 146 |
| 0.0002 | 0.0362 | 0.0021 | 0.7271 | 0.0236 | 0.2254 | 147 |
| 0.0002 | 0.0362 | 0.0022 | 0.7250 | 0.0236 | 0.2233 | 148 |
| 0.0001 | 0.0362 | 0.0021 | 0.7255 | 0.0236 | 0.2230 | 149 |
| 0.0001 | 0.0362 | 0.0020 | 0.7263 | 0.0236 | 0.2228 | 150 |
| 0.0001 | 0.0362 | 0.0021 | 0.7278 | 0.0236 | 0.2226 | 151 |
| 0.0001 | 0.0362 | 0.0021 | 0.7289 | 0.0237 | 0.2220 | 152 |
| 0.0001 | 0.0362 | 0.0020 | 0.7301 | 0.0237 | 0.2214 | 153 |
| 0.0001 | 0.0362 | 0.0020 | 0.7307 | 0.0237 | 0.2216 | 154 |
| 0.0001 | 0.0362 | 0.0020 | 0.7329 | 0.0237 | 0.2217 | 155 |
| 0.0001 | 0.0362 | 0.0020 | 0.7339 | 0.0237 | 0.2211 | 156 |
| 0.0001 | 0.0362 | 0.0020 | 0.7354 | 0.0237 | 0.2210 | 157 |
| 0.0001 | 0.0362 | 0.0020 | 0.7374 | 0.0237 | 0.2207 | 158 |
| 0.0001 | 0.0362 | 0.0020 | 0.7394 | 0.0237 | 0.2211 | 159 |
| 0.0001 | 0.0362 | 0.0020 | 0.7406 | 0.0237 | 0.2212 | 160 |
| 0.0001 | 0.0362 | 0.0021 | 0.7422 | 0.0237 | 0.2213 | 161 |
| 0.0001 | 0.0362 | 0.0020 | 0.7446 | 0.0237 | 0.2207 | 162 |
| 0.0001 | 0.0362 | 0.0020 | 0.7471 | 0.0237 | 0.2209 | 163 |
| 0.0000 | 0.0362 | 0.0020 | 0.7502 | 0.0237 | 0.2206 | 164 |
| 0.0000 | 0.0362 | 0.0021 | 0.7518 | 0.0237 | 0.2210 | 165 |
| 0.0000 | 0.0362 | 0.0021 | 0.7533 | 0.0237 | 0.2207 | 166 |
| 0.0000 | 0.0362 | 0.0021 | 0.7566 | 0.0237 | 0.2204 | 167 |
| 0.0000 | 0.0362 | 0.0021 | 0.7590 | 0.0237 | 0.2203 | 168 |
| 0.0000 | 0.0362 | 0.0022 | 0.7617 | 0.0237 | 0.2208 | 169 |
| 0.0000 | 0.0362 | 0.0022 | 0.7644 | 0.0237 | 0.2207 | 170 |
| 0.0000 | 0.0362 | 0.0022 | 0.7685 | 0.0237 | 0.2206 | 171 |
| 0.0000 | 0.0362 | 0.0022 | 0.7710 | 0.0237 | 0.2203 | 172 |
| 0.0000 | 0.0362 | 0.0022 | 0.7757 | 0.0236 | 0.2212 | 173 |
| 0.0000 | 0.0362 | 0.0023 | 0.7803 | 0.0236 | 0.2214 | 174 |
| 0.0000 | 0.0362 | 0.0024 | 0.7834 | 0.0236 | 0.2210 | 175 |
| 0.0000 | 0.0362 | 0.0024 | 0.7863 | 0.0237 | 0.2209 | 176 |
| 0.0000 | 0.0362 | 0.0024 | 0.7909 | 0.0236 | 0.2214 | 177 |
| 0.0000 | 0.0362 | 0.0024 | 0.7940 | 0.0237 | 0.2208 | 178 |
| 0.0000 | 0.0362 | 0.0025 | 0.7999 | 0.0236 | 0.2214 | 179 |
| 0.0000 | 0.0362 | 0.0025 | 0.8032 | 0.0236 | 0.2212 | 180 |
| 0.0000 | 0.0362 | 0.0025 | 0.8074 | 0.0236 | 0.2215 | 181 |
| 0.0000 | 0.0362 | 0.0027 | 0.8113 | 0.0236 | 0.2211 | 182 |
| 0.0000 | 0.0362 | 0.0027 | 0.8145 | 0.0236 | 0.2217 | 183 |
| 0.0000 | 0.0362 | 0.0028 | 0.8198 | 0.0236 | 0.2216 | 184 |
| 0.0080 | 0.0362 | 0.0076 | 0.8088 | 0.0235 | 0.2315 | 185 |
| 0.0063 | 0.0362 | 0.0071 | 0.8072 | 0.0235 | 0.2340 | 186 |
| 0.0022 | 0.0362 | 0.0032 | 0.7840 | 0.0236 | 0.2280 | 187 |
| 0.0007 | 0.0362 | 0.0029 | 0.7713 | 0.0236 | 0.2271 | 188 |
| 0.0004 | 0.0362 | 0.0035 | 0.7719 | 0.0237 | 0.2214 | 189 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
dkqjrm/20230826073557
|
dkqjrm
| 2023-08-26T00:20:36Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-25T22:36:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826073557'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826073557
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4014
- Accuracy: 0.72
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.02
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.4958 | 0.46 |
| No log | 2.0 | 50 | 0.5956 | 0.54 |
| No log | 3.0 | 75 | 0.5377 | 0.45 |
| No log | 4.0 | 100 | 0.4202 | 0.61 |
| No log | 5.0 | 125 | 0.4367 | 0.44 |
| No log | 6.0 | 150 | 0.4370 | 0.51 |
| No log | 7.0 | 175 | 0.4207 | 0.66 |
| No log | 8.0 | 200 | 0.4423 | 0.58 |
| No log | 9.0 | 225 | 0.4107 | 0.61 |
| No log | 10.0 | 250 | 0.4332 | 0.64 |
| No log | 11.0 | 275 | 0.4055 | 0.6 |
| No log | 12.0 | 300 | 0.4376 | 0.63 |
| No log | 13.0 | 325 | 0.4062 | 0.57 |
| No log | 14.0 | 350 | 0.4000 | 0.61 |
| No log | 15.0 | 375 | 0.4052 | 0.63 |
| No log | 16.0 | 400 | 0.3961 | 0.68 |
| No log | 17.0 | 425 | 0.3976 | 0.67 |
| No log | 18.0 | 450 | 0.4186 | 0.65 |
| No log | 19.0 | 475 | 0.4304 | 0.63 |
| 0.731 | 20.0 | 500 | 0.4358 | 0.69 |
| 0.731 | 21.0 | 525 | 0.4135 | 0.68 |
| 0.731 | 22.0 | 550 | 0.4180 | 0.68 |
| 0.731 | 23.0 | 575 | 0.4627 | 0.66 |
| 0.731 | 24.0 | 600 | 0.4150 | 0.65 |
| 0.731 | 25.0 | 625 | 0.4005 | 0.67 |
| 0.731 | 26.0 | 650 | 0.4123 | 0.7 |
| 0.731 | 27.0 | 675 | 0.4342 | 0.69 |
| 0.731 | 28.0 | 700 | 0.4551 | 0.67 |
| 0.731 | 29.0 | 725 | 0.4222 | 0.69 |
| 0.731 | 30.0 | 750 | 0.4226 | 0.71 |
| 0.731 | 31.0 | 775 | 0.4702 | 0.69 |
| 0.731 | 32.0 | 800 | 0.4100 | 0.7 |
| 0.731 | 33.0 | 825 | 0.4318 | 0.69 |
| 0.731 | 34.0 | 850 | 0.4447 | 0.71 |
| 0.731 | 35.0 | 875 | 0.3881 | 0.72 |
| 0.731 | 36.0 | 900 | 0.4234 | 0.69 |
| 0.731 | 37.0 | 925 | 0.4869 | 0.69 |
| 0.731 | 38.0 | 950 | 0.4352 | 0.71 |
| 0.731 | 39.0 | 975 | 0.4465 | 0.71 |
| 0.5086 | 40.0 | 1000 | 0.4135 | 0.7 |
| 0.5086 | 41.0 | 1025 | 0.4061 | 0.7 |
| 0.5086 | 42.0 | 1050 | 0.4437 | 0.72 |
| 0.5086 | 43.0 | 1075 | 0.4461 | 0.72 |
| 0.5086 | 44.0 | 1100 | 0.4144 | 0.69 |
| 0.5086 | 45.0 | 1125 | 0.3973 | 0.71 |
| 0.5086 | 46.0 | 1150 | 0.4511 | 0.73 |
| 0.5086 | 47.0 | 1175 | 0.4273 | 0.71 |
| 0.5086 | 48.0 | 1200 | 0.4100 | 0.71 |
| 0.5086 | 49.0 | 1225 | 0.4209 | 0.72 |
| 0.5086 | 50.0 | 1250 | 0.4191 | 0.74 |
| 0.5086 | 51.0 | 1275 | 0.4023 | 0.74 |
| 0.5086 | 52.0 | 1300 | 0.4038 | 0.72 |
| 0.5086 | 53.0 | 1325 | 0.4148 | 0.73 |
| 0.5086 | 54.0 | 1350 | 0.4263 | 0.72 |
| 0.5086 | 55.0 | 1375 | 0.4331 | 0.73 |
| 0.5086 | 56.0 | 1400 | 0.4373 | 0.71 |
| 0.5086 | 57.0 | 1425 | 0.4081 | 0.72 |
| 0.5086 | 58.0 | 1450 | 0.4078 | 0.71 |
| 0.5086 | 59.0 | 1475 | 0.4250 | 0.72 |
| 0.4268 | 60.0 | 1500 | 0.4224 | 0.7 |
| 0.4268 | 61.0 | 1525 | 0.4255 | 0.7 |
| 0.4268 | 62.0 | 1550 | 0.4114 | 0.72 |
| 0.4268 | 63.0 | 1575 | 0.4266 | 0.72 |
| 0.4268 | 64.0 | 1600 | 0.4097 | 0.72 |
| 0.4268 | 65.0 | 1625 | 0.4053 | 0.72 |
| 0.4268 | 66.0 | 1650 | 0.4051 | 0.71 |
| 0.4268 | 67.0 | 1675 | 0.4135 | 0.73 |
| 0.4268 | 68.0 | 1700 | 0.3959 | 0.74 |
| 0.4268 | 69.0 | 1725 | 0.4162 | 0.72 |
| 0.4268 | 70.0 | 1750 | 0.4061 | 0.73 |
| 0.4268 | 71.0 | 1775 | 0.4016 | 0.71 |
| 0.4268 | 72.0 | 1800 | 0.4194 | 0.71 |
| 0.4268 | 73.0 | 1825 | 0.4098 | 0.72 |
| 0.4268 | 74.0 | 1850 | 0.4179 | 0.71 |
| 0.4268 | 75.0 | 1875 | 0.4105 | 0.71 |
| 0.4268 | 76.0 | 1900 | 0.4140 | 0.72 |
| 0.4268 | 77.0 | 1925 | 0.4081 | 0.73 |
| 0.4268 | 78.0 | 1950 | 0.4044 | 0.73 |
| 0.4268 | 79.0 | 1975 | 0.3996 | 0.72 |
| 0.3915 | 80.0 | 2000 | 0.4014 | 0.72 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bigmorning/whisper_syl_cv12_pad_lob100_low__0180
|
bigmorning
| 2023-08-26T00:17:43Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-26T00:17:35Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100_low__0180
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100_low__0180
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0362
- Train Wermet: 0.0025
- Validation Loss: 0.7999
- Validation Accuracy: 0.0236
- Validation Wermet: 0.2214
- Epoch: 179
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.2930 | 0.0113 | 2.0658 | 3.9415 | 0.0117 | 0.9401 | 0 |
| 4.6215 | 0.0121 | 0.8917 | 3.7803 | 0.0120 | 0.9294 | 1 |
| 4.4086 | 0.0128 | 0.8403 | 3.6070 | 0.0124 | 0.9223 | 2 |
| 4.1842 | 0.0135 | 0.8337 | 3.4291 | 0.0128 | 0.8867 | 3 |
| 3.9981 | 0.0141 | 0.8182 | 3.3251 | 0.0131 | 0.8750 | 4 |
| 3.8531 | 0.0145 | 0.8058 | 3.2385 | 0.0133 | 0.8699 | 5 |
| 3.7345 | 0.0149 | 0.7925 | 3.1751 | 0.0134 | 0.8665 | 6 |
| 3.6307 | 0.0152 | 0.7851 | 3.1031 | 0.0136 | 0.8507 | 7 |
| 3.5437 | 0.0155 | 0.7717 | 3.0752 | 0.0138 | 0.8286 | 8 |
| 3.4649 | 0.0157 | 0.7651 | 3.0334 | 0.0139 | 0.8417 | 9 |
| 3.3926 | 0.0159 | 0.7531 | 3.0022 | 0.0139 | 0.8413 | 10 |
| 3.3262 | 0.0162 | 0.7462 | 2.9669 | 0.0140 | 0.8264 | 11 |
| 3.2625 | 0.0164 | 0.7367 | 2.9342 | 0.0141 | 0.8520 | 12 |
| 3.1979 | 0.0166 | 0.7231 | 2.9046 | 0.0144 | 0.8196 | 13 |
| 3.1319 | 0.0169 | 0.7133 | 2.8607 | 0.0145 | 0.8026 | 14 |
| 3.0616 | 0.0172 | 0.7007 | 2.8165 | 0.0146 | 0.7788 | 15 |
| 2.9792 | 0.0176 | 0.6816 | 2.7552 | 0.0149 | 0.7643 | 16 |
| 2.8905 | 0.0180 | 0.6641 | 2.6788 | 0.0151 | 0.7473 | 17 |
| 2.7749 | 0.0186 | 0.6424 | 2.5824 | 0.0155 | 0.7241 | 18 |
| 2.6263 | 0.0193 | 0.6159 | 2.4206 | 0.0161 | 0.7047 | 19 |
| 2.4352 | 0.0203 | 0.5829 | 2.2230 | 0.0168 | 0.6500 | 20 |
| 2.1941 | 0.0216 | 0.5411 | 2.0349 | 0.0175 | 0.5980 | 21 |
| 1.9184 | 0.0231 | 0.4922 | 1.7850 | 0.0184 | 0.5659 | 22 |
| 1.6174 | 0.0249 | 0.4371 | 1.5664 | 0.0192 | 0.5081 | 23 |
| 1.3542 | 0.0265 | 0.3851 | 1.3992 | 0.0199 | 0.4690 | 24 |
| 1.1499 | 0.0278 | 0.3408 | 1.2512 | 0.0205 | 0.4299 | 25 |
| 0.9878 | 0.0288 | 0.3029 | 1.1479 | 0.0209 | 0.4013 | 26 |
| 0.8600 | 0.0297 | 0.2735 | 1.0527 | 0.0213 | 0.3755 | 27 |
| 0.7516 | 0.0305 | 0.2441 | 0.9803 | 0.0216 | 0.3570 | 28 |
| 0.6626 | 0.0311 | 0.2197 | 0.9314 | 0.0219 | 0.3416 | 29 |
| 0.5863 | 0.0316 | 0.1993 | 0.8730 | 0.0221 | 0.3238 | 30 |
| 0.5187 | 0.0321 | 0.1775 | 0.8357 | 0.0223 | 0.3136 | 31 |
| 0.4608 | 0.0326 | 0.1610 | 0.8059 | 0.0224 | 0.3033 | 32 |
| 0.4087 | 0.0330 | 0.1467 | 0.7746 | 0.0226 | 0.2949 | 33 |
| 0.3642 | 0.0334 | 0.1298 | 0.7476 | 0.0227 | 0.2847 | 34 |
| 0.3221 | 0.0337 | 0.1168 | 0.7330 | 0.0228 | 0.2802 | 35 |
| 0.2837 | 0.0340 | 0.1030 | 0.7093 | 0.0229 | 0.2728 | 36 |
| 0.2509 | 0.0343 | 0.0882 | 0.6941 | 0.0229 | 0.2687 | 37 |
| 0.2209 | 0.0346 | 0.0747 | 0.6892 | 0.0230 | 0.2656 | 38 |
| 0.1934 | 0.0349 | 0.0670 | 0.6824 | 0.0230 | 0.2630 | 39 |
| 0.1688 | 0.0351 | 0.0542 | 0.6773 | 0.0230 | 0.2625 | 40 |
| 0.1469 | 0.0353 | 0.0429 | 0.6700 | 0.0231 | 0.2633 | 41 |
| 0.1268 | 0.0355 | 0.0365 | 0.6680 | 0.0231 | 0.2578 | 42 |
| 0.1086 | 0.0357 | 0.0284 | 0.6643 | 0.0231 | 0.2540 | 43 |
| 0.0920 | 0.0358 | 0.0221 | 0.6645 | 0.0231 | 0.2530 | 44 |
| 0.0783 | 0.0359 | 0.0169 | 0.6621 | 0.0232 | 0.2540 | 45 |
| 0.0667 | 0.0360 | 0.0121 | 0.6714 | 0.0232 | 0.2532 | 46 |
| 0.0563 | 0.0361 | 0.0094 | 0.6604 | 0.0232 | 0.2503 | 47 |
| 0.0477 | 0.0361 | 0.0072 | 0.6620 | 0.0232 | 0.2489 | 48 |
| 0.0397 | 0.0362 | 0.0055 | 0.6611 | 0.0232 | 0.2502 | 49 |
| 0.0330 | 0.0362 | 0.0045 | 0.6686 | 0.0232 | 0.2496 | 50 |
| 0.0283 | 0.0362 | 0.0033 | 0.6705 | 0.0232 | 0.2503 | 51 |
| 0.0242 | 0.0362 | 0.0034 | 0.6686 | 0.0232 | 0.2486 | 52 |
| 0.0212 | 0.0362 | 0.0031 | 0.6686 | 0.0232 | 0.2493 | 53 |
| 0.0197 | 0.0362 | 0.0028 | 0.6688 | 0.0232 | 0.2530 | 54 |
| 0.0226 | 0.0362 | 0.0041 | 0.6598 | 0.0233 | 0.2451 | 55 |
| 0.0158 | 0.0362 | 0.0024 | 0.6605 | 0.0233 | 0.2428 | 56 |
| 0.0115 | 0.0362 | 0.0018 | 0.6648 | 0.0233 | 0.2435 | 57 |
| 0.0094 | 0.0362 | 0.0017 | 0.6672 | 0.0233 | 0.2446 | 58 |
| 0.0081 | 0.0362 | 0.0018 | 0.6731 | 0.0233 | 0.2439 | 59 |
| 0.0071 | 0.0362 | 0.0017 | 0.6762 | 0.0233 | 0.2429 | 60 |
| 0.0062 | 0.0362 | 0.0017 | 0.6794 | 0.0233 | 0.2426 | 61 |
| 0.0055 | 0.0362 | 0.0017 | 0.6825 | 0.0233 | 0.2429 | 62 |
| 0.0048 | 0.0362 | 0.0017 | 0.6895 | 0.0233 | 0.2450 | 63 |
| 0.0042 | 0.0362 | 0.0019 | 0.6914 | 0.0233 | 0.2424 | 64 |
| 0.0037 | 0.0362 | 0.0018 | 0.6938 | 0.0233 | 0.2423 | 65 |
| 0.0224 | 0.0361 | 0.0080 | 0.6695 | 0.0234 | 0.2409 | 66 |
| 0.0127 | 0.0362 | 0.0037 | 0.6685 | 0.0234 | 0.2383 | 67 |
| 0.0065 | 0.0362 | 0.0017 | 0.6714 | 0.0234 | 0.2359 | 68 |
| 0.0045 | 0.0362 | 0.0017 | 0.6645 | 0.0234 | 0.2347 | 69 |
| 0.0034 | 0.0362 | 0.0016 | 0.6671 | 0.0234 | 0.2353 | 70 |
| 0.0028 | 0.0362 | 0.0014 | 0.6715 | 0.0234 | 0.2354 | 71 |
| 0.0024 | 0.0362 | 0.0014 | 0.6745 | 0.0234 | 0.2358 | 72 |
| 0.0022 | 0.0362 | 0.0014 | 0.6778 | 0.0234 | 0.2356 | 73 |
| 0.0020 | 0.0362 | 0.0013 | 0.6797 | 0.0234 | 0.2357 | 74 |
| 0.0018 | 0.0362 | 0.0014 | 0.6833 | 0.0234 | 0.2355 | 75 |
| 0.0016 | 0.0362 | 0.0013 | 0.6885 | 0.0234 | 0.2363 | 76 |
| 0.0068 | 0.0362 | 0.0035 | 0.7270 | 0.0232 | 0.2500 | 77 |
| 0.0131 | 0.0362 | 0.0076 | 0.6965 | 0.0234 | 0.2397 | 78 |
| 0.0054 | 0.0362 | 0.0088 | 0.6764 | 0.0235 | 0.2339 | 79 |
| 0.0029 | 0.0362 | 0.0041 | 0.6806 | 0.0235 | 0.2334 | 80 |
| 0.0019 | 0.0362 | 0.0039 | 0.6723 | 0.0235 | 0.2316 | 81 |
| 0.0016 | 0.0362 | 0.0028 | 0.6765 | 0.0235 | 0.2315 | 82 |
| 0.0014 | 0.0362 | 0.0025 | 0.6786 | 0.0235 | 0.2306 | 83 |
| 0.0013 | 0.0362 | 0.0023 | 0.6805 | 0.0235 | 0.2304 | 84 |
| 0.0012 | 0.0362 | 0.0022 | 0.6830 | 0.0235 | 0.2301 | 85 |
| 0.0011 | 0.0362 | 0.0022 | 0.6881 | 0.0235 | 0.2308 | 86 |
| 0.0010 | 0.0362 | 0.0022 | 0.6875 | 0.0235 | 0.2303 | 87 |
| 0.0009 | 0.0362 | 0.0022 | 0.6909 | 0.0235 | 0.2307 | 88 |
| 0.0008 | 0.0362 | 0.0020 | 0.6934 | 0.0235 | 0.2299 | 89 |
| 0.0007 | 0.0362 | 0.0022 | 0.6968 | 0.0235 | 0.2307 | 90 |
| 0.0007 | 0.0362 | 0.0020 | 0.7005 | 0.0235 | 0.2300 | 91 |
| 0.0006 | 0.0362 | 0.0021 | 0.7040 | 0.0235 | 0.2307 | 92 |
| 0.0006 | 0.0362 | 0.0020 | 0.7086 | 0.0235 | 0.2309 | 93 |
| 0.0005 | 0.0362 | 0.0020 | 0.7116 | 0.0235 | 0.2318 | 94 |
| 0.0005 | 0.0362 | 0.0018 | 0.7151 | 0.0235 | 0.2305 | 95 |
| 0.0111 | 0.0362 | 0.2014 | 0.7185 | 0.0234 | 0.2861 | 96 |
| 0.0069 | 0.0362 | 0.0051 | 0.7036 | 0.0235 | 0.2337 | 97 |
| 0.0028 | 0.0362 | 0.0015 | 0.6946 | 0.0235 | 0.2324 | 98 |
| 0.0023 | 0.0362 | 0.0018 | 0.6937 | 0.0235 | 0.2295 | 99 |
| 0.0017 | 0.0362 | 0.0013 | 0.6886 | 0.0235 | 0.2283 | 100 |
| 0.0010 | 0.0362 | 0.0008 | 0.6891 | 0.0236 | 0.2274 | 101 |
| 0.0009 | 0.0362 | 0.0013 | 0.6901 | 0.0236 | 0.2275 | 102 |
| 0.0008 | 0.0362 | 0.0015 | 0.6922 | 0.0236 | 0.2273 | 103 |
| 0.0006 | 0.0362 | 0.0015 | 0.6923 | 0.0236 | 0.2274 | 104 |
| 0.0008 | 0.0362 | 0.0014 | 0.6996 | 0.0235 | 0.2288 | 105 |
| 0.0006 | 0.0362 | 0.0014 | 0.6967 | 0.0236 | 0.2266 | 106 |
| 0.0005 | 0.0362 | 0.0013 | 0.6988 | 0.0236 | 0.2260 | 107 |
| 0.0004 | 0.0362 | 0.0027 | 0.7008 | 0.0236 | 0.2278 | 108 |
| 0.0004 | 0.0362 | 0.0017 | 0.7034 | 0.0236 | 0.2261 | 109 |
| 0.0004 | 0.0362 | 0.0018 | 0.7036 | 0.0236 | 0.2265 | 110 |
| 0.0004 | 0.0362 | 0.0015 | 0.7090 | 0.0236 | 0.2255 | 111 |
| 0.0112 | 0.0362 | 0.0059 | 0.7014 | 0.0235 | 0.2271 | 112 |
| 0.0034 | 0.0362 | 0.0023 | 0.6869 | 0.0236 | 0.2252 | 113 |
| 0.0015 | 0.0362 | 0.0015 | 0.6863 | 0.0236 | 0.2234 | 114 |
| 0.0008 | 0.0362 | 0.0010 | 0.6893 | 0.0236 | 0.2227 | 115 |
| 0.0006 | 0.0362 | 0.0011 | 0.6911 | 0.0236 | 0.2232 | 116 |
| 0.0005 | 0.0362 | 0.0009 | 0.6923 | 0.0236 | 0.2227 | 117 |
| 0.0004 | 0.0362 | 0.0009 | 0.6938 | 0.0236 | 0.2225 | 118 |
| 0.0004 | 0.0362 | 0.0010 | 0.6958 | 0.0236 | 0.2226 | 119 |
| 0.0003 | 0.0362 | 0.0010 | 0.6966 | 0.0236 | 0.2226 | 120 |
| 0.0003 | 0.0362 | 0.0010 | 0.6983 | 0.0236 | 0.2230 | 121 |
| 0.0003 | 0.0362 | 0.0010 | 0.7005 | 0.0236 | 0.2229 | 122 |
| 0.0003 | 0.0362 | 0.0010 | 0.7022 | 0.0236 | 0.2233 | 123 |
| 0.0002 | 0.0362 | 0.0010 | 0.7041 | 0.0236 | 0.2226 | 124 |
| 0.0002 | 0.0362 | 0.0011 | 0.7065 | 0.0236 | 0.2228 | 125 |
| 0.0002 | 0.0362 | 0.0011 | 0.7081 | 0.0236 | 0.2227 | 126 |
| 0.0002 | 0.0362 | 0.0011 | 0.7101 | 0.0236 | 0.2224 | 127 |
| 0.0002 | 0.0362 | 0.0011 | 0.7130 | 0.0236 | 0.2224 | 128 |
| 0.0002 | 0.0362 | 0.0011 | 0.7157 | 0.0236 | 0.2229 | 129 |
| 0.0002 | 0.0362 | 0.0011 | 0.7183 | 0.0236 | 0.2225 | 130 |
| 0.0001 | 0.0362 | 0.0011 | 0.7212 | 0.0236 | 0.2230 | 131 |
| 0.0001 | 0.0362 | 0.0012 | 0.7250 | 0.0236 | 0.2230 | 132 |
| 0.0001 | 0.0362 | 0.0012 | 0.7268 | 0.0236 | 0.2229 | 133 |
| 0.0001 | 0.0362 | 0.0011 | 0.7303 | 0.0236 | 0.2229 | 134 |
| 0.0001 | 0.0362 | 0.0012 | 0.7350 | 0.0236 | 0.2236 | 135 |
| 0.0001 | 0.0362 | 0.0012 | 0.7386 | 0.0236 | 0.2240 | 136 |
| 0.0001 | 0.0362 | 0.0012 | 0.7422 | 0.0236 | 0.2231 | 137 |
| 0.0001 | 0.0362 | 0.0013 | 0.7445 | 0.0236 | 0.2236 | 138 |
| 0.0001 | 0.0362 | 0.0012 | 0.7500 | 0.0236 | 0.2243 | 139 |
| 0.0112 | 0.0361 | 0.0117 | 0.7391 | 0.0235 | 0.2370 | 140 |
| 0.0036 | 0.0362 | 0.0041 | 0.7201 | 0.0236 | 0.2277 | 141 |
| 0.0011 | 0.0362 | 0.0032 | 0.7210 | 0.0236 | 0.2243 | 142 |
| 0.0006 | 0.0362 | 0.0030 | 0.7199 | 0.0236 | 0.2269 | 143 |
| 0.0003 | 0.0362 | 0.0019 | 0.7231 | 0.0236 | 0.2254 | 144 |
| 0.0002 | 0.0362 | 0.0021 | 0.7179 | 0.0236 | 0.2228 | 145 |
| 0.0002 | 0.0362 | 0.0020 | 0.7236 | 0.0236 | 0.2234 | 146 |
| 0.0002 | 0.0362 | 0.0021 | 0.7271 | 0.0236 | 0.2254 | 147 |
| 0.0002 | 0.0362 | 0.0022 | 0.7250 | 0.0236 | 0.2233 | 148 |
| 0.0001 | 0.0362 | 0.0021 | 0.7255 | 0.0236 | 0.2230 | 149 |
| 0.0001 | 0.0362 | 0.0020 | 0.7263 | 0.0236 | 0.2228 | 150 |
| 0.0001 | 0.0362 | 0.0021 | 0.7278 | 0.0236 | 0.2226 | 151 |
| 0.0001 | 0.0362 | 0.0021 | 0.7289 | 0.0237 | 0.2220 | 152 |
| 0.0001 | 0.0362 | 0.0020 | 0.7301 | 0.0237 | 0.2214 | 153 |
| 0.0001 | 0.0362 | 0.0020 | 0.7307 | 0.0237 | 0.2216 | 154 |
| 0.0001 | 0.0362 | 0.0020 | 0.7329 | 0.0237 | 0.2217 | 155 |
| 0.0001 | 0.0362 | 0.0020 | 0.7339 | 0.0237 | 0.2211 | 156 |
| 0.0001 | 0.0362 | 0.0020 | 0.7354 | 0.0237 | 0.2210 | 157 |
| 0.0001 | 0.0362 | 0.0020 | 0.7374 | 0.0237 | 0.2207 | 158 |
| 0.0001 | 0.0362 | 0.0020 | 0.7394 | 0.0237 | 0.2211 | 159 |
| 0.0001 | 0.0362 | 0.0020 | 0.7406 | 0.0237 | 0.2212 | 160 |
| 0.0001 | 0.0362 | 0.0021 | 0.7422 | 0.0237 | 0.2213 | 161 |
| 0.0001 | 0.0362 | 0.0020 | 0.7446 | 0.0237 | 0.2207 | 162 |
| 0.0001 | 0.0362 | 0.0020 | 0.7471 | 0.0237 | 0.2209 | 163 |
| 0.0000 | 0.0362 | 0.0020 | 0.7502 | 0.0237 | 0.2206 | 164 |
| 0.0000 | 0.0362 | 0.0021 | 0.7518 | 0.0237 | 0.2210 | 165 |
| 0.0000 | 0.0362 | 0.0021 | 0.7533 | 0.0237 | 0.2207 | 166 |
| 0.0000 | 0.0362 | 0.0021 | 0.7566 | 0.0237 | 0.2204 | 167 |
| 0.0000 | 0.0362 | 0.0021 | 0.7590 | 0.0237 | 0.2203 | 168 |
| 0.0000 | 0.0362 | 0.0022 | 0.7617 | 0.0237 | 0.2208 | 169 |
| 0.0000 | 0.0362 | 0.0022 | 0.7644 | 0.0237 | 0.2207 | 170 |
| 0.0000 | 0.0362 | 0.0022 | 0.7685 | 0.0237 | 0.2206 | 171 |
| 0.0000 | 0.0362 | 0.0022 | 0.7710 | 0.0237 | 0.2203 | 172 |
| 0.0000 | 0.0362 | 0.0022 | 0.7757 | 0.0236 | 0.2212 | 173 |
| 0.0000 | 0.0362 | 0.0023 | 0.7803 | 0.0236 | 0.2214 | 174 |
| 0.0000 | 0.0362 | 0.0024 | 0.7834 | 0.0236 | 0.2210 | 175 |
| 0.0000 | 0.0362 | 0.0024 | 0.7863 | 0.0237 | 0.2209 | 176 |
| 0.0000 | 0.0362 | 0.0024 | 0.7909 | 0.0236 | 0.2214 | 177 |
| 0.0000 | 0.0362 | 0.0024 | 0.7940 | 0.0237 | 0.2208 | 178 |
| 0.0000 | 0.0362 | 0.0025 | 0.7999 | 0.0236 | 0.2214 | 179 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
bigmorning/whisper_syl_cv12_pad_lob100_low__0175
|
bigmorning
| 2023-08-26T00:04:34Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-26T00:04:27Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100_low__0175
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100_low__0175
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0362
- Train Wermet: 0.0023
- Validation Loss: 0.7803
- Validation Accuracy: 0.0236
- Validation Wermet: 0.2214
- Epoch: 174
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.2930 | 0.0113 | 2.0658 | 3.9415 | 0.0117 | 0.9401 | 0 |
| 4.6215 | 0.0121 | 0.8917 | 3.7803 | 0.0120 | 0.9294 | 1 |
| 4.4086 | 0.0128 | 0.8403 | 3.6070 | 0.0124 | 0.9223 | 2 |
| 4.1842 | 0.0135 | 0.8337 | 3.4291 | 0.0128 | 0.8867 | 3 |
| 3.9981 | 0.0141 | 0.8182 | 3.3251 | 0.0131 | 0.8750 | 4 |
| 3.8531 | 0.0145 | 0.8058 | 3.2385 | 0.0133 | 0.8699 | 5 |
| 3.7345 | 0.0149 | 0.7925 | 3.1751 | 0.0134 | 0.8665 | 6 |
| 3.6307 | 0.0152 | 0.7851 | 3.1031 | 0.0136 | 0.8507 | 7 |
| 3.5437 | 0.0155 | 0.7717 | 3.0752 | 0.0138 | 0.8286 | 8 |
| 3.4649 | 0.0157 | 0.7651 | 3.0334 | 0.0139 | 0.8417 | 9 |
| 3.3926 | 0.0159 | 0.7531 | 3.0022 | 0.0139 | 0.8413 | 10 |
| 3.3262 | 0.0162 | 0.7462 | 2.9669 | 0.0140 | 0.8264 | 11 |
| 3.2625 | 0.0164 | 0.7367 | 2.9342 | 0.0141 | 0.8520 | 12 |
| 3.1979 | 0.0166 | 0.7231 | 2.9046 | 0.0144 | 0.8196 | 13 |
| 3.1319 | 0.0169 | 0.7133 | 2.8607 | 0.0145 | 0.8026 | 14 |
| 3.0616 | 0.0172 | 0.7007 | 2.8165 | 0.0146 | 0.7788 | 15 |
| 2.9792 | 0.0176 | 0.6816 | 2.7552 | 0.0149 | 0.7643 | 16 |
| 2.8905 | 0.0180 | 0.6641 | 2.6788 | 0.0151 | 0.7473 | 17 |
| 2.7749 | 0.0186 | 0.6424 | 2.5824 | 0.0155 | 0.7241 | 18 |
| 2.6263 | 0.0193 | 0.6159 | 2.4206 | 0.0161 | 0.7047 | 19 |
| 2.4352 | 0.0203 | 0.5829 | 2.2230 | 0.0168 | 0.6500 | 20 |
| 2.1941 | 0.0216 | 0.5411 | 2.0349 | 0.0175 | 0.5980 | 21 |
| 1.9184 | 0.0231 | 0.4922 | 1.7850 | 0.0184 | 0.5659 | 22 |
| 1.6174 | 0.0249 | 0.4371 | 1.5664 | 0.0192 | 0.5081 | 23 |
| 1.3542 | 0.0265 | 0.3851 | 1.3992 | 0.0199 | 0.4690 | 24 |
| 1.1499 | 0.0278 | 0.3408 | 1.2512 | 0.0205 | 0.4299 | 25 |
| 0.9878 | 0.0288 | 0.3029 | 1.1479 | 0.0209 | 0.4013 | 26 |
| 0.8600 | 0.0297 | 0.2735 | 1.0527 | 0.0213 | 0.3755 | 27 |
| 0.7516 | 0.0305 | 0.2441 | 0.9803 | 0.0216 | 0.3570 | 28 |
| 0.6626 | 0.0311 | 0.2197 | 0.9314 | 0.0219 | 0.3416 | 29 |
| 0.5863 | 0.0316 | 0.1993 | 0.8730 | 0.0221 | 0.3238 | 30 |
| 0.5187 | 0.0321 | 0.1775 | 0.8357 | 0.0223 | 0.3136 | 31 |
| 0.4608 | 0.0326 | 0.1610 | 0.8059 | 0.0224 | 0.3033 | 32 |
| 0.4087 | 0.0330 | 0.1467 | 0.7746 | 0.0226 | 0.2949 | 33 |
| 0.3642 | 0.0334 | 0.1298 | 0.7476 | 0.0227 | 0.2847 | 34 |
| 0.3221 | 0.0337 | 0.1168 | 0.7330 | 0.0228 | 0.2802 | 35 |
| 0.2837 | 0.0340 | 0.1030 | 0.7093 | 0.0229 | 0.2728 | 36 |
| 0.2509 | 0.0343 | 0.0882 | 0.6941 | 0.0229 | 0.2687 | 37 |
| 0.2209 | 0.0346 | 0.0747 | 0.6892 | 0.0230 | 0.2656 | 38 |
| 0.1934 | 0.0349 | 0.0670 | 0.6824 | 0.0230 | 0.2630 | 39 |
| 0.1688 | 0.0351 | 0.0542 | 0.6773 | 0.0230 | 0.2625 | 40 |
| 0.1469 | 0.0353 | 0.0429 | 0.6700 | 0.0231 | 0.2633 | 41 |
| 0.1268 | 0.0355 | 0.0365 | 0.6680 | 0.0231 | 0.2578 | 42 |
| 0.1086 | 0.0357 | 0.0284 | 0.6643 | 0.0231 | 0.2540 | 43 |
| 0.0920 | 0.0358 | 0.0221 | 0.6645 | 0.0231 | 0.2530 | 44 |
| 0.0783 | 0.0359 | 0.0169 | 0.6621 | 0.0232 | 0.2540 | 45 |
| 0.0667 | 0.0360 | 0.0121 | 0.6714 | 0.0232 | 0.2532 | 46 |
| 0.0563 | 0.0361 | 0.0094 | 0.6604 | 0.0232 | 0.2503 | 47 |
| 0.0477 | 0.0361 | 0.0072 | 0.6620 | 0.0232 | 0.2489 | 48 |
| 0.0397 | 0.0362 | 0.0055 | 0.6611 | 0.0232 | 0.2502 | 49 |
| 0.0330 | 0.0362 | 0.0045 | 0.6686 | 0.0232 | 0.2496 | 50 |
| 0.0283 | 0.0362 | 0.0033 | 0.6705 | 0.0232 | 0.2503 | 51 |
| 0.0242 | 0.0362 | 0.0034 | 0.6686 | 0.0232 | 0.2486 | 52 |
| 0.0212 | 0.0362 | 0.0031 | 0.6686 | 0.0232 | 0.2493 | 53 |
| 0.0197 | 0.0362 | 0.0028 | 0.6688 | 0.0232 | 0.2530 | 54 |
| 0.0226 | 0.0362 | 0.0041 | 0.6598 | 0.0233 | 0.2451 | 55 |
| 0.0158 | 0.0362 | 0.0024 | 0.6605 | 0.0233 | 0.2428 | 56 |
| 0.0115 | 0.0362 | 0.0018 | 0.6648 | 0.0233 | 0.2435 | 57 |
| 0.0094 | 0.0362 | 0.0017 | 0.6672 | 0.0233 | 0.2446 | 58 |
| 0.0081 | 0.0362 | 0.0018 | 0.6731 | 0.0233 | 0.2439 | 59 |
| 0.0071 | 0.0362 | 0.0017 | 0.6762 | 0.0233 | 0.2429 | 60 |
| 0.0062 | 0.0362 | 0.0017 | 0.6794 | 0.0233 | 0.2426 | 61 |
| 0.0055 | 0.0362 | 0.0017 | 0.6825 | 0.0233 | 0.2429 | 62 |
| 0.0048 | 0.0362 | 0.0017 | 0.6895 | 0.0233 | 0.2450 | 63 |
| 0.0042 | 0.0362 | 0.0019 | 0.6914 | 0.0233 | 0.2424 | 64 |
| 0.0037 | 0.0362 | 0.0018 | 0.6938 | 0.0233 | 0.2423 | 65 |
| 0.0224 | 0.0361 | 0.0080 | 0.6695 | 0.0234 | 0.2409 | 66 |
| 0.0127 | 0.0362 | 0.0037 | 0.6685 | 0.0234 | 0.2383 | 67 |
| 0.0065 | 0.0362 | 0.0017 | 0.6714 | 0.0234 | 0.2359 | 68 |
| 0.0045 | 0.0362 | 0.0017 | 0.6645 | 0.0234 | 0.2347 | 69 |
| 0.0034 | 0.0362 | 0.0016 | 0.6671 | 0.0234 | 0.2353 | 70 |
| 0.0028 | 0.0362 | 0.0014 | 0.6715 | 0.0234 | 0.2354 | 71 |
| 0.0024 | 0.0362 | 0.0014 | 0.6745 | 0.0234 | 0.2358 | 72 |
| 0.0022 | 0.0362 | 0.0014 | 0.6778 | 0.0234 | 0.2356 | 73 |
| 0.0020 | 0.0362 | 0.0013 | 0.6797 | 0.0234 | 0.2357 | 74 |
| 0.0018 | 0.0362 | 0.0014 | 0.6833 | 0.0234 | 0.2355 | 75 |
| 0.0016 | 0.0362 | 0.0013 | 0.6885 | 0.0234 | 0.2363 | 76 |
| 0.0068 | 0.0362 | 0.0035 | 0.7270 | 0.0232 | 0.2500 | 77 |
| 0.0131 | 0.0362 | 0.0076 | 0.6965 | 0.0234 | 0.2397 | 78 |
| 0.0054 | 0.0362 | 0.0088 | 0.6764 | 0.0235 | 0.2339 | 79 |
| 0.0029 | 0.0362 | 0.0041 | 0.6806 | 0.0235 | 0.2334 | 80 |
| 0.0019 | 0.0362 | 0.0039 | 0.6723 | 0.0235 | 0.2316 | 81 |
| 0.0016 | 0.0362 | 0.0028 | 0.6765 | 0.0235 | 0.2315 | 82 |
| 0.0014 | 0.0362 | 0.0025 | 0.6786 | 0.0235 | 0.2306 | 83 |
| 0.0013 | 0.0362 | 0.0023 | 0.6805 | 0.0235 | 0.2304 | 84 |
| 0.0012 | 0.0362 | 0.0022 | 0.6830 | 0.0235 | 0.2301 | 85 |
| 0.0011 | 0.0362 | 0.0022 | 0.6881 | 0.0235 | 0.2308 | 86 |
| 0.0010 | 0.0362 | 0.0022 | 0.6875 | 0.0235 | 0.2303 | 87 |
| 0.0009 | 0.0362 | 0.0022 | 0.6909 | 0.0235 | 0.2307 | 88 |
| 0.0008 | 0.0362 | 0.0020 | 0.6934 | 0.0235 | 0.2299 | 89 |
| 0.0007 | 0.0362 | 0.0022 | 0.6968 | 0.0235 | 0.2307 | 90 |
| 0.0007 | 0.0362 | 0.0020 | 0.7005 | 0.0235 | 0.2300 | 91 |
| 0.0006 | 0.0362 | 0.0021 | 0.7040 | 0.0235 | 0.2307 | 92 |
| 0.0006 | 0.0362 | 0.0020 | 0.7086 | 0.0235 | 0.2309 | 93 |
| 0.0005 | 0.0362 | 0.0020 | 0.7116 | 0.0235 | 0.2318 | 94 |
| 0.0005 | 0.0362 | 0.0018 | 0.7151 | 0.0235 | 0.2305 | 95 |
| 0.0111 | 0.0362 | 0.2014 | 0.7185 | 0.0234 | 0.2861 | 96 |
| 0.0069 | 0.0362 | 0.0051 | 0.7036 | 0.0235 | 0.2337 | 97 |
| 0.0028 | 0.0362 | 0.0015 | 0.6946 | 0.0235 | 0.2324 | 98 |
| 0.0023 | 0.0362 | 0.0018 | 0.6937 | 0.0235 | 0.2295 | 99 |
| 0.0017 | 0.0362 | 0.0013 | 0.6886 | 0.0235 | 0.2283 | 100 |
| 0.0010 | 0.0362 | 0.0008 | 0.6891 | 0.0236 | 0.2274 | 101 |
| 0.0009 | 0.0362 | 0.0013 | 0.6901 | 0.0236 | 0.2275 | 102 |
| 0.0008 | 0.0362 | 0.0015 | 0.6922 | 0.0236 | 0.2273 | 103 |
| 0.0006 | 0.0362 | 0.0015 | 0.6923 | 0.0236 | 0.2274 | 104 |
| 0.0008 | 0.0362 | 0.0014 | 0.6996 | 0.0235 | 0.2288 | 105 |
| 0.0006 | 0.0362 | 0.0014 | 0.6967 | 0.0236 | 0.2266 | 106 |
| 0.0005 | 0.0362 | 0.0013 | 0.6988 | 0.0236 | 0.2260 | 107 |
| 0.0004 | 0.0362 | 0.0027 | 0.7008 | 0.0236 | 0.2278 | 108 |
| 0.0004 | 0.0362 | 0.0017 | 0.7034 | 0.0236 | 0.2261 | 109 |
| 0.0004 | 0.0362 | 0.0018 | 0.7036 | 0.0236 | 0.2265 | 110 |
| 0.0004 | 0.0362 | 0.0015 | 0.7090 | 0.0236 | 0.2255 | 111 |
| 0.0112 | 0.0362 | 0.0059 | 0.7014 | 0.0235 | 0.2271 | 112 |
| 0.0034 | 0.0362 | 0.0023 | 0.6869 | 0.0236 | 0.2252 | 113 |
| 0.0015 | 0.0362 | 0.0015 | 0.6863 | 0.0236 | 0.2234 | 114 |
| 0.0008 | 0.0362 | 0.0010 | 0.6893 | 0.0236 | 0.2227 | 115 |
| 0.0006 | 0.0362 | 0.0011 | 0.6911 | 0.0236 | 0.2232 | 116 |
| 0.0005 | 0.0362 | 0.0009 | 0.6923 | 0.0236 | 0.2227 | 117 |
| 0.0004 | 0.0362 | 0.0009 | 0.6938 | 0.0236 | 0.2225 | 118 |
| 0.0004 | 0.0362 | 0.0010 | 0.6958 | 0.0236 | 0.2226 | 119 |
| 0.0003 | 0.0362 | 0.0010 | 0.6966 | 0.0236 | 0.2226 | 120 |
| 0.0003 | 0.0362 | 0.0010 | 0.6983 | 0.0236 | 0.2230 | 121 |
| 0.0003 | 0.0362 | 0.0010 | 0.7005 | 0.0236 | 0.2229 | 122 |
| 0.0003 | 0.0362 | 0.0010 | 0.7022 | 0.0236 | 0.2233 | 123 |
| 0.0002 | 0.0362 | 0.0010 | 0.7041 | 0.0236 | 0.2226 | 124 |
| 0.0002 | 0.0362 | 0.0011 | 0.7065 | 0.0236 | 0.2228 | 125 |
| 0.0002 | 0.0362 | 0.0011 | 0.7081 | 0.0236 | 0.2227 | 126 |
| 0.0002 | 0.0362 | 0.0011 | 0.7101 | 0.0236 | 0.2224 | 127 |
| 0.0002 | 0.0362 | 0.0011 | 0.7130 | 0.0236 | 0.2224 | 128 |
| 0.0002 | 0.0362 | 0.0011 | 0.7157 | 0.0236 | 0.2229 | 129 |
| 0.0002 | 0.0362 | 0.0011 | 0.7183 | 0.0236 | 0.2225 | 130 |
| 0.0001 | 0.0362 | 0.0011 | 0.7212 | 0.0236 | 0.2230 | 131 |
| 0.0001 | 0.0362 | 0.0012 | 0.7250 | 0.0236 | 0.2230 | 132 |
| 0.0001 | 0.0362 | 0.0012 | 0.7268 | 0.0236 | 0.2229 | 133 |
| 0.0001 | 0.0362 | 0.0011 | 0.7303 | 0.0236 | 0.2229 | 134 |
| 0.0001 | 0.0362 | 0.0012 | 0.7350 | 0.0236 | 0.2236 | 135 |
| 0.0001 | 0.0362 | 0.0012 | 0.7386 | 0.0236 | 0.2240 | 136 |
| 0.0001 | 0.0362 | 0.0012 | 0.7422 | 0.0236 | 0.2231 | 137 |
| 0.0001 | 0.0362 | 0.0013 | 0.7445 | 0.0236 | 0.2236 | 138 |
| 0.0001 | 0.0362 | 0.0012 | 0.7500 | 0.0236 | 0.2243 | 139 |
| 0.0112 | 0.0361 | 0.0117 | 0.7391 | 0.0235 | 0.2370 | 140 |
| 0.0036 | 0.0362 | 0.0041 | 0.7201 | 0.0236 | 0.2277 | 141 |
| 0.0011 | 0.0362 | 0.0032 | 0.7210 | 0.0236 | 0.2243 | 142 |
| 0.0006 | 0.0362 | 0.0030 | 0.7199 | 0.0236 | 0.2269 | 143 |
| 0.0003 | 0.0362 | 0.0019 | 0.7231 | 0.0236 | 0.2254 | 144 |
| 0.0002 | 0.0362 | 0.0021 | 0.7179 | 0.0236 | 0.2228 | 145 |
| 0.0002 | 0.0362 | 0.0020 | 0.7236 | 0.0236 | 0.2234 | 146 |
| 0.0002 | 0.0362 | 0.0021 | 0.7271 | 0.0236 | 0.2254 | 147 |
| 0.0002 | 0.0362 | 0.0022 | 0.7250 | 0.0236 | 0.2233 | 148 |
| 0.0001 | 0.0362 | 0.0021 | 0.7255 | 0.0236 | 0.2230 | 149 |
| 0.0001 | 0.0362 | 0.0020 | 0.7263 | 0.0236 | 0.2228 | 150 |
| 0.0001 | 0.0362 | 0.0021 | 0.7278 | 0.0236 | 0.2226 | 151 |
| 0.0001 | 0.0362 | 0.0021 | 0.7289 | 0.0237 | 0.2220 | 152 |
| 0.0001 | 0.0362 | 0.0020 | 0.7301 | 0.0237 | 0.2214 | 153 |
| 0.0001 | 0.0362 | 0.0020 | 0.7307 | 0.0237 | 0.2216 | 154 |
| 0.0001 | 0.0362 | 0.0020 | 0.7329 | 0.0237 | 0.2217 | 155 |
| 0.0001 | 0.0362 | 0.0020 | 0.7339 | 0.0237 | 0.2211 | 156 |
| 0.0001 | 0.0362 | 0.0020 | 0.7354 | 0.0237 | 0.2210 | 157 |
| 0.0001 | 0.0362 | 0.0020 | 0.7374 | 0.0237 | 0.2207 | 158 |
| 0.0001 | 0.0362 | 0.0020 | 0.7394 | 0.0237 | 0.2211 | 159 |
| 0.0001 | 0.0362 | 0.0020 | 0.7406 | 0.0237 | 0.2212 | 160 |
| 0.0001 | 0.0362 | 0.0021 | 0.7422 | 0.0237 | 0.2213 | 161 |
| 0.0001 | 0.0362 | 0.0020 | 0.7446 | 0.0237 | 0.2207 | 162 |
| 0.0001 | 0.0362 | 0.0020 | 0.7471 | 0.0237 | 0.2209 | 163 |
| 0.0000 | 0.0362 | 0.0020 | 0.7502 | 0.0237 | 0.2206 | 164 |
| 0.0000 | 0.0362 | 0.0021 | 0.7518 | 0.0237 | 0.2210 | 165 |
| 0.0000 | 0.0362 | 0.0021 | 0.7533 | 0.0237 | 0.2207 | 166 |
| 0.0000 | 0.0362 | 0.0021 | 0.7566 | 0.0237 | 0.2204 | 167 |
| 0.0000 | 0.0362 | 0.0021 | 0.7590 | 0.0237 | 0.2203 | 168 |
| 0.0000 | 0.0362 | 0.0022 | 0.7617 | 0.0237 | 0.2208 | 169 |
| 0.0000 | 0.0362 | 0.0022 | 0.7644 | 0.0237 | 0.2207 | 170 |
| 0.0000 | 0.0362 | 0.0022 | 0.7685 | 0.0237 | 0.2206 | 171 |
| 0.0000 | 0.0362 | 0.0022 | 0.7710 | 0.0237 | 0.2203 | 172 |
| 0.0000 | 0.0362 | 0.0022 | 0.7757 | 0.0236 | 0.2212 | 173 |
| 0.0000 | 0.0362 | 0.0023 | 0.7803 | 0.0236 | 0.2214 | 174 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
jondurbin/airoboros-l2-70b-2.1-peft
|
jondurbin
| 2023-08-26T00:00:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-25T21:57:37Z |
Peft model for https://hf.co/jondurbin/airoboros-l2-70b-2.1
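A minimal loading sketch with the `peft` library is given below. Everything in it is an assumption rather than documented usage: the card does not state which base checkpoint the adapter expects, so the `base_id` shown is only a placeholder.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

adapter_id = "jondurbin/airoboros-l2-70b-2.1-peft"
base_id = "meta-llama/Llama-2-70b-hf"  # assumed/placeholder base; not stated in the card

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights
```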
|
debadas/ronaldo_longer
|
debadas
| 2023-08-25T23:50:26Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-25T23:42:11Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks man
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - debadas/ronaldo_longer
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks man using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
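A minimal sketch of applying these LoRA weights at inference time with diffusers is shown below; it is an illustration of the usual DreamBooth-LoRA loading flow (fp16 on CUDA and the step count are assumptions), not part of the original card.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA attention weights from this repository to the UNet.
pipe.unet.load_attn_procs("debadas/ronaldo_longer")

image = pipe("a photo of sks man", num_inference_steps=30).images[0]
image.save("sks_man.png")
```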
|
bigmorning/whisper_syl_cv12_pad_lob100_low__0165
|
bigmorning
| 2023-08-25T23:38:24Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-25T23:38:17Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_syl_cv12_pad_lob100_low__0165
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_syl_cv12_pad_lob100_low__0165
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Train Accuracy: 0.0362
- Train Wermet: 0.0020
- Validation Loss: 0.7502
- Validation Accuracy: 0.0237
- Validation Wermet: 0.2206
- Epoch: 164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.2930 | 0.0113 | 2.0658 | 3.9415 | 0.0117 | 0.9401 | 0 |
| 4.6215 | 0.0121 | 0.8917 | 3.7803 | 0.0120 | 0.9294 | 1 |
| 4.4086 | 0.0128 | 0.8403 | 3.6070 | 0.0124 | 0.9223 | 2 |
| 4.1842 | 0.0135 | 0.8337 | 3.4291 | 0.0128 | 0.8867 | 3 |
| 3.9981 | 0.0141 | 0.8182 | 3.3251 | 0.0131 | 0.8750 | 4 |
| 3.8531 | 0.0145 | 0.8058 | 3.2385 | 0.0133 | 0.8699 | 5 |
| 3.7345 | 0.0149 | 0.7925 | 3.1751 | 0.0134 | 0.8665 | 6 |
| 3.6307 | 0.0152 | 0.7851 | 3.1031 | 0.0136 | 0.8507 | 7 |
| 3.5437 | 0.0155 | 0.7717 | 3.0752 | 0.0138 | 0.8286 | 8 |
| 3.4649 | 0.0157 | 0.7651 | 3.0334 | 0.0139 | 0.8417 | 9 |
| 3.3926 | 0.0159 | 0.7531 | 3.0022 | 0.0139 | 0.8413 | 10 |
| 3.3262 | 0.0162 | 0.7462 | 2.9669 | 0.0140 | 0.8264 | 11 |
| 3.2625 | 0.0164 | 0.7367 | 2.9342 | 0.0141 | 0.8520 | 12 |
| 3.1979 | 0.0166 | 0.7231 | 2.9046 | 0.0144 | 0.8196 | 13 |
| 3.1319 | 0.0169 | 0.7133 | 2.8607 | 0.0145 | 0.8026 | 14 |
| 3.0616 | 0.0172 | 0.7007 | 2.8165 | 0.0146 | 0.7788 | 15 |
| 2.9792 | 0.0176 | 0.6816 | 2.7552 | 0.0149 | 0.7643 | 16 |
| 2.8905 | 0.0180 | 0.6641 | 2.6788 | 0.0151 | 0.7473 | 17 |
| 2.7749 | 0.0186 | 0.6424 | 2.5824 | 0.0155 | 0.7241 | 18 |
| 2.6263 | 0.0193 | 0.6159 | 2.4206 | 0.0161 | 0.7047 | 19 |
| 2.4352 | 0.0203 | 0.5829 | 2.2230 | 0.0168 | 0.6500 | 20 |
| 2.1941 | 0.0216 | 0.5411 | 2.0349 | 0.0175 | 0.5980 | 21 |
| 1.9184 | 0.0231 | 0.4922 | 1.7850 | 0.0184 | 0.5659 | 22 |
| 1.6174 | 0.0249 | 0.4371 | 1.5664 | 0.0192 | 0.5081 | 23 |
| 1.3542 | 0.0265 | 0.3851 | 1.3992 | 0.0199 | 0.4690 | 24 |
| 1.1499 | 0.0278 | 0.3408 | 1.2512 | 0.0205 | 0.4299 | 25 |
| 0.9878 | 0.0288 | 0.3029 | 1.1479 | 0.0209 | 0.4013 | 26 |
| 0.8600 | 0.0297 | 0.2735 | 1.0527 | 0.0213 | 0.3755 | 27 |
| 0.7516 | 0.0305 | 0.2441 | 0.9803 | 0.0216 | 0.3570 | 28 |
| 0.6626 | 0.0311 | 0.2197 | 0.9314 | 0.0219 | 0.3416 | 29 |
| 0.5863 | 0.0316 | 0.1993 | 0.8730 | 0.0221 | 0.3238 | 30 |
| 0.5187 | 0.0321 | 0.1775 | 0.8357 | 0.0223 | 0.3136 | 31 |
| 0.4608 | 0.0326 | 0.1610 | 0.8059 | 0.0224 | 0.3033 | 32 |
| 0.4087 | 0.0330 | 0.1467 | 0.7746 | 0.0226 | 0.2949 | 33 |
| 0.3642 | 0.0334 | 0.1298 | 0.7476 | 0.0227 | 0.2847 | 34 |
| 0.3221 | 0.0337 | 0.1168 | 0.7330 | 0.0228 | 0.2802 | 35 |
| 0.2837 | 0.0340 | 0.1030 | 0.7093 | 0.0229 | 0.2728 | 36 |
| 0.2509 | 0.0343 | 0.0882 | 0.6941 | 0.0229 | 0.2687 | 37 |
| 0.2209 | 0.0346 | 0.0747 | 0.6892 | 0.0230 | 0.2656 | 38 |
| 0.1934 | 0.0349 | 0.0670 | 0.6824 | 0.0230 | 0.2630 | 39 |
| 0.1688 | 0.0351 | 0.0542 | 0.6773 | 0.0230 | 0.2625 | 40 |
| 0.1469 | 0.0353 | 0.0429 | 0.6700 | 0.0231 | 0.2633 | 41 |
| 0.1268 | 0.0355 | 0.0365 | 0.6680 | 0.0231 | 0.2578 | 42 |
| 0.1086 | 0.0357 | 0.0284 | 0.6643 | 0.0231 | 0.2540 | 43 |
| 0.0920 | 0.0358 | 0.0221 | 0.6645 | 0.0231 | 0.2530 | 44 |
| 0.0783 | 0.0359 | 0.0169 | 0.6621 | 0.0232 | 0.2540 | 45 |
| 0.0667 | 0.0360 | 0.0121 | 0.6714 | 0.0232 | 0.2532 | 46 |
| 0.0563 | 0.0361 | 0.0094 | 0.6604 | 0.0232 | 0.2503 | 47 |
| 0.0477 | 0.0361 | 0.0072 | 0.6620 | 0.0232 | 0.2489 | 48 |
| 0.0397 | 0.0362 | 0.0055 | 0.6611 | 0.0232 | 0.2502 | 49 |
| 0.0330 | 0.0362 | 0.0045 | 0.6686 | 0.0232 | 0.2496 | 50 |
| 0.0283 | 0.0362 | 0.0033 | 0.6705 | 0.0232 | 0.2503 | 51 |
| 0.0242 | 0.0362 | 0.0034 | 0.6686 | 0.0232 | 0.2486 | 52 |
| 0.0212 | 0.0362 | 0.0031 | 0.6686 | 0.0232 | 0.2493 | 53 |
| 0.0197 | 0.0362 | 0.0028 | 0.6688 | 0.0232 | 0.2530 | 54 |
| 0.0226 | 0.0362 | 0.0041 | 0.6598 | 0.0233 | 0.2451 | 55 |
| 0.0158 | 0.0362 | 0.0024 | 0.6605 | 0.0233 | 0.2428 | 56 |
| 0.0115 | 0.0362 | 0.0018 | 0.6648 | 0.0233 | 0.2435 | 57 |
| 0.0094 | 0.0362 | 0.0017 | 0.6672 | 0.0233 | 0.2446 | 58 |
| 0.0081 | 0.0362 | 0.0018 | 0.6731 | 0.0233 | 0.2439 | 59 |
| 0.0071 | 0.0362 | 0.0017 | 0.6762 | 0.0233 | 0.2429 | 60 |
| 0.0062 | 0.0362 | 0.0017 | 0.6794 | 0.0233 | 0.2426 | 61 |
| 0.0055 | 0.0362 | 0.0017 | 0.6825 | 0.0233 | 0.2429 | 62 |
| 0.0048 | 0.0362 | 0.0017 | 0.6895 | 0.0233 | 0.2450 | 63 |
| 0.0042 | 0.0362 | 0.0019 | 0.6914 | 0.0233 | 0.2424 | 64 |
| 0.0037 | 0.0362 | 0.0018 | 0.6938 | 0.0233 | 0.2423 | 65 |
| 0.0224 | 0.0361 | 0.0080 | 0.6695 | 0.0234 | 0.2409 | 66 |
| 0.0127 | 0.0362 | 0.0037 | 0.6685 | 0.0234 | 0.2383 | 67 |
| 0.0065 | 0.0362 | 0.0017 | 0.6714 | 0.0234 | 0.2359 | 68 |
| 0.0045 | 0.0362 | 0.0017 | 0.6645 | 0.0234 | 0.2347 | 69 |
| 0.0034 | 0.0362 | 0.0016 | 0.6671 | 0.0234 | 0.2353 | 70 |
| 0.0028 | 0.0362 | 0.0014 | 0.6715 | 0.0234 | 0.2354 | 71 |
| 0.0024 | 0.0362 | 0.0014 | 0.6745 | 0.0234 | 0.2358 | 72 |
| 0.0022 | 0.0362 | 0.0014 | 0.6778 | 0.0234 | 0.2356 | 73 |
| 0.0020 | 0.0362 | 0.0013 | 0.6797 | 0.0234 | 0.2357 | 74 |
| 0.0018 | 0.0362 | 0.0014 | 0.6833 | 0.0234 | 0.2355 | 75 |
| 0.0016 | 0.0362 | 0.0013 | 0.6885 | 0.0234 | 0.2363 | 76 |
| 0.0068 | 0.0362 | 0.0035 | 0.7270 | 0.0232 | 0.2500 | 77 |
| 0.0131 | 0.0362 | 0.0076 | 0.6965 | 0.0234 | 0.2397 | 78 |
| 0.0054 | 0.0362 | 0.0088 | 0.6764 | 0.0235 | 0.2339 | 79 |
| 0.0029 | 0.0362 | 0.0041 | 0.6806 | 0.0235 | 0.2334 | 80 |
| 0.0019 | 0.0362 | 0.0039 | 0.6723 | 0.0235 | 0.2316 | 81 |
| 0.0016 | 0.0362 | 0.0028 | 0.6765 | 0.0235 | 0.2315 | 82 |
| 0.0014 | 0.0362 | 0.0025 | 0.6786 | 0.0235 | 0.2306 | 83 |
| 0.0013 | 0.0362 | 0.0023 | 0.6805 | 0.0235 | 0.2304 | 84 |
| 0.0012 | 0.0362 | 0.0022 | 0.6830 | 0.0235 | 0.2301 | 85 |
| 0.0011 | 0.0362 | 0.0022 | 0.6881 | 0.0235 | 0.2308 | 86 |
| 0.0010 | 0.0362 | 0.0022 | 0.6875 | 0.0235 | 0.2303 | 87 |
| 0.0009 | 0.0362 | 0.0022 | 0.6909 | 0.0235 | 0.2307 | 88 |
| 0.0008 | 0.0362 | 0.0020 | 0.6934 | 0.0235 | 0.2299 | 89 |
| 0.0007 | 0.0362 | 0.0022 | 0.6968 | 0.0235 | 0.2307 | 90 |
| 0.0007 | 0.0362 | 0.0020 | 0.7005 | 0.0235 | 0.2300 | 91 |
| 0.0006 | 0.0362 | 0.0021 | 0.7040 | 0.0235 | 0.2307 | 92 |
| 0.0006 | 0.0362 | 0.0020 | 0.7086 | 0.0235 | 0.2309 | 93 |
| 0.0005 | 0.0362 | 0.0020 | 0.7116 | 0.0235 | 0.2318 | 94 |
| 0.0005 | 0.0362 | 0.0018 | 0.7151 | 0.0235 | 0.2305 | 95 |
| 0.0111 | 0.0362 | 0.2014 | 0.7185 | 0.0234 | 0.2861 | 96 |
| 0.0069 | 0.0362 | 0.0051 | 0.7036 | 0.0235 | 0.2337 | 97 |
| 0.0028 | 0.0362 | 0.0015 | 0.6946 | 0.0235 | 0.2324 | 98 |
| 0.0023 | 0.0362 | 0.0018 | 0.6937 | 0.0235 | 0.2295 | 99 |
| 0.0017 | 0.0362 | 0.0013 | 0.6886 | 0.0235 | 0.2283 | 100 |
| 0.0010 | 0.0362 | 0.0008 | 0.6891 | 0.0236 | 0.2274 | 101 |
| 0.0009 | 0.0362 | 0.0013 | 0.6901 | 0.0236 | 0.2275 | 102 |
| 0.0008 | 0.0362 | 0.0015 | 0.6922 | 0.0236 | 0.2273 | 103 |
| 0.0006 | 0.0362 | 0.0015 | 0.6923 | 0.0236 | 0.2274 | 104 |
| 0.0008 | 0.0362 | 0.0014 | 0.6996 | 0.0235 | 0.2288 | 105 |
| 0.0006 | 0.0362 | 0.0014 | 0.6967 | 0.0236 | 0.2266 | 106 |
| 0.0005 | 0.0362 | 0.0013 | 0.6988 | 0.0236 | 0.2260 | 107 |
| 0.0004 | 0.0362 | 0.0027 | 0.7008 | 0.0236 | 0.2278 | 108 |
| 0.0004 | 0.0362 | 0.0017 | 0.7034 | 0.0236 | 0.2261 | 109 |
| 0.0004 | 0.0362 | 0.0018 | 0.7036 | 0.0236 | 0.2265 | 110 |
| 0.0004 | 0.0362 | 0.0015 | 0.7090 | 0.0236 | 0.2255 | 111 |
| 0.0112 | 0.0362 | 0.0059 | 0.7014 | 0.0235 | 0.2271 | 112 |
| 0.0034 | 0.0362 | 0.0023 | 0.6869 | 0.0236 | 0.2252 | 113 |
| 0.0015 | 0.0362 | 0.0015 | 0.6863 | 0.0236 | 0.2234 | 114 |
| 0.0008 | 0.0362 | 0.0010 | 0.6893 | 0.0236 | 0.2227 | 115 |
| 0.0006 | 0.0362 | 0.0011 | 0.6911 | 0.0236 | 0.2232 | 116 |
| 0.0005 | 0.0362 | 0.0009 | 0.6923 | 0.0236 | 0.2227 | 117 |
| 0.0004 | 0.0362 | 0.0009 | 0.6938 | 0.0236 | 0.2225 | 118 |
| 0.0004 | 0.0362 | 0.0010 | 0.6958 | 0.0236 | 0.2226 | 119 |
| 0.0003 | 0.0362 | 0.0010 | 0.6966 | 0.0236 | 0.2226 | 120 |
| 0.0003 | 0.0362 | 0.0010 | 0.6983 | 0.0236 | 0.2230 | 121 |
| 0.0003 | 0.0362 | 0.0010 | 0.7005 | 0.0236 | 0.2229 | 122 |
| 0.0003 | 0.0362 | 0.0010 | 0.7022 | 0.0236 | 0.2233 | 123 |
| 0.0002 | 0.0362 | 0.0010 | 0.7041 | 0.0236 | 0.2226 | 124 |
| 0.0002 | 0.0362 | 0.0011 | 0.7065 | 0.0236 | 0.2228 | 125 |
| 0.0002 | 0.0362 | 0.0011 | 0.7081 | 0.0236 | 0.2227 | 126 |
| 0.0002 | 0.0362 | 0.0011 | 0.7101 | 0.0236 | 0.2224 | 127 |
| 0.0002 | 0.0362 | 0.0011 | 0.7130 | 0.0236 | 0.2224 | 128 |
| 0.0002 | 0.0362 | 0.0011 | 0.7157 | 0.0236 | 0.2229 | 129 |
| 0.0002 | 0.0362 | 0.0011 | 0.7183 | 0.0236 | 0.2225 | 130 |
| 0.0001 | 0.0362 | 0.0011 | 0.7212 | 0.0236 | 0.2230 | 131 |
| 0.0001 | 0.0362 | 0.0012 | 0.7250 | 0.0236 | 0.2230 | 132 |
| 0.0001 | 0.0362 | 0.0012 | 0.7268 | 0.0236 | 0.2229 | 133 |
| 0.0001 | 0.0362 | 0.0011 | 0.7303 | 0.0236 | 0.2229 | 134 |
| 0.0001 | 0.0362 | 0.0012 | 0.7350 | 0.0236 | 0.2236 | 135 |
| 0.0001 | 0.0362 | 0.0012 | 0.7386 | 0.0236 | 0.2240 | 136 |
| 0.0001 | 0.0362 | 0.0012 | 0.7422 | 0.0236 | 0.2231 | 137 |
| 0.0001 | 0.0362 | 0.0013 | 0.7445 | 0.0236 | 0.2236 | 138 |
| 0.0001 | 0.0362 | 0.0012 | 0.7500 | 0.0236 | 0.2243 | 139 |
| 0.0112 | 0.0361 | 0.0117 | 0.7391 | 0.0235 | 0.2370 | 140 |
| 0.0036 | 0.0362 | 0.0041 | 0.7201 | 0.0236 | 0.2277 | 141 |
| 0.0011 | 0.0362 | 0.0032 | 0.7210 | 0.0236 | 0.2243 | 142 |
| 0.0006 | 0.0362 | 0.0030 | 0.7199 | 0.0236 | 0.2269 | 143 |
| 0.0003 | 0.0362 | 0.0019 | 0.7231 | 0.0236 | 0.2254 | 144 |
| 0.0002 | 0.0362 | 0.0021 | 0.7179 | 0.0236 | 0.2228 | 145 |
| 0.0002 | 0.0362 | 0.0020 | 0.7236 | 0.0236 | 0.2234 | 146 |
| 0.0002 | 0.0362 | 0.0021 | 0.7271 | 0.0236 | 0.2254 | 147 |
| 0.0002 | 0.0362 | 0.0022 | 0.7250 | 0.0236 | 0.2233 | 148 |
| 0.0001 | 0.0362 | 0.0021 | 0.7255 | 0.0236 | 0.2230 | 149 |
| 0.0001 | 0.0362 | 0.0020 | 0.7263 | 0.0236 | 0.2228 | 150 |
| 0.0001 | 0.0362 | 0.0021 | 0.7278 | 0.0236 | 0.2226 | 151 |
| 0.0001 | 0.0362 | 0.0021 | 0.7289 | 0.0237 | 0.2220 | 152 |
| 0.0001 | 0.0362 | 0.0020 | 0.7301 | 0.0237 | 0.2214 | 153 |
| 0.0001 | 0.0362 | 0.0020 | 0.7307 | 0.0237 | 0.2216 | 154 |
| 0.0001 | 0.0362 | 0.0020 | 0.7329 | 0.0237 | 0.2217 | 155 |
| 0.0001 | 0.0362 | 0.0020 | 0.7339 | 0.0237 | 0.2211 | 156 |
| 0.0001 | 0.0362 | 0.0020 | 0.7354 | 0.0237 | 0.2210 | 157 |
| 0.0001 | 0.0362 | 0.0020 | 0.7374 | 0.0237 | 0.2207 | 158 |
| 0.0001 | 0.0362 | 0.0020 | 0.7394 | 0.0237 | 0.2211 | 159 |
| 0.0001 | 0.0362 | 0.0020 | 0.7406 | 0.0237 | 0.2212 | 160 |
| 0.0001 | 0.0362 | 0.0021 | 0.7422 | 0.0237 | 0.2213 | 161 |
| 0.0001 | 0.0362 | 0.0020 | 0.7446 | 0.0237 | 0.2207 | 162 |
| 0.0001 | 0.0362 | 0.0020 | 0.7471 | 0.0237 | 0.2209 | 163 |
| 0.0000 | 0.0362 | 0.0020 | 0.7502 | 0.0237 | 0.2206 | 164 |
### Framework versions
- Transformers 4.33.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
dkqjrm/20230826064921
|
dkqjrm
| 2023-08-25T23:31:50Z | 62 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-25T21:49:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826064921'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826064921
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2753
- Accuracy: 0.71
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows this list):
- learning_rate: 0.02
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
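Expressed as transformers `TrainingArguments`, these settings would look roughly like the sketch below (an illustration of the listed values, not the author's actual script; dataset and metric wiring are omitted).
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="20230826064921",
    learning_rate=2e-2,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=11,
    num_train_epochs=80.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```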
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.3969 | 0.6 |
| No log | 2.0 | 50 | 0.4709 | 0.5 |
| No log | 3.0 | 75 | 0.3341 | 0.42 |
| No log | 4.0 | 100 | 0.3011 | 0.54 |
| No log | 5.0 | 125 | 0.3119 | 0.36 |
| No log | 6.0 | 150 | 0.3297 | 0.37 |
| No log | 7.0 | 175 | 0.2928 | 0.53 |
| No log | 8.0 | 200 | 0.3079 | 0.63 |
| No log | 9.0 | 225 | 0.2875 | 0.61 |
| No log | 10.0 | 250 | 0.2906 | 0.54 |
| No log | 11.0 | 275 | 0.2904 | 0.62 |
| No log | 12.0 | 300 | 0.2946 | 0.52 |
| No log | 13.0 | 325 | 0.2942 | 0.51 |
| No log | 14.0 | 350 | 0.2935 | 0.56 |
| No log | 15.0 | 375 | 0.2913 | 0.58 |
| No log | 16.0 | 400 | 0.2886 | 0.6 |
| No log | 17.0 | 425 | 0.2900 | 0.6 |
| No log | 18.0 | 450 | 0.2874 | 0.59 |
| No log | 19.0 | 475 | 0.2910 | 0.6 |
| 0.6674 | 20.0 | 500 | 0.2931 | 0.47 |
| 0.6674 | 21.0 | 525 | 0.2909 | 0.51 |
| 0.6674 | 22.0 | 550 | 0.2855 | 0.62 |
| 0.6674 | 23.0 | 575 | 0.2881 | 0.61 |
| 0.6674 | 24.0 | 600 | 0.2878 | 0.6 |
| 0.6674 | 25.0 | 625 | 0.2874 | 0.57 |
| 0.6674 | 26.0 | 650 | 0.2857 | 0.54 |
| 0.6674 | 27.0 | 675 | 0.2871 | 0.6 |
| 0.6674 | 28.0 | 700 | 0.2864 | 0.59 |
| 0.6674 | 29.0 | 725 | 0.2862 | 0.62 |
| 0.6674 | 30.0 | 750 | 0.2866 | 0.58 |
| 0.6674 | 31.0 | 775 | 0.2837 | 0.63 |
| 0.6674 | 32.0 | 800 | 0.2859 | 0.58 |
| 0.6674 | 33.0 | 825 | 0.2841 | 0.59 |
| 0.6674 | 34.0 | 850 | 0.2878 | 0.62 |
| 0.6674 | 35.0 | 875 | 0.2889 | 0.61 |
| 0.6674 | 36.0 | 900 | 0.2830 | 0.59 |
| 0.6674 | 37.0 | 925 | 0.2824 | 0.59 |
| 0.6674 | 38.0 | 950 | 0.2801 | 0.63 |
| 0.6674 | 39.0 | 975 | 0.2931 | 0.65 |
| 0.5477 | 40.0 | 1000 | 0.2788 | 0.64 |
| 0.5477 | 41.0 | 1025 | 0.2892 | 0.63 |
| 0.5477 | 42.0 | 1050 | 0.2937 | 0.58 |
| 0.5477 | 43.0 | 1075 | 0.2886 | 0.66 |
| 0.5477 | 44.0 | 1100 | 0.2842 | 0.62 |
| 0.5477 | 45.0 | 1125 | 0.2857 | 0.6 |
| 0.5477 | 46.0 | 1150 | 0.2834 | 0.62 |
| 0.5477 | 47.0 | 1175 | 0.2824 | 0.56 |
| 0.5477 | 48.0 | 1200 | 0.2866 | 0.65 |
| 0.5477 | 49.0 | 1225 | 0.2801 | 0.63 |
| 0.5477 | 50.0 | 1250 | 0.2851 | 0.62 |
| 0.5477 | 51.0 | 1275 | 0.2829 | 0.6 |
| 0.5477 | 52.0 | 1300 | 0.2900 | 0.59 |
| 0.5477 | 53.0 | 1325 | 0.2782 | 0.59 |
| 0.5477 | 54.0 | 1350 | 0.2793 | 0.59 |
| 0.5477 | 55.0 | 1375 | 0.2809 | 0.6 |
| 0.5477 | 56.0 | 1400 | 0.2815 | 0.64 |
| 0.5477 | 57.0 | 1425 | 0.2798 | 0.68 |
| 0.5477 | 58.0 | 1450 | 0.2831 | 0.67 |
| 0.5477 | 59.0 | 1475 | 0.2795 | 0.66 |
| 0.4601 | 60.0 | 1500 | 0.2747 | 0.68 |
| 0.4601 | 61.0 | 1525 | 0.2725 | 0.73 |
| 0.4601 | 62.0 | 1550 | 0.2840 | 0.66 |
| 0.4601 | 63.0 | 1575 | 0.2739 | 0.67 |
| 0.4601 | 64.0 | 1600 | 0.2796 | 0.69 |
| 0.4601 | 65.0 | 1625 | 0.2782 | 0.65 |
| 0.4601 | 66.0 | 1650 | 0.2757 | 0.7 |
| 0.4601 | 67.0 | 1675 | 0.2759 | 0.69 |
| 0.4601 | 68.0 | 1700 | 0.2779 | 0.67 |
| 0.4601 | 69.0 | 1725 | 0.2822 | 0.67 |
| 0.4601 | 70.0 | 1750 | 0.2813 | 0.65 |
| 0.4601 | 71.0 | 1775 | 0.2818 | 0.68 |
| 0.4601 | 72.0 | 1800 | 0.2865 | 0.69 |
| 0.4601 | 73.0 | 1825 | 0.2770 | 0.71 |
| 0.4601 | 74.0 | 1850 | 0.2822 | 0.69 |
| 0.4601 | 75.0 | 1875 | 0.2783 | 0.71 |
| 0.4601 | 76.0 | 1900 | 0.2764 | 0.71 |
| 0.4601 | 77.0 | 1925 | 0.2772 | 0.69 |
| 0.4601 | 78.0 | 1950 | 0.2759 | 0.7 |
| 0.4601 | 79.0 | 1975 | 0.2751 | 0.72 |
| 0.4329 | 80.0 | 2000 | 0.2753 | 0.71 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dkqjrm/20230826065621
|
dkqjrm
| 2023-08-25T23:18:21Z | 61 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:super_glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-08-25T21:56:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: '20230826065621'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20230826065621
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6391
- Accuracy: 0.67
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.02
- train_batch_size: 16
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 0.9872 | 0.34 |
| No log | 2.0 | 50 | 0.8547 | 0.59 |
| No log | 3.0 | 75 | 0.6062 | 0.64 |
| No log | 4.0 | 100 | 0.6097 | 0.61 |
| No log | 5.0 | 125 | 0.6064 | 0.62 |
| No log | 6.0 | 150 | 0.5974 | 0.63 |
| No log | 7.0 | 175 | 0.5723 | 0.66 |
| No log | 8.0 | 200 | 0.6179 | 0.63 |
| No log | 9.0 | 225 | 0.5842 | 0.62 |
| No log | 10.0 | 250 | 0.6117 | 0.68 |
| No log | 11.0 | 275 | 0.5444 | 0.64 |
| No log | 12.0 | 300 | 0.7898 | 0.68 |
| No log | 13.0 | 325 | 0.6851 | 0.68 |
| No log | 14.0 | 350 | 0.7716 | 0.69 |
| No log | 15.0 | 375 | 0.6750 | 0.71 |
| No log | 16.0 | 400 | 0.7645 | 0.7 |
| No log | 17.0 | 425 | 0.7338 | 0.7 |
| No log | 18.0 | 450 | 0.8156 | 0.66 |
| No log | 19.0 | 475 | 0.7524 | 0.68 |
| 0.7431 | 20.0 | 500 | 0.8516 | 0.65 |
| 0.7431 | 21.0 | 525 | 0.8224 | 0.65 |
| 0.7431 | 22.0 | 550 | 1.0607 | 0.67 |
| 0.7431 | 23.0 | 575 | 0.8977 | 0.66 |
| 0.7431 | 24.0 | 600 | 0.7860 | 0.66 |
| 0.7431 | 25.0 | 625 | 0.7285 | 0.66 |
| 0.7431 | 26.0 | 650 | 0.7097 | 0.64 |
| 0.7431 | 27.0 | 675 | 0.7292 | 0.64 |
| 0.7431 | 28.0 | 700 | 0.7131 | 0.65 |
| 0.7431 | 29.0 | 725 | 0.8039 | 0.65 |
| 0.7431 | 30.0 | 750 | 0.7988 | 0.65 |
| 0.7431 | 31.0 | 775 | 0.7809 | 0.64 |
| 0.7431 | 32.0 | 800 | 0.7544 | 0.64 |
| 0.7431 | 33.0 | 825 | 0.7492 | 0.62 |
| 0.7431 | 34.0 | 850 | 0.8206 | 0.64 |
| 0.7431 | 35.0 | 875 | 0.6409 | 0.66 |
| 0.7431 | 36.0 | 900 | 0.7144 | 0.63 |
| 0.7431 | 37.0 | 925 | 0.7414 | 0.63 |
| 0.7431 | 38.0 | 950 | 0.7423 | 0.65 |
| 0.7431 | 39.0 | 975 | 0.7766 | 0.65 |
| 0.3363 | 40.0 | 1000 | 0.7182 | 0.67 |
| 0.3363 | 41.0 | 1025 | 0.7375 | 0.67 |
| 0.3363 | 42.0 | 1050 | 0.7236 | 0.67 |
| 0.3363 | 43.0 | 1075 | 0.7218 | 0.66 |
| 0.3363 | 44.0 | 1100 | 0.7324 | 0.67 |
| 0.3363 | 45.0 | 1125 | 0.7291 | 0.67 |
| 0.3363 | 46.0 | 1150 | 0.6803 | 0.67 |
| 0.3363 | 47.0 | 1175 | 0.6637 | 0.67 |
| 0.3363 | 48.0 | 1200 | 0.7064 | 0.65 |
| 0.3363 | 49.0 | 1225 | 0.6534 | 0.65 |
| 0.3363 | 50.0 | 1250 | 0.7230 | 0.67 |
| 0.3363 | 51.0 | 1275 | 0.7338 | 0.65 |
| 0.3363 | 52.0 | 1300 | 0.6495 | 0.62 |
| 0.3363 | 53.0 | 1325 | 0.6540 | 0.63 |
| 0.3363 | 54.0 | 1350 | 0.6994 | 0.62 |
| 0.3363 | 55.0 | 1375 | 0.7040 | 0.63 |
| 0.3363 | 56.0 | 1400 | 0.6775 | 0.63 |
| 0.3363 | 57.0 | 1425 | 0.6425 | 0.65 |
| 0.3363 | 58.0 | 1450 | 0.6424 | 0.66 |
| 0.3363 | 59.0 | 1475 | 0.6782 | 0.66 |
| 0.2375 | 60.0 | 1500 | 0.6770 | 0.68 |
| 0.2375 | 61.0 | 1525 | 0.7029 | 0.68 |
| 0.2375 | 62.0 | 1550 | 0.6824 | 0.68 |
| 0.2375 | 63.0 | 1575 | 0.6847 | 0.68 |
| 0.2375 | 64.0 | 1600 | 0.6767 | 0.68 |
| 0.2375 | 65.0 | 1625 | 0.6362 | 0.67 |
| 0.2375 | 66.0 | 1650 | 0.6292 | 0.67 |
| 0.2375 | 67.0 | 1675 | 0.6470 | 0.67 |
| 0.2375 | 68.0 | 1700 | 0.6661 | 0.67 |
| 0.2375 | 69.0 | 1725 | 0.6305 | 0.67 |
| 0.2375 | 70.0 | 1750 | 0.6492 | 0.67 |
| 0.2375 | 71.0 | 1775 | 0.6525 | 0.67 |
| 0.2375 | 72.0 | 1800 | 0.6339 | 0.67 |
| 0.2375 | 73.0 | 1825 | 0.6621 | 0.67 |
| 0.2375 | 74.0 | 1850 | 0.6562 | 0.67 |
| 0.2375 | 75.0 | 1875 | 0.6397 | 0.67 |
| 0.2375 | 76.0 | 1900 | 0.6496 | 0.67 |
| 0.2375 | 77.0 | 1925 | 0.6402 | 0.67 |
| 0.2375 | 78.0 | 1950 | 0.6382 | 0.67 |
| 0.2375 | 79.0 | 1975 | 0.6407 | 0.67 |
| 0.2102 | 80.0 | 2000 | 0.6391 | 0.67 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|