Dataset columns (ranges and cardinalities as reported by the dataset viewer):

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-30 06:27:36 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 527 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-30 06:27:12 |
| card | string | length 11 to 1.01M |
Model: hkivancoral/hushem_40x_deit_small_rms_0001_fold2 · Author: hkivancoral · Last modified: 2023-12-25T21:02:04Z · Downloads: 3 · Likes: 0 · Library: transformers · Pipeline: image-classification · Created: 2023-12-25T20:46:19Z
Tags: transformers, pytorch, vit, image-classification, generated_from_trainer, dataset:imagefolder, base_model:facebook/deit-small-patch16-224, base_model:finetune:facebook/deit-small-patch16-224, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_rms_0001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_0001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5928
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
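As a sanity check, the step counts in the results table follow directly from these hyperparameters: 215 optimizer steps per epoch over 50 epochs gives 10,750 total steps, and with a batch size of 32 the training split holds at most 215 × 32 = 6,880 images (the final batch of each epoch may be partial).

```python
# Sanity-check the step counts in the training-results table
# against the hyperparameters listed above.
steps_per_epoch = 215      # steps logged at epoch 1.0
batch_size = 32            # train_batch_size
num_epochs = 50

total_steps = steps_per_epoch * num_epochs
print(total_steps)         # 10750, the step count in the final row

# Upper bound on the training-split size implied by the step count.
max_train_images = steps_per_epoch * batch_size
print(max_train_images)    # 6880
```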
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0578 | 1.0 | 215 | 1.1579 | 0.7333 |
| 0.1017 | 2.0 | 430 | 1.7859 | 0.7556 |
| 0.022 | 3.0 | 645 | 1.6749 | 0.8 |
| 0.0643 | 4.0 | 860 | 2.1460 | 0.6889 |
| 0.0005 | 5.0 | 1075 | 1.2973 | 0.7778 |
| 0.0002 | 6.0 | 1290 | 1.6108 | 0.7778 |
| 0.0002 | 7.0 | 1505 | 1.9441 | 0.7556 |
| 0.0 | 8.0 | 1720 | 2.1424 | 0.7778 |
| 0.0 | 9.0 | 1935 | 2.2105 | 0.8 |
| 0.0 | 10.0 | 2150 | 2.3105 | 0.8 |
| 0.0 | 11.0 | 2365 | 2.4406 | 0.8 |
| 0.0 | 12.0 | 2580 | 2.5849 | 0.8 |
| 0.0 | 13.0 | 2795 | 2.7379 | 0.8 |
| 0.0 | 14.0 | 3010 | 2.8751 | 0.8 |
| 0.0 | 15.0 | 3225 | 2.9942 | 0.8 |
| 0.0 | 16.0 | 3440 | 3.0983 | 0.8 |
| 0.0 | 17.0 | 3655 | 3.1877 | 0.8 |
| 0.0 | 18.0 | 3870 | 3.2698 | 0.8 |
| 0.0 | 19.0 | 4085 | 3.3376 | 0.8 |
| 0.0 | 20.0 | 4300 | 3.3925 | 0.8 |
| 0.0 | 21.0 | 4515 | 3.4335 | 0.8 |
| 0.0 | 22.0 | 4730 | 3.4638 | 0.8 |
| 0.0 | 23.0 | 4945 | 3.4866 | 0.8 |
| 0.0 | 24.0 | 5160 | 3.5041 | 0.8 |
| 0.0 | 25.0 | 5375 | 3.5181 | 0.8 |
| 0.0 | 26.0 | 5590 | 3.5294 | 0.8 |
| 0.0 | 27.0 | 5805 | 3.5388 | 0.8 |
| 0.0 | 28.0 | 6020 | 3.5464 | 0.8 |
| 0.0 | 29.0 | 6235 | 3.5531 | 0.8 |
| 0.0 | 30.0 | 6450 | 3.5587 | 0.8 |
| 0.0 | 31.0 | 6665 | 3.5636 | 0.8 |
| 0.0 | 32.0 | 6880 | 3.5677 | 0.8 |
| 0.0 | 33.0 | 7095 | 3.5714 | 0.8 |
| 0.0 | 34.0 | 7310 | 3.5745 | 0.8 |
| 0.0 | 35.0 | 7525 | 3.5772 | 0.8 |
| 0.0 | 36.0 | 7740 | 3.5795 | 0.8 |
| 0.0 | 37.0 | 7955 | 3.5816 | 0.8 |
| 0.0 | 38.0 | 8170 | 3.5833 | 0.8 |
| 0.0 | 39.0 | 8385 | 3.5849 | 0.8 |
| 0.0 | 40.0 | 8600 | 3.5863 | 0.8 |
| 0.0 | 41.0 | 8815 | 3.5875 | 0.8 |
| 0.0 | 42.0 | 9030 | 3.5885 | 0.8 |
| 0.0 | 43.0 | 9245 | 3.5895 | 0.8 |
| 0.0 | 44.0 | 9460 | 3.5903 | 0.8 |
| 0.0 | 45.0 | 9675 | 3.5910 | 0.8 |
| 0.0 | 46.0 | 9890 | 3.5915 | 0.8 |
| 0.0 | 47.0 | 10105 | 3.5920 | 0.8 |
| 0.0 | 48.0 | 10320 | 3.5924 | 0.8 |
| 0.0 | 49.0 | 10535 | 3.5927 | 0.8 |
| 0.0 | 50.0 | 10750 | 3.5928 | 0.8 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
Model: hkivancoral/hushem_40x_deit_small_rms_00001_fold1 · Author: hkivancoral · Last modified: 2023-12-25T20:46:15Z · Downloads: 3 · Likes: 0 · Library: transformers · Pipeline: image-classification · Created: 2023-12-25T16:10:36Z
Tags: transformers, pytorch, vit, image-classification, generated_from_trainer, dataset:imagefolder, base_model:facebook/deit-small-patch16-224, base_model:finetune:facebook/deit-small-patch16-224, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_rms_00001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_00001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9061
- Accuracy: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0132 | 1.0 | 215 | 0.4686 | 0.8444 |
| 0.0004 | 2.0 | 430 | 0.6106 | 0.8222 |
| 0.0016 | 3.0 | 645 | 0.7608 | 0.8 |
| 0.0 | 4.0 | 860 | 0.5588 | 0.8667 |
| 0.0 | 5.0 | 1075 | 0.5395 | 0.8667 |
| 0.0 | 6.0 | 1290 | 0.5368 | 0.8889 |
| 0.0 | 7.0 | 1505 | 0.5575 | 0.8889 |
| 0.0 | 8.0 | 1720 | 0.5516 | 0.9111 |
| 0.0 | 9.0 | 1935 | 0.5817 | 0.9111 |
| 0.0 | 10.0 | 2150 | 0.5914 | 0.8667 |
| 0.0 | 11.0 | 2365 | 0.6168 | 0.8667 |
| 0.0 | 12.0 | 2580 | 0.7197 | 0.8667 |
| 0.0 | 13.0 | 2795 | 0.7066 | 0.8667 |
| 0.0 | 14.0 | 3010 | 0.7905 | 0.8667 |
| 0.0 | 15.0 | 3225 | 0.8099 | 0.8667 |
| 0.0 | 16.0 | 3440 | 0.9402 | 0.8444 |
| 0.0 | 17.0 | 3655 | 0.9239 | 0.8667 |
| 0.0 | 18.0 | 3870 | 0.9014 | 0.8444 |
| 0.0 | 19.0 | 4085 | 0.9346 | 0.8667 |
| 0.0 | 20.0 | 4300 | 0.8551 | 0.8667 |
| 0.0 | 21.0 | 4515 | 0.8933 | 0.8667 |
| 0.0 | 22.0 | 4730 | 0.9137 | 0.8667 |
| 0.0 | 23.0 | 4945 | 0.9179 | 0.8667 |
| 0.0 | 24.0 | 5160 | 0.8411 | 0.8667 |
| 0.0 | 25.0 | 5375 | 0.9276 | 0.8667 |
| 0.0 | 26.0 | 5590 | 0.9081 | 0.8667 |
| 0.0 | 27.0 | 5805 | 0.9378 | 0.8667 |
| 0.0 | 28.0 | 6020 | 0.9015 | 0.8667 |
| 0.0 | 29.0 | 6235 | 0.8989 | 0.8667 |
| 0.0 | 30.0 | 6450 | 0.9223 | 0.8667 |
| 0.0 | 31.0 | 6665 | 0.9424 | 0.8667 |
| 0.0 | 32.0 | 6880 | 0.9057 | 0.8667 |
| 0.0 | 33.0 | 7095 | 0.8894 | 0.8667 |
| 0.0 | 34.0 | 7310 | 0.9300 | 0.8667 |
| 0.0 | 35.0 | 7525 | 0.9491 | 0.8667 |
| 0.0 | 36.0 | 7740 | 0.8980 | 0.8667 |
| 0.0 | 37.0 | 7955 | 0.8706 | 0.8667 |
| 0.0 | 38.0 | 8170 | 0.8943 | 0.8667 |
| 0.0 | 39.0 | 8385 | 0.9073 | 0.8667 |
| 0.0 | 40.0 | 8600 | 0.9075 | 0.8667 |
| 0.0 | 41.0 | 8815 | 0.9113 | 0.8667 |
| 0.0 | 42.0 | 9030 | 0.9138 | 0.8667 |
| 0.0 | 43.0 | 9245 | 0.9218 | 0.8667 |
| 0.0 | 44.0 | 9460 | 0.9089 | 0.8667 |
| 0.0 | 45.0 | 9675 | 0.9120 | 0.8667 |
| 0.0 | 46.0 | 9890 | 0.9019 | 0.8667 |
| 0.0 | 47.0 | 10105 | 0.9058 | 0.8667 |
| 0.0 | 48.0 | 10320 | 0.9063 | 0.8667 |
| 0.0 | 49.0 | 10535 | 0.9035 | 0.8667 |
| 0.0 | 50.0 | 10750 | 0.9061 | 0.8667 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
Model: hkivancoral/hushem_40x_deit_small_rms_0001_fold1 · Author: hkivancoral · Last modified: 2023-12-25T20:46:12Z · Downloads: 5 · Likes: 0 · Library: transformers · Pipeline: image-classification · Created: 2023-12-25T20:30:28Z
Tags: transformers, pytorch, vit, image-classification, generated_from_trainer, dataset:imagefolder, base_model:facebook/deit-small-patch16-224, base_model:finetune:facebook/deit-small-patch16-224, license:apache-2.0, model-index, autotrain_compatible, endpoints_compatible, region:us
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_rms_0001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7555555555555555
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_0001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2959
- Accuracy: 0.7556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1164 | 1.0 | 215 | 0.6806 | 0.8444 |
| 0.1473 | 2.0 | 430 | 1.6512 | 0.6889 |
| 0.0323 | 3.0 | 645 | 2.5012 | 0.5556 |
| 0.0026 | 4.0 | 860 | 1.3440 | 0.7333 |
| 0.0644 | 5.0 | 1075 | 1.9037 | 0.6667 |
| 0.0218 | 6.0 | 1290 | 1.1429 | 0.7778 |
| 0.0001 | 7.0 | 1505 | 1.3004 | 0.7778 |
| 0.0 | 8.0 | 1720 | 1.5783 | 0.8 |
| 0.0 | 9.0 | 1935 | 1.6151 | 0.8 |
| 0.0 | 10.0 | 2150 | 1.7171 | 0.7778 |
| 0.0 | 11.0 | 2365 | 1.8524 | 0.7778 |
| 0.0 | 12.0 | 2580 | 2.0103 | 0.7778 |
| 0.0 | 13.0 | 2795 | 2.1601 | 0.7778 |
| 0.0 | 14.0 | 3010 | 2.3193 | 0.7778 |
| 0.0 | 15.0 | 3225 | 2.4911 | 0.7556 |
| 0.0 | 16.0 | 3440 | 2.6216 | 0.7556 |
| 0.0 | 17.0 | 3655 | 2.7129 | 0.7556 |
| 0.0 | 18.0 | 3870 | 2.8038 | 0.7556 |
| 0.0 | 19.0 | 4085 | 2.8933 | 0.7556 |
| 0.0 | 20.0 | 4300 | 2.9673 | 0.7556 |
| 0.0 | 21.0 | 4515 | 3.0230 | 0.7556 |
| 0.0 | 22.0 | 4730 | 3.0642 | 0.7556 |
| 0.0 | 23.0 | 4945 | 3.0970 | 0.7556 |
| 0.0 | 24.0 | 5160 | 3.1238 | 0.7556 |
| 0.0 | 25.0 | 5375 | 3.1458 | 0.7556 |
| 0.0 | 26.0 | 5590 | 3.1648 | 0.7556 |
| 0.0 | 27.0 | 5805 | 3.1810 | 0.7556 |
| 0.0 | 28.0 | 6020 | 3.1953 | 0.7556 |
| 0.0 | 29.0 | 6235 | 3.2081 | 0.7556 |
| 0.0 | 30.0 | 6450 | 3.2189 | 0.7556 |
| 0.0 | 31.0 | 6665 | 3.2288 | 0.7556 |
| 0.0 | 32.0 | 6880 | 3.2374 | 0.7556 |
| 0.0 | 33.0 | 7095 | 3.2451 | 0.7556 |
| 0.0 | 34.0 | 7310 | 3.2520 | 0.7556 |
| 0.0 | 35.0 | 7525 | 3.2584 | 0.7556 |
| 0.0 | 36.0 | 7740 | 3.2638 | 0.7556 |
| 0.0 | 37.0 | 7955 | 3.2687 | 0.7556 |
| 0.0 | 38.0 | 8170 | 3.2732 | 0.7556 |
| 0.0 | 39.0 | 8385 | 3.2771 | 0.7556 |
| 0.0 | 40.0 | 8600 | 3.2806 | 0.7556 |
| 0.0 | 41.0 | 8815 | 3.2837 | 0.7556 |
| 0.0 | 42.0 | 9030 | 3.2863 | 0.7556 |
| 0.0 | 43.0 | 9245 | 3.2887 | 0.7556 |
| 0.0 | 44.0 | 9460 | 3.2906 | 0.7556 |
| 0.0 | 45.0 | 9675 | 3.2923 | 0.7556 |
| 0.0 | 46.0 | 9890 | 3.2937 | 0.7556 |
| 0.0 | 47.0 | 10105 | 3.2947 | 0.7556 |
| 0.0 | 48.0 | 10320 | 3.2954 | 0.7556 |
| 0.0 | 49.0 | 10535 | 3.2958 | 0.7556 |
| 0.0 | 50.0 | 10750 | 3.2959 | 0.7556 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
Model: mikedad/Reinforce-PixelCopter · Author: mikedad · Last modified: 2023-12-25T20:44:30Z · Downloads: 0 · Likes: 0 · Library: (none) · Pipeline: reinforcement-learning · Created: 2023-12-25T15:08:07Z
Tags: Pixelcopter-PLE-v0, reinforce, reinforcement-learning, custom-implementation, deep-rl-class, model-index, region:us
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 62.90 +/- 49.70
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, see Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
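The card does not include training code. As a rough, generic sketch of the Reinforce policy-gradient update the tags refer to, here is a minimal NumPy version on a two-armed bandit; the bandit, reward values, and hyperparameters are illustrative stand-ins for Pixelcopter, not the author's implementation:

```python
import numpy as np

# Toy REINFORCE on a 2-armed bandit, showing only the policy-gradient
# update: theta += lr * G * grad log pi(a | theta).
rng = np.random.default_rng(0)
logits = np.zeros(2)          # policy parameters
mean_rewards = [0.2, 0.8]     # arm 1 pays more on average
lr = 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(2000):
    probs = softmax(logits)
    a = rng.choice(2, p=probs)
    g = rng.normal(mean_rewards[a], 0.1)   # sampled return G
    grad_log_pi = -probs                   # grad of log softmax: e_a - pi
    grad_log_pi[a] += 1.0
    logits += lr * g * grad_log_pi

print(softmax(logits))  # probability mass shifts toward the better arm
```

In the real agent the softmax over two logits is replaced by a neural-network policy over Pixelcopter observations, and the return is accumulated over a full episode.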
Model: LoneStriker/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-8.0bpw-h8-exl2 · Author: LoneStriker · Last modified: 2023-12-25T20:28:59Z · Downloads: 4 · Likes: 0 · Library: transformers · Pipeline: text-generation · Created: 2023-12-25T20:26:51Z
Tags: transformers, llama, text-generation, en, dataset:cerebras/SlimPajama-627B, dataset:bigcode/starcoderdata, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With proper optimization, this can be achieved in "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be dropped into many open-source projects built on Llama. With only 1.1B parameters, it is also compact enough for applications with tight compute and memory budgets.
#### This Collection
This collection contains all checkpoints after the 1T fix. Branch name indicates the step and number of tokens seen.
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------------------------------------------|-----------------|-----------|------|------------|-------|-------|-------|------|-----|
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80 | 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11 |
| TinyLlama-1.1B-intermediate-step-240k-503b | 503B | 49.56 | 31.40 | 55.80 | 26.54 | 48.32 | 56.91 | 69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
| TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 |
| **TinyLlama-1.1B-intermediate-step-1195k-token-2.5T** | **2.5T** | **58.96** | **34.40** | **58.72** | **31.91** | **56.78** | **63.21** | **73.07** | **53.86** |
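The `avg` column is the unweighted mean of the seven benchmark scores. For the 2.5T checkpoint:

```python
# Mean of the seven benchmark scores for the 2.5T row above.
scores = [58.96, 34.40, 58.72, 31.91, 56.78, 63.21, 73.07]
avg = round(sum(scores) / len(scores), 2)
print(avg)  # 53.86, matching the avg column
```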
Model: LoneStriker/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-5.0bpw-h6-exl2 · Author: LoneStriker · Last modified: 2023-12-25T20:28:27Z · Downloads: 5 · Likes: 0 · Library: transformers · Pipeline: text-generation · Created: 2023-12-25T20:23:46Z
Tags: transformers, llama, text-generation, en, dataset:cerebras/SlimPajama-627B, dataset:bigcode/starcoderdata, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With proper optimization, this can be achieved in "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be dropped into many open-source projects built on Llama. With only 1.1B parameters, it is also compact enough for applications with tight compute and memory budgets.
#### This Collection
This collection contains all checkpoints after the 1T fix. Branch name indicates the step and number of tokens seen.
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------------------------------------------|-----------------|-----------|------|------------|-------|-------|-------|------|-----|
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80 | 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11 |
| TinyLlama-1.1B-intermediate-step-240k-503b | 503B | 49.56 | 31.40 | 55.80 | 26.54 | 48.32 | 56.91 | 69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
| TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 |
| **TinyLlama-1.1B-intermediate-step-1195k-token-2.5T** | **2.5T** | **58.96** | **34.40** | **58.72** | **31.91** | **56.78** | **63.21** | **73.07** | **53.86** |
Model: LoneStriker/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-4.0bpw-h6-exl2 · Author: LoneStriker · Last modified: 2023-12-25T20:28:26Z · Downloads: 5 · Likes: 0 · Library: transformers · Pipeline: text-generation · Created: 2023-12-25T20:22:15Z
Tags: transformers, llama, text-generation, en, dataset:cerebras/SlimPajama-627B, dataset:bigcode/starcoderdata, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With proper optimization, this can be achieved in "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be dropped into many open-source projects built on Llama. With only 1.1B parameters, it is also compact enough for applications with tight compute and memory budgets.
#### This Collection
This collection contains all checkpoints after the 1T fix. Branch name indicates the step and number of tokens seen.
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------------------------------------------|-----------------|-----------|------|------------|-------|-------|-------|------|-----|
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80 | 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11 |
| TinyLlama-1.1B-intermediate-step-240k-503b | 503B | 49.56 | 31.40 | 55.80 | 26.54 | 48.32 | 56.91 | 69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
| TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 |
| **TinyLlama-1.1B-intermediate-step-1195k-token-2.5T** | **2.5T** | **58.96** | **34.40** | **58.72** | **31.91** | **56.78** | **63.21** | **73.07** | **53.86** |
Model: LoneStriker/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-3.0bpw-h6-exl2 · Author: LoneStriker · Last modified: 2023-12-25T20:28:26Z · Downloads: 6 · Likes: 0 · Library: transformers · Pipeline: text-generation · Created: 2023-12-25T20:20:47Z
Tags: transformers, llama, text-generation, en, dataset:cerebras/SlimPajama-627B, dataset:bigcode/starcoderdata, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With proper optimization, this can be achieved in "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
<div align="center">
<img src="./TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be dropped into many open-source projects built on Llama. With only 1.1B parameters, it is also compact enough for applications with tight compute and memory budgets.
#### This Collection
This collection contains all checkpoints after the 1T fix. Branch name indicates the step and number of tokens seen.
#### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------------------------------------------|-----------------|-----------|------|------------|-------|-------|-------|------|-----|
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-50K-104b | 103B | 43.50 | 29.80 | 53.28 | 24.32 | 44.91 | 59.66 | 67.30 | 46.11 |
| TinyLlama-1.1B-intermediate-step-240k-503b | 503B | 49.56 | 31.40 | 55.80 | 26.54 | 48.32 | 56.91 | 69.42 | 48.28 |
| TinyLlama-1.1B-intermediate-step-480k-1007B | 1007B | 52.54 | 33.40 | 55.96 | 27.82 | 52.36 | 59.54 | 69.91 | 50.22 |
| TinyLlama-1.1B-intermediate-step-715k-1.5T | 1.5T | 53.68 | 35.20 | 58.33 | 29.18 | 51.89 | 59.08 | 71.65 | 51.29 |
| TinyLlama-1.1B-intermediate-step-955k-2T | 2T | 54.63 | 33.40 | 56.83 | 28.07 | 54.67 | 63.21 | 70.67 | 51.64 |
| **TinyLlama-1.1B-intermediate-step-1195k-token-2.5T** | **2.5T** | **58.96** | **34.40** | **58.72** | **31.91** | **56.78** | **63.21** | **73.07** | **53.86** |
Model: c-wang/rl_course_vizdoom_health_gathering_supreme · Author: c-wang · Last modified: 2023-12-25T20:27:38Z · Downloads: 0 · Likes: 0 · Library: sample-factory · Pipeline: reinforcement-learning · Created: 2023-12-25T20:27:27Z
Tags: sample-factory, tensorboard, deep-reinforcement-learning, reinforcement-learning, model-index, region:us
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.98 +/- 3.92
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r c-wang/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the standard Sample-Factory ViZDoom `enjoy` entry point for this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the standard Sample-Factory ViZDoom `train` entry point for this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may need to set `--train_for_env_steps` to a suitably high number, since the experiment resumes from the step count at which it previously concluded.
Model: kk497055/ppo-lunar-lander-v2 · Author: kk497055 · Last modified: 2023-12-25T20:26:21Z · Downloads: 0 · Likes: 0 · Library: stable-baselines3 · Pipeline: reinforcement-learning · Created: 2023-12-25T18:46:10Z
Tags: stable-baselines3, LunarLander-v2, deep-reinforcement-learning, reinforcement-learning, model-index, region:us
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.64 +/- 20.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption and may differ):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub (the .zip filename is assumed).
checkpoint = load_from_hub("kk497055/ppo-lunar-lander-v2", "ppo-lunar-lander-v2.zip")
model = PPO.load(checkpoint)
```
Model: andrewatef/RewriterV0.10 · Author: andrewatef · Last modified: 2023-12-25T20:15:27Z · Downloads: 5 · Likes: 0 · Library: peft · Pipeline: (none) · Created: 2023-12-25T19:38:16Z
Tags: peft, pytorch, safetensors, llama, arxiv:1910.09700, base_model:unsloth/llama-2-7b, base_model:adapter:unsloth/llama-2-7b, region:us
---
library_name: peft
base_model: unsloth/llama-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
Model: Mihaiii/Pallas-0.4 · Author: Mihaiii · Last modified: 2023-12-25T19:48:41Z · Downloads: 21 · Likes: 1 · Library: transformers · Pipeline: text-generation · Created: 2023-12-11T13:39:52Z
Tags: transformers, safetensors, llama, text-generation, base_model:migtissera/Tess-34B-v1.4, base_model:finetune:migtissera/Tess-34B-v1.4, license:other, autotrain_compatible, text-generation-inference, region:us
---
base_model: migtissera/Tess-34B-v1.4
inference: false
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
metrics:
- accuracy
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
An instruct-based fine-tune of [migtissera/Tess-34B-v1.4](https://huggingface.co/migtissera/Tess-34B-v1.4).
It works well with long system prompts.
It is not a general-purpose model: it is intended for reasoning and text comprehension rather than, for example, storytelling.
This model is trained on a private dataset. The high GSM8K score is **NOT** because of the MetaMath dataset.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
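The layout above can be assembled into a single prompt string before tokenization. A minimal sketch (the `build_prompt` helper is illustrative, not part of the model's tooling; only the SYSTEM/USER/ASSISTANT layout comes from the card):

```python
def build_prompt(system: str, user: str) -> str:
    """Format a single-turn prompt in the SYSTEM/USER/ASSISTANT layout."""
    # The assistant turn is left open so the model completes it.
    return f"SYSTEM: {system}\nUSER: {user}\nASSISTANT:"

prompt = build_prompt(
    "You are a careful assistant that answers concisely.",
    "Summarize the main claim of the passage below.",
)
print(prompt)
```

The resulting string can then be passed to any standard `transformers` text-generation pipeline for this model.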
|
ntc-ai/SDXL-LoRA-slider.futuristic-logo-design
|
ntc-ai
| 2023-12-25T19:47:37Z | 1,394 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2023-12-25T19:47:31Z |
---
language:
- en
thumbnail: "images/evaluate/futuristic logo design.../futuristic logo design_17_3.0.png"
widget:
- text: futuristic logo design
output:
url: images/futuristic logo design_17_3.0.png
- text: futuristic logo design
output:
url: images/futuristic logo design_19_3.0.png
- text: futuristic logo design
output:
url: images/futuristic logo design_20_3.0.png
- text: futuristic logo design
output:
url: images/futuristic logo design_21_3.0.png
- text: futuristic logo design
output:
url: images/futuristic logo design_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "futuristic logo design"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - futuristic logo design (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/futuristic logo design_17_-3.0.png" width=256 height=256 /> | <img src="images/futuristic logo design_17_0.0.png" width=256 height=256 /> | <img src="images/futuristic logo design_17_3.0.png" width=256 height=256 /> |
| <img src="images/futuristic logo design_19_-3.0.png" width=256 height=256 /> | <img src="images/futuristic logo design_19_0.0.png" width=256 height=256 /> | <img src="images/futuristic logo design_19_3.0.png" width=256 height=256 /> |
| <img src="images/futuristic logo design_20_-3.0.png" width=256 height=256 /> | <img src="images/futuristic logo design_20_0.0.png" width=256 height=256 /> | <img src="images/futuristic logo design_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
futuristic logo design
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.futuristic-logo-design', weight_name='futuristic logo design.safetensors', adapter_name="futuristic logo design")
# Activate the LoRA
pipe.set_adapters(["futuristic logo design"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, futuristic logo design"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 620 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
hkivancoral/hushem_40x_deit_small_rms_001_fold4
|
hkivancoral
| 2023-12-25T19:41:21Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T19:25:33Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_rms_001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8571428571428571
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0611
- Accuracy: 0.8571
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2113 | 1.0 | 219 | 1.1711 | 0.4286 |
| 1.0563 | 2.0 | 438 | 1.0188 | 0.5476 |
| 1.1492 | 3.0 | 657 | 1.0982 | 0.5 |
| 0.8355 | 4.0 | 876 | 0.9249 | 0.4524 |
| 0.9277 | 5.0 | 1095 | 0.9715 | 0.4762 |
| 0.9059 | 6.0 | 1314 | 0.8917 | 0.5476 |
| 0.697 | 7.0 | 1533 | 1.5323 | 0.5 |
| 0.7359 | 8.0 | 1752 | 0.7730 | 0.6190 |
| 0.6186 | 9.0 | 1971 | 0.7734 | 0.6190 |
| 0.5814 | 10.0 | 2190 | 0.7874 | 0.7143 |
| 0.5808 | 11.0 | 2409 | 0.5974 | 0.7619 |
| 0.5808 | 12.0 | 2628 | 0.7519 | 0.7381 |
| 0.5465 | 13.0 | 2847 | 0.4863 | 0.8095 |
| 0.4671 | 14.0 | 3066 | 0.6575 | 0.6905 |
| 0.5228 | 15.0 | 3285 | 0.6495 | 0.7143 |
| 0.4327 | 16.0 | 3504 | 0.7075 | 0.7619 |
| 0.3283 | 17.0 | 3723 | 0.6356 | 0.7381 |
| 0.3853 | 18.0 | 3942 | 0.5432 | 0.7857 |
| 0.3803 | 19.0 | 4161 | 0.9396 | 0.7857 |
| 0.3493 | 20.0 | 4380 | 0.8015 | 0.7143 |
| 0.3953 | 21.0 | 4599 | 0.7074 | 0.7619 |
| 0.3223 | 22.0 | 4818 | 1.0523 | 0.6667 |
| 0.2414 | 23.0 | 5037 | 1.0911 | 0.6667 |
| 0.2219 | 24.0 | 5256 | 1.1394 | 0.6905 |
| 0.2892 | 25.0 | 5475 | 0.7116 | 0.7619 |
| 0.2739 | 26.0 | 5694 | 1.1234 | 0.7143 |
| 0.2207 | 27.0 | 5913 | 0.8565 | 0.7857 |
| 0.1354 | 28.0 | 6132 | 1.1975 | 0.7381 |
| 0.2042 | 29.0 | 6351 | 0.8634 | 0.7619 |
| 0.152 | 30.0 | 6570 | 0.8119 | 0.7857 |
| 0.1453 | 31.0 | 6789 | 0.8364 | 0.7381 |
| 0.1714 | 32.0 | 7008 | 1.2193 | 0.8095 |
| 0.1106 | 33.0 | 7227 | 1.0792 | 0.7619 |
| 0.1157 | 34.0 | 7446 | 0.9831 | 0.7619 |
| 0.096 | 35.0 | 7665 | 1.1093 | 0.8095 |
| 0.0452 | 36.0 | 7884 | 0.9133 | 0.8095 |
| 0.0552 | 37.0 | 8103 | 1.3044 | 0.8095 |
| 0.0539 | 38.0 | 8322 | 0.9892 | 0.8095 |
| 0.041 | 39.0 | 8541 | 1.1780 | 0.8571 |
| 0.0165 | 40.0 | 8760 | 1.3517 | 0.8333 |
| 0.0361 | 41.0 | 8979 | 1.5071 | 0.8333 |
| 0.046 | 42.0 | 9198 | 1.2679 | 0.8571 |
| 0.0477 | 43.0 | 9417 | 1.7256 | 0.8333 |
| 0.0088 | 44.0 | 9636 | 1.2515 | 0.8333 |
| 0.0208 | 45.0 | 9855 | 1.8769 | 0.8571 |
| 0.0012 | 46.0 | 10074 | 1.9828 | 0.8333 |
| 0.0014 | 47.0 | 10293 | 1.9685 | 0.8571 |
| 0.0001 | 48.0 | 10512 | 1.7583 | 0.8571 |
| 0.0001 | 49.0 | 10731 | 2.0944 | 0.8571 |
| 0.0001 | 50.0 | 10950 | 2.0611 | 0.8571 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
gjyotin305/finale
|
gjyotin305
| 2023-12-25T19:40:02Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-25T19:36:29Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finale
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finale
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0132
- Accuracy: 0.9967
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0729 | 1.0 | 959 | 0.0170 | 0.9953 |
| 0.008 | 2.0 | 1918 | 0.0132 | 0.9967 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.14.1
|
AVIIAX/majicfan2
|
AVIIAX
| 2023-12-25T19:26:13Z | 6 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-25T19:25:26Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/41865/majicmix-fantasy
Original Author's DEMO image :

|
hkivancoral/hushem_40x_deit_small_rms_001_fold3
|
hkivancoral
| 2023-12-25T19:25:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T19:09:38Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_rms_001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7441860465116279
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1579
- Accuracy: 0.7442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2419 | 1.0 | 217 | 1.3981 | 0.2791 |
| 1.0235 | 2.0 | 434 | 1.3169 | 0.3953 |
| 0.8369 | 3.0 | 651 | 1.0743 | 0.4884 |
| 0.7963 | 4.0 | 868 | 0.6563 | 0.6977 |
| 0.7399 | 5.0 | 1085 | 1.1403 | 0.4651 |
| 0.591 | 6.0 | 1302 | 0.6390 | 0.7209 |
| 0.4772 | 7.0 | 1519 | 0.8818 | 0.6047 |
| 0.4582 | 8.0 | 1736 | 0.8295 | 0.6744 |
| 0.4273 | 9.0 | 1953 | 1.1233 | 0.4884 |
| 0.3402 | 10.0 | 2170 | 0.8028 | 0.7442 |
| 0.3174 | 11.0 | 2387 | 1.2880 | 0.5581 |
| 0.2909 | 12.0 | 2604 | 1.5844 | 0.6512 |
| 0.2204 | 13.0 | 2821 | 1.1940 | 0.6977 |
| 0.2639 | 14.0 | 3038 | 1.0276 | 0.6279 |
| 0.2085 | 15.0 | 3255 | 1.7122 | 0.6512 |
| 0.1551 | 16.0 | 3472 | 1.0876 | 0.7209 |
| 0.2066 | 17.0 | 3689 | 1.4826 | 0.6279 |
| 0.1259 | 18.0 | 3906 | 1.7194 | 0.6279 |
| 0.1381 | 19.0 | 4123 | 1.1881 | 0.7442 |
| 0.0864 | 20.0 | 4340 | 2.4912 | 0.7209 |
| 0.1059 | 21.0 | 4557 | 1.6650 | 0.6977 |
| 0.0958 | 22.0 | 4774 | 1.6843 | 0.6977 |
| 0.0803 | 23.0 | 4991 | 2.0214 | 0.6279 |
| 0.0716 | 24.0 | 5208 | 2.3668 | 0.6977 |
| 0.0335 | 25.0 | 5425 | 1.8384 | 0.6279 |
| 0.0722 | 26.0 | 5642 | 1.9563 | 0.6744 |
| 0.0543 | 27.0 | 5859 | 2.2739 | 0.6744 |
| 0.024 | 28.0 | 6076 | 1.7616 | 0.6977 |
| 0.0588 | 29.0 | 6293 | 1.9807 | 0.6977 |
| 0.0731 | 30.0 | 6510 | 2.0008 | 0.6279 |
| 0.0315 | 31.0 | 6727 | 2.2264 | 0.7209 |
| 0.0084 | 32.0 | 6944 | 2.2231 | 0.7674 |
| 0.0194 | 33.0 | 7161 | 2.3580 | 0.6977 |
| 0.0559 | 34.0 | 7378 | 2.5423 | 0.7209 |
| 0.0002 | 35.0 | 7595 | 2.6899 | 0.7674 |
| 0.0092 | 36.0 | 7812 | 2.7843 | 0.6744 |
| 0.0002 | 37.0 | 8029 | 2.7034 | 0.7442 |
| 0.016 | 38.0 | 8246 | 2.9844 | 0.7674 |
| 0.0006 | 39.0 | 8463 | 1.9924 | 0.8140 |
| 0.006 | 40.0 | 8680 | 2.8801 | 0.6977 |
| 0.0001 | 41.0 | 8897 | 2.7323 | 0.7674 |
| 0.0001 | 42.0 | 9114 | 3.2030 | 0.6977 |
| 0.0002 | 43.0 | 9331 | 3.6553 | 0.7674 |
| 0.0001 | 44.0 | 9548 | 2.9080 | 0.7209 |
| 0.0001 | 45.0 | 9765 | 2.8393 | 0.7442 |
| 0.0 | 46.0 | 9982 | 2.9525 | 0.7442 |
| 0.0 | 47.0 | 10199 | 3.0057 | 0.7442 |
| 0.0 | 48.0 | 10416 | 3.0880 | 0.7442 |
| 0.0 | 49.0 | 10633 | 3.1339 | 0.7442 |
| 0.0 | 50.0 | 10850 | 3.1579 | 0.7442 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
ybelkada/test-ddpo-tag
|
ybelkada
| 2023-12-25T19:24:30Z | 4 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"trl",
"ddpo",
"reinforcement-learning",
"text-to-image",
"stable-diffusion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-25T19:20:45Z |
---
license: apache-2.0
tags:
- trl
- ddpo
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL DDPO Model
This is a diffusion model that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for image generation conditioned on text.
|
BoccheseGiacomo/phi-2-finetuned-gsm8k-gb
|
BoccheseGiacomo
| 2023-12-25T19:21:50Z | 15 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"phi-msft",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-25T17:57:35Z |
---
license: other
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: phi-2-finetuned-gsm8k-gb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-finetuned-gsm8k-gb
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
ybelkada/test-ppo-tag
|
ybelkada
| 2023-12-25T19:08:44Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-12-25T19:08:27Z |
---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="ybelkada//var/tmp/tmpja4s4p3r/ybelkada/test-ppo-tag")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("ybelkada//var/tmp/tmpja4s4p3r/ybelkada/test-ppo-tag")
model = AutoModelForCausalLMWithValueHead.from_pretrained("ybelkada//var/tmp/tmpja4s4p3r/ybelkada/test-ppo-tag")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
hkivancoral/hushem_40x_deit_small_rms_001_fold1
|
hkivancoral
| 2023-12-25T18:53:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T18:38:10Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_rms_001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.4222222222222222
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_rms_001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2597
- Accuracy: 0.4222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3825 | 1.0 | 215 | 1.4688 | 0.2667 |
| 1.3534 | 2.0 | 430 | 1.4553 | 0.3778 |
| 0.9498 | 3.0 | 645 | 1.8460 | 0.3333 |
| 0.7874 | 4.0 | 860 | 1.0992 | 0.4444 |
| 0.6519 | 5.0 | 1075 | 1.5864 | 0.4222 |
| 0.6238 | 6.0 | 1290 | 1.5678 | 0.4444 |
| 0.6712 | 7.0 | 1505 | 1.5837 | 0.3778 |
| 0.6234 | 8.0 | 1720 | 1.4844 | 0.3778 |
| 0.6842 | 9.0 | 1935 | 1.4360 | 0.4 |
| 0.5244 | 10.0 | 2150 | 1.9225 | 0.3778 |
| 0.5422 | 11.0 | 2365 | 1.4512 | 0.4667 |
| 0.4482 | 12.0 | 2580 | 2.2789 | 0.3556 |
| 0.5899 | 13.0 | 2795 | 1.6124 | 0.4222 |
| 0.4227 | 14.0 | 3010 | 1.8210 | 0.4444 |
| 0.4862 | 15.0 | 3225 | 1.4215 | 0.4667 |
| 0.4615 | 16.0 | 3440 | 2.1496 | 0.3778 |
| 0.6895 | 17.0 | 3655 | 1.7698 | 0.4667 |
| 0.3741 | 18.0 | 3870 | 2.6905 | 0.3556 |
| 0.3762 | 19.0 | 4085 | 2.4546 | 0.4222 |
| 0.3383 | 20.0 | 4300 | 2.0176 | 0.3778 |
| 0.3622 | 21.0 | 4515 | 2.9706 | 0.4 |
| 0.3284 | 22.0 | 4730 | 2.9396 | 0.4 |
| 0.2403 | 23.0 | 4945 | 2.3459 | 0.4889 |
| 0.345 | 24.0 | 5160 | 3.1195 | 0.4222 |
| 0.3045 | 25.0 | 5375 | 2.4187 | 0.4667 |
| 0.2936 | 26.0 | 5590 | 2.9167 | 0.3556 |
| 0.249 | 27.0 | 5805 | 2.5521 | 0.4667 |
| 0.2161 | 28.0 | 6020 | 3.7842 | 0.3778 |
| 0.2382 | 29.0 | 6235 | 3.0584 | 0.4 |
| 0.1225 | 30.0 | 6450 | 4.4557 | 0.4 |
| 0.2075 | 31.0 | 6665 | 4.7131 | 0.3111 |
| 0.1575 | 32.0 | 6880 | 3.8714 | 0.3556 |
| 0.1516 | 33.0 | 7095 | 4.5510 | 0.4 |
| 0.1231 | 34.0 | 7310 | 5.0636 | 0.3778 |
| 0.0943 | 35.0 | 7525 | 4.2212 | 0.4 |
| 0.0741 | 36.0 | 7740 | 4.4947 | 0.4 |
| 0.0582 | 37.0 | 7955 | 4.8808 | 0.4222 |
| 0.0412 | 38.0 | 8170 | 5.2254 | 0.3778 |
| 0.0508 | 39.0 | 8385 | 5.2558 | 0.3556 |
| 0.0566 | 40.0 | 8600 | 5.9529 | 0.3556 |
| 0.0397 | 41.0 | 8815 | 5.9087 | 0.3333 |
| 0.0462 | 42.0 | 9030 | 6.2634 | 0.4444 |
| 0.0245 | 43.0 | 9245 | 6.0294 | 0.4222 |
| 0.0398 | 44.0 | 9460 | 6.9015 | 0.4222 |
| 0.0182 | 45.0 | 9675 | 5.5112 | 0.4667 |
| 0.0162 | 46.0 | 9890 | 6.0476 | 0.4889 |
| 0.0028 | 47.0 | 10105 | 6.5416 | 0.4667 |
| 0.0087 | 48.0 | 10320 | 6.8964 | 0.4444 |
| 0.0011 | 49.0 | 10535 | 7.0908 | 0.4222 |
| 0.0007 | 50.0 | 10750 | 7.2597 | 0.4222 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_GPT4_temp0_Seed102
|
behzadnet
| 2023-12-25T18:53:01Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-22T18:38:30Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
codingaslu/bloom_prompt_tuning_1703529701.4437609
|
codingaslu
| 2023-12-25T18:48:26Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"region:us"
] | null | 2023-12-25T18:48:25Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
duygucakir/emotion-analysis-with-distilbert
|
duygucakir
| 2023-12-25T18:39:04Z | 1 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-25T17:59:03Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: duygucakir/emotion-analysis-with-distilbert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# duygucakir/emotion-analysis-with-distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1399
- Validation Loss: 0.1659
- Train Accuracy: 0.9315
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.4126 | 0.1680 | 0.9325 | 0 |
| 0.1399 | 0.1659 | 0.9315 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.0
- Tokenizers 0.15.0
|
sanghakoh/distilbert-base-uncased-finetuned-squad
|
sanghakoh
| 2023-12-25T18:23:56Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-12-25T13:39:34Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4893 | 1.0 | 1384 | 1.2797 |
| 1.1182 | 2.0 | 2768 | 1.1815 |
| 0.9786 | 3.0 | 4152 | 1.1718 |
### Framework versions
- Transformers 4.33.2
- Pytorch 1.13.1+cu117
- Datasets 2.16.0
- Tokenizers 0.13.3
|
Simplicity-Ai/OpenDallE
|
Simplicity-Ai
| 2023-12-25T17:55:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-25T17:32:44Z |
---
license: creativeml-openrail-m
---
|
s4ouvik/multilingual_llm
|
s4ouvik
| 2023-12-25T17:51:03Z | 23 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-12-25T15:03:46Z |
---
license: apache-2.0
base_model: t5-small
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: multilingual_llm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multilingual_llm
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4068
- Bleu: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
Simplicity-Ai/drmshpr
|
Simplicity-Ai
| 2023-12-25T17:46:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-25T17:30:52Z |
---
license: creativeml-openrail-m
---
|
sr5434/SDXL-v1.0-sfx-step-800
|
sr5434
| 2023-12-25T17:40:43Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"en",
"dataset:vucinatim/spectrogram-captions",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2023-12-25T16:58:42Z |
---
license: openrail
datasets:
- vucinatim/spectrogram-captions
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
---
SDXL 1.0 fine-tuned on vucinatim/spectrogram-captions for 89 epochs (800 steps). It generates spectrograms for simple sounds. It currently does not produce very good sound effects, but I will train the model for longer in the future.
|
Simplicity-Ai/sdxl
|
Simplicity-Ai
| 2023-12-25T17:33:16Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-25T17:24:40Z |
---
license: creativeml-openrail-m
---
|
kunalcac/sks_kcachh
|
kunalcac
| 2023-12-25T17:30:17Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-12-25T17:30:15Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sks kcachh
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
hkivancoral/hushem_40x_deit_small_sgd_0001_fold5
|
hkivancoral
| 2023-12-25T17:26:43Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T17:10:40Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_sgd_0001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.4634146341463415
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_sgd_0001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1092
- Accuracy: 0.4634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
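The schedule above (`lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.1`) can be sketched as follows. This is a minimal illustration, not the Trainer's internal code; the function name and exact ramp shape are assumptions.

```python
def linear_warmup_lr(step, total_steps, base_lr, warmup_ratio=0.1):
    """Linearly ramp the learning rate up over the first warmup_ratio of
    training, then decay it linearly to zero over the remaining steps."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# With 50 epochs of 220 steps (11000 total, as in the table below), the
# learning rate peaks at 1e-4 around step 1100 and reaches 0 at step 11000.
```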
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7467 | 1.0 | 220 | 1.6098 | 0.2683 |
| 1.5306 | 2.0 | 440 | 1.5314 | 0.2683 |
| 1.3989 | 3.0 | 660 | 1.5004 | 0.2439 |
| 1.3588 | 4.0 | 880 | 1.4811 | 0.2195 |
| 1.3953 | 5.0 | 1100 | 1.4639 | 0.2683 |
| 1.3096 | 6.0 | 1320 | 1.4476 | 0.2439 |
| 1.2743 | 7.0 | 1540 | 1.4329 | 0.2683 |
| 1.2405 | 8.0 | 1760 | 1.4190 | 0.2927 |
| 1.253 | 9.0 | 1980 | 1.4052 | 0.3171 |
| 1.2253 | 10.0 | 2200 | 1.3912 | 0.3171 |
| 1.1663 | 11.0 | 2420 | 1.3767 | 0.3659 |
| 1.1699 | 12.0 | 2640 | 1.3616 | 0.3659 |
| 1.1615 | 13.0 | 2860 | 1.3463 | 0.3659 |
| 1.0999 | 14.0 | 3080 | 1.3303 | 0.3902 |
| 1.1286 | 15.0 | 3300 | 1.3148 | 0.3659 |
| 1.1333 | 16.0 | 3520 | 1.2990 | 0.3659 |
| 1.075 | 17.0 | 3740 | 1.2842 | 0.3659 |
| 1.0779 | 18.0 | 3960 | 1.2709 | 0.3659 |
| 1.0652 | 19.0 | 4180 | 1.2579 | 0.3659 |
| 1.0475 | 20.0 | 4400 | 1.2462 | 0.3659 |
| 1.0095 | 21.0 | 4620 | 1.2350 | 0.3902 |
| 1.0607 | 22.0 | 4840 | 1.2247 | 0.3902 |
| 1.0243 | 23.0 | 5060 | 1.2151 | 0.4146 |
| 1.0174 | 24.0 | 5280 | 1.2064 | 0.4146 |
| 0.9654 | 25.0 | 5500 | 1.1977 | 0.3902 |
| 1.017 | 26.0 | 5720 | 1.1899 | 0.4146 |
| 1.0002 | 27.0 | 5940 | 1.1820 | 0.3902 |
| 1.0191 | 28.0 | 6160 | 1.1750 | 0.3902 |
| 0.9876 | 29.0 | 6380 | 1.1683 | 0.3902 |
| 0.9526 | 30.0 | 6600 | 1.1623 | 0.4146 |
| 0.9957 | 31.0 | 6820 | 1.1566 | 0.4390 |
| 0.9778 | 32.0 | 7040 | 1.1513 | 0.4390 |
| 0.9223 | 33.0 | 7260 | 1.1464 | 0.4634 |
| 0.9281 | 34.0 | 7480 | 1.1418 | 0.4634 |
| 0.9107 | 35.0 | 7700 | 1.1376 | 0.4634 |
| 0.9485 | 36.0 | 7920 | 1.1336 | 0.4634 |
| 0.9035 | 37.0 | 8140 | 1.1298 | 0.4634 |
| 0.9223 | 38.0 | 8360 | 1.1266 | 0.4634 |
| 0.9312 | 39.0 | 8580 | 1.1235 | 0.4634 |
| 0.8782 | 40.0 | 8800 | 1.1209 | 0.4634 |
| 0.9252 | 41.0 | 9020 | 1.1184 | 0.4634 |
| 0.8989 | 42.0 | 9240 | 1.1164 | 0.4634 |
| 0.8959 | 43.0 | 9460 | 1.1145 | 0.4634 |
| 0.8589 | 44.0 | 9680 | 1.1130 | 0.4634 |
| 0.8899 | 45.0 | 9900 | 1.1117 | 0.4634 |
| 0.8915 | 46.0 | 10120 | 1.1107 | 0.4634 |
| 0.9043 | 47.0 | 10340 | 1.1100 | 0.4634 |
| 0.8309 | 48.0 | 10560 | 1.1095 | 0.4634 |
| 0.8724 | 49.0 | 10780 | 1.1093 | 0.4634 |
| 0.9011 | 50.0 | 11000 | 1.1092 | 0.4634 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/hushem_40x_deit_small_sgd_0001_fold4
|
hkivancoral
| 2023-12-25T17:10:35Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T16:54:45Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_sgd_0001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5238095238095238
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_sgd_0001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1108
- Accuracy: 0.5238
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.763 | 1.0 | 219 | 1.5785 | 0.2857 |
| 1.5719 | 2.0 | 438 | 1.5003 | 0.2857 |
| 1.452 | 3.0 | 657 | 1.4620 | 0.2143 |
| 1.4006 | 4.0 | 876 | 1.4368 | 0.2143 |
| 1.3854 | 5.0 | 1095 | 1.4164 | 0.2143 |
| 1.3041 | 6.0 | 1314 | 1.3988 | 0.2381 |
| 1.296 | 7.0 | 1533 | 1.3830 | 0.2619 |
| 1.276 | 8.0 | 1752 | 1.3685 | 0.2381 |
| 1.2474 | 9.0 | 1971 | 1.3546 | 0.2381 |
| 1.2128 | 10.0 | 2190 | 1.3420 | 0.2381 |
| 1.2113 | 11.0 | 2409 | 1.3297 | 0.2381 |
| 1.2121 | 12.0 | 2628 | 1.3176 | 0.2619 |
| 1.1861 | 13.0 | 2847 | 1.3062 | 0.2619 |
| 1.1756 | 14.0 | 3066 | 1.2946 | 0.3095 |
| 1.1431 | 15.0 | 3285 | 1.2837 | 0.3571 |
| 1.1487 | 16.0 | 3504 | 1.2730 | 0.3095 |
| 1.1705 | 17.0 | 3723 | 1.2625 | 0.3095 |
| 1.1482 | 18.0 | 3942 | 1.2522 | 0.2857 |
| 1.1037 | 19.0 | 4161 | 1.2421 | 0.3095 |
| 1.0872 | 20.0 | 4380 | 1.2325 | 0.3810 |
| 1.1026 | 21.0 | 4599 | 1.2229 | 0.4048 |
| 1.0517 | 22.0 | 4818 | 1.2135 | 0.4048 |
| 1.0226 | 23.0 | 5037 | 1.2052 | 0.4286 |
| 1.0485 | 24.0 | 5256 | 1.1974 | 0.4286 |
| 1.0319 | 25.0 | 5475 | 1.1896 | 0.4286 |
| 0.9983 | 26.0 | 5694 | 1.1821 | 0.4286 |
| 1.0014 | 27.0 | 5913 | 1.1755 | 0.4048 |
| 1.0162 | 28.0 | 6132 | 1.1694 | 0.4048 |
| 0.986 | 29.0 | 6351 | 1.1635 | 0.4048 |
| 0.9747 | 30.0 | 6570 | 1.1582 | 0.4286 |
| 0.9811 | 31.0 | 6789 | 1.1532 | 0.4286 |
| 0.9907 | 32.0 | 7008 | 1.1482 | 0.4286 |
| 0.9904 | 33.0 | 7227 | 1.1437 | 0.4286 |
| 0.9293 | 34.0 | 7446 | 1.1399 | 0.4524 |
| 0.9752 | 35.0 | 7665 | 1.1362 | 0.4524 |
| 0.9789 | 36.0 | 7884 | 1.1326 | 0.4762 |
| 0.9516 | 37.0 | 8103 | 1.1293 | 0.5 |
| 0.9703 | 38.0 | 8322 | 1.1262 | 0.5 |
| 0.8944 | 39.0 | 8541 | 1.1236 | 0.5238 |
| 0.9388 | 40.0 | 8760 | 1.1213 | 0.5238 |
| 0.9573 | 41.0 | 8979 | 1.1191 | 0.5238 |
| 0.9441 | 42.0 | 9198 | 1.1172 | 0.5238 |
| 0.9438 | 43.0 | 9417 | 1.1156 | 0.5238 |
| 0.9221 | 44.0 | 9636 | 1.1141 | 0.5238 |
| 0.9079 | 45.0 | 9855 | 1.1130 | 0.5238 |
| 0.962 | 46.0 | 10074 | 1.1121 | 0.5238 |
| 0.9464 | 47.0 | 10293 | 1.1114 | 0.5238 |
| 0.9323 | 48.0 | 10512 | 1.1110 | 0.5238 |
| 0.9581 | 49.0 | 10731 | 1.1108 | 0.5238 |
| 0.942 | 50.0 | 10950 | 1.1108 | 0.5238 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/hushem_40x_deit_small_sgd_001_fold4
|
hkivancoral
| 2023-12-25T17:10:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T16:54:32Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_sgd_001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8809523809523809
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_sgd_001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2365
- Accuracy: 0.8810
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2757 | 1.0 | 219 | 1.3298 | 0.2619 |
| 1.0766 | 2.0 | 438 | 1.1919 | 0.4048 |
| 0.9095 | 3.0 | 657 | 1.0786 | 0.5476 |
| 0.7507 | 4.0 | 876 | 0.9821 | 0.5476 |
| 0.6994 | 5.0 | 1095 | 0.8850 | 0.5952 |
| 0.5864 | 6.0 | 1314 | 0.8204 | 0.6429 |
| 0.4328 | 7.0 | 1533 | 0.7576 | 0.6905 |
| 0.4293 | 8.0 | 1752 | 0.6999 | 0.7143 |
| 0.3464 | 9.0 | 1971 | 0.6320 | 0.7143 |
| 0.3175 | 10.0 | 2190 | 0.5956 | 0.7381 |
| 0.2382 | 11.0 | 2409 | 0.5588 | 0.7381 |
| 0.2672 | 12.0 | 2628 | 0.5195 | 0.7381 |
| 0.2016 | 13.0 | 2847 | 0.4850 | 0.8095 |
| 0.1832 | 14.0 | 3066 | 0.4528 | 0.8095 |
| 0.1406 | 15.0 | 3285 | 0.4338 | 0.8333 |
| 0.1305 | 16.0 | 3504 | 0.3948 | 0.8571 |
| 0.1504 | 17.0 | 3723 | 0.3785 | 0.8571 |
| 0.1139 | 18.0 | 3942 | 0.3689 | 0.8571 |
| 0.096 | 19.0 | 4161 | 0.3548 | 0.8571 |
| 0.0869 | 20.0 | 4380 | 0.3393 | 0.8571 |
| 0.0874 | 21.0 | 4599 | 0.3057 | 0.8571 |
| 0.0797 | 22.0 | 4818 | 0.2990 | 0.8571 |
| 0.0596 | 23.0 | 5037 | 0.2862 | 0.8571 |
| 0.053 | 24.0 | 5256 | 0.3012 | 0.8810 |
| 0.0562 | 25.0 | 5475 | 0.2885 | 0.8810 |
| 0.0463 | 26.0 | 5694 | 0.2676 | 0.8810 |
| 0.0374 | 27.0 | 5913 | 0.2870 | 0.8810 |
| 0.037 | 28.0 | 6132 | 0.2638 | 0.8810 |
| 0.0341 | 29.0 | 6351 | 0.2690 | 0.8810 |
| 0.0327 | 30.0 | 6570 | 0.2566 | 0.8810 |
| 0.0238 | 31.0 | 6789 | 0.2611 | 0.8810 |
| 0.0256 | 32.0 | 7008 | 0.2643 | 0.8810 |
| 0.0284 | 33.0 | 7227 | 0.2717 | 0.8810 |
| 0.0213 | 34.0 | 7446 | 0.2627 | 0.8810 |
| 0.0191 | 35.0 | 7665 | 0.2395 | 0.8810 |
| 0.0246 | 36.0 | 7884 | 0.2517 | 0.8810 |
| 0.0207 | 37.0 | 8103 | 0.2515 | 0.8810 |
| 0.0134 | 38.0 | 8322 | 0.2484 | 0.8810 |
| 0.0162 | 39.0 | 8541 | 0.2279 | 0.8810 |
| 0.0165 | 40.0 | 8760 | 0.2516 | 0.8810 |
| 0.0146 | 41.0 | 8979 | 0.2253 | 0.8810 |
| 0.0168 | 42.0 | 9198 | 0.2425 | 0.8810 |
| 0.0155 | 43.0 | 9417 | 0.2370 | 0.8810 |
| 0.0145 | 44.0 | 9636 | 0.2352 | 0.8810 |
| 0.0118 | 45.0 | 9855 | 0.2414 | 0.8810 |
| 0.0107 | 46.0 | 10074 | 0.2338 | 0.8810 |
| 0.0124 | 47.0 | 10293 | 0.2350 | 0.8810 |
| 0.0125 | 48.0 | 10512 | 0.2352 | 0.8810 |
| 0.0138 | 49.0 | 10731 | 0.2367 | 0.8810 |
| 0.0183 | 50.0 | 10950 | 0.2365 | 0.8810 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
weakit-v/tinyroberta-squad2-onnx
|
weakit-v
| 2023-12-25T16:58:36Z | 4 | 0 |
transformers
|
[
"transformers",
"onnx",
"roberta",
"question-answering",
"en",
"dataset:squad_v2",
"arxiv:1909.10351",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-12-25T16:55:37Z |
---
language: en
license: cc-by-4.0
datasets:
- squad_v2
model-index:
- name: deepset/tinyroberta-squad2
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_v2
type: squad_v2
config: squad_v2
split: validation
metrics:
- type: exact_match
value: 78.8627
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDNlZDU4ODAxMzY5NGFiMTMyZmQ1M2ZhZjMyODA1NmFlOGMxNzYxNTA4OGE5YTBkZWViZjBkNGQ2ZmMxZjVlMCIsInZlcnNpb24iOjF9.Wgu599r6TvgMLTrHlLMVAbUtKD_3b70iJ5QSeDQ-bRfUsVk6Sz9OsJCp47riHJVlmSYzcDj_z_3jTcUjCFFXBg
- type: f1
value: 82.0355
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTFkMzEzMWNiZDRhMGZlODhkYzcwZTZiMDFjZDg2YjllZmUzYWM5NTgwNGQ2NGYyMDk2ZGQwN2JmMTE5NTc3YiIsInZlcnNpb24iOjF9.ChgaYpuRHd5WeDFjtiAHUyczxtoOD_M5WR8834jtbf7wXhdGOnZKdZ1KclmhoI5NuAGc1NptX-G0zQ5FTHEcBA
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- type: exact_match
value: 83.860
name: Exact Match
- type: f1
value: 90.752
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: adversarial_qa
type: adversarial_qa
config: adversarialQA
split: validation
metrics:
- type: exact_match
value: 25.967
name: Exact Match
- type: f1
value: 37.006
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squad_adversarial
type: squad_adversarial
config: AddOneSent
split: validation
metrics:
- type: exact_match
value: 76.329
name: Exact Match
- type: f1
value: 83.292
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts amazon
type: squadshifts
config: amazon
split: test
metrics:
- type: exact_match
value: 63.915
name: Exact Match
- type: f1
value: 78.395
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts new_wiki
type: squadshifts
config: new_wiki
split: test
metrics:
- type: exact_match
value: 80.297
name: Exact Match
- type: f1
value: 89.808
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts nyt
type: squadshifts
config: nyt
split: test
metrics:
- type: exact_match
value: 80.149
name: Exact Match
- type: f1
value: 88.321
name: F1
- task:
type: question-answering
name: Question Answering
dataset:
name: squadshifts reddit
type: squadshifts
config: reddit
split: test
metrics:
- type: exact_match
value: 66.959
name: Exact Match
- type: f1
value: 79.300
name: F1
---
**This repo contains the model exported to ONNX weights.**
**Everything is provided as-is.**
---
# tinyroberta-squad2
This is the *distilled* version of the [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) model. This model has a comparable prediction quality and runs at twice the speed of the base model.
## Overview
**Language model:** tinyroberta-squad2
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example QA pipeline on Haystack](https://haystack.deepset.ai/tutorials/first-qa-system)
**Infrastructure**: 4x Tesla v100
## Hyperparameters
```
batch_size = 96
n_epochs = 4
base_LM_model = "deepset/tinyroberta-squad2-step1"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
distillation_loss_weight = 0.75
temperature = 1.5
teacher = "deepset/roberta-large-squad2"
```
## Distillation
This model was distilled using the TinyBERT approach described in [this paper](https://arxiv.org/pdf/1909.10351.pdf) and implemented in [haystack](https://github.com/deepset-ai/haystack).
Firstly, we have performed intermediate layer distillation with roberta-base as the teacher which resulted in [deepset/tinyroberta-6l-768d](https://huggingface.co/deepset/tinyroberta-6l-768d).
Secondly, we have performed task-specific distillation with [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) as the teacher for further intermediate layer distillation on an augmented version of SQuADv2 and then with [deepset/roberta-large-squad2](https://huggingface.co/deepset/roberta-large-squad2) as the teacher for prediction layer distillation.
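Under the hyperparameters above (`temperature = 1.5`, `distillation_loss_weight = 0.75`), the prediction-layer distillation term is conventionally a temperature-scaled soft cross-entropy against the teacher's distribution, blended with the hard-label loss. A minimal NumPy sketch of that standard formulation follows; the exact loss Haystack implements may differ.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, hard_loss, T=1.5, alpha=0.75):
    """Standard knowledge-distillation objective: alpha-weighted soft
    cross-entropy (scaled by T^2, as in Hinton et al.) plus hard-label loss."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    soft = -(p_teacher * log_p_student).sum(axis=-1).mean() * T * T
    return alpha * soft + (1 - alpha) * hard_loss
```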
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import FARMReader, TransformersReader

reader = FARMReader(model_name_or_path="deepset/tinyroberta-squad2")
# or
reader = TransformersReader(model_name_or_path="deepset/tinyroberta-squad2")
```
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/tinyroberta-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 78.69114798281817,
"f1": 81.9198998536977,
"total": 11873,
"HasAns_exact": 76.19770580296895,
"HasAns_f1": 82.66446878592329,
"HasAns_total": 5928,
"NoAns_exact": 81.17746005046257,
"NoAns_f1": 81.17746005046257,
"NoAns_total": 5945
```
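As a sanity check, the overall scores above are the answerable (`HasAns`) and unanswerable (`NoAns`) splits weighted by their example counts, which is how the official SQuAD 2.0 script aggregates them:

```python
# Values copied from the eval output above.
has_ans = {"exact": 76.19770580296895, "f1": 82.66446878592329, "n": 5928}
no_ans  = {"exact": 81.17746005046257, "f1": 81.17746005046257, "n": 5945}
total = has_ans["n"] + no_ans["n"]  # 11873

# Count-weighted averages recover the reported overall exact/F1.
overall_exact = (has_ans["exact"] * has_ans["n"] + no_ans["exact"] * no_ans["n"]) / total
overall_f1 = (has_ans["f1"] * has_ans["n"] + no_ans["f1"] * no_ans["n"]) / total
```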
## Authors
**Branden Chan:** branden.chan@deepset.ai
**Timo Möller:** timo.moeller@deepset.ai
**Malte Pietsch:** malte.pietsch@deepset.ai
**Tanay Soni:** tanay.soni@deepset.ai
**Michel Bartels:** michel.bartels@deepset.ai
## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
<img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>
[deepset](http://deepset.ai/) is the company behind the open-source NLP framework [Haystack](https://haystack.deepset.ai/), which is designed to help you build production-ready NLP systems that use question answering, summarization, ranking, etc.
Some of our other work:
- [roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2)
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
## Get in touch and join the Haystack community
<p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community/join">Discord community open to everyone!</a></strong></p>
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs)
|
hkivancoral/hushem_40x_deit_small_sgd_0001_fold3
|
hkivancoral
| 2023-12-25T16:54:34Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T16:38:42Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_sgd_0001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5813953488372093
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_sgd_0001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0466
- Accuracy: 0.5814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6887 | 1.0 | 217 | 1.4610 | 0.2558 |
| 1.5962 | 2.0 | 434 | 1.3845 | 0.3488 |
| 1.5124 | 3.0 | 651 | 1.3560 | 0.3721 |
| 1.442 | 4.0 | 868 | 1.3419 | 0.3721 |
| 1.41 | 5.0 | 1085 | 1.3313 | 0.3488 |
| 1.3709 | 6.0 | 1302 | 1.3218 | 0.3721 |
| 1.3157 | 7.0 | 1519 | 1.3125 | 0.3721 |
| 1.3328 | 8.0 | 1736 | 1.3039 | 0.3488 |
| 1.3107 | 9.0 | 1953 | 1.2950 | 0.3488 |
| 1.2568 | 10.0 | 2170 | 1.2861 | 0.3488 |
| 1.2226 | 11.0 | 2387 | 1.2769 | 0.3256 |
| 1.198 | 12.0 | 2604 | 1.2671 | 0.3256 |
| 1.232 | 13.0 | 2821 | 1.2570 | 0.3488 |
| 1.1803 | 14.0 | 3038 | 1.2472 | 0.3488 |
| 1.214 | 15.0 | 3255 | 1.2376 | 0.3488 |
| 1.208 | 16.0 | 3472 | 1.2274 | 0.3953 |
| 1.1406 | 17.0 | 3689 | 1.2176 | 0.3953 |
| 1.1243 | 18.0 | 3906 | 1.2072 | 0.3953 |
| 1.1316 | 19.0 | 4123 | 1.1970 | 0.4884 |
| 1.1119 | 20.0 | 4340 | 1.1873 | 0.4884 |
| 1.117 | 21.0 | 4557 | 1.1775 | 0.5116 |
| 1.0609 | 22.0 | 4774 | 1.1681 | 0.5116 |
| 1.0751 | 23.0 | 4991 | 1.1588 | 0.5581 |
| 1.058 | 24.0 | 5208 | 1.1499 | 0.5581 |
| 1.0301 | 25.0 | 5425 | 1.1417 | 0.5581 |
| 1.089 | 26.0 | 5642 | 1.1338 | 0.5581 |
| 0.9909 | 27.0 | 5859 | 1.1255 | 0.5814 |
| 0.9932 | 28.0 | 6076 | 1.1180 | 0.5814 |
| 1.026 | 29.0 | 6293 | 1.1110 | 0.5814 |
| 1.0236 | 30.0 | 6510 | 1.1044 | 0.5814 |
| 1.0169 | 31.0 | 6727 | 1.0980 | 0.5814 |
| 1.0049 | 32.0 | 6944 | 1.0921 | 0.5814 |
| 1.0261 | 33.0 | 7161 | 1.0868 | 0.5814 |
| 0.994 | 34.0 | 7378 | 1.0819 | 0.5814 |
| 0.9887 | 35.0 | 7595 | 1.0769 | 0.5581 |
| 1.0137 | 36.0 | 7812 | 1.0725 | 0.5581 |
| 0.9359 | 37.0 | 8029 | 1.0687 | 0.5581 |
| 0.9531 | 38.0 | 8246 | 1.0651 | 0.5581 |
| 0.9682 | 39.0 | 8463 | 1.0620 | 0.5581 |
| 0.9947 | 40.0 | 8680 | 1.0590 | 0.5581 |
| 0.9063 | 41.0 | 8897 | 1.0565 | 0.5581 |
| 1.0195 | 42.0 | 9114 | 1.0543 | 0.5581 |
| 0.966 | 43.0 | 9331 | 1.0523 | 0.5581 |
| 0.9409 | 44.0 | 9548 | 1.0506 | 0.5581 |
| 0.9327 | 45.0 | 9765 | 1.0492 | 0.5581 |
| 0.9575 | 46.0 | 9982 | 1.0481 | 0.5814 |
| 0.9627 | 47.0 | 10199 | 1.0474 | 0.5814 |
| 0.9553 | 48.0 | 10416 | 1.0469 | 0.5814 |
| 0.9631 | 49.0 | 10633 | 1.0467 | 0.5814 |
| 0.944 | 50.0 | 10850 | 1.0466 | 0.5814 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/hushem_40x_deit_small_sgd_001_fold3
|
hkivancoral
| 2023-12-25T16:54:24Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T16:38:34Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_sgd_001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8837209302325582
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_sgd_001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3304
- Accuracy: 0.8837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
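The `linear` scheduler with a 0.1 warmup ratio means the learning rate ramps up over the first 10% of steps and then decays linearly to zero. A minimal pure-Python sketch of that schedule, assuming the step counts implied by the results table (217 steps/epoch × 50 epochs = 10850 total steps):

```python
def linear_warmup_lr(step, total_steps, base_lr=0.001, warmup_ratio=0.1):
    """Linear schedule with warmup, as configured above (a sketch,
    not the exact transformers implementation)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Warmup phase: ramp linearly from 0 up to base_lr.
        return base_lr * step / warmup_steps
    # Decay phase: fall linearly from base_lr back to 0.
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

# With total_steps=10850 and warmup_ratio=0.1, warmup ends at step 1085,
# where the learning rate peaks at base_lr before decaying.
```

This mirrors what `lr_scheduler_type: linear` plus `lr_scheduler_warmup_ratio: 0.1` produce via `transformers.get_linear_schedule_with_warmup`; the function name here is illustrative.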
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2372 | 1.0 | 217 | 1.2798 | 0.3488 |
| 1.0621 | 2.0 | 434 | 1.1335 | 0.5814 |
| 0.8881 | 3.0 | 651 | 1.0243 | 0.5814 |
| 0.7868 | 4.0 | 868 | 0.9174 | 0.6279 |
| 0.6948 | 5.0 | 1085 | 0.8587 | 0.6279 |
| 0.5714 | 6.0 | 1302 | 0.7810 | 0.7209 |
| 0.4585 | 7.0 | 1519 | 0.7011 | 0.8140 |
| 0.4277 | 8.0 | 1736 | 0.6580 | 0.7907 |
| 0.3688 | 9.0 | 1953 | 0.6164 | 0.8140 |
| 0.2836 | 10.0 | 2170 | 0.5578 | 0.8140 |
| 0.2148 | 11.0 | 2387 | 0.5322 | 0.8140 |
| 0.2211 | 12.0 | 2604 | 0.5199 | 0.8140 |
| 0.2014 | 13.0 | 2821 | 0.4865 | 0.8140 |
| 0.1799 | 14.0 | 3038 | 0.4815 | 0.8140 |
| 0.1565 | 15.0 | 3255 | 0.4749 | 0.7907 |
| 0.1129 | 16.0 | 3472 | 0.4440 | 0.8372 |
| 0.0992 | 17.0 | 3689 | 0.4542 | 0.7907 |
| 0.1 | 18.0 | 3906 | 0.4290 | 0.8140 |
| 0.0944 | 19.0 | 4123 | 0.4149 | 0.8140 |
| 0.0856 | 20.0 | 4340 | 0.4111 | 0.8372 |
| 0.0816 | 21.0 | 4557 | 0.4115 | 0.8140 |
| 0.0563 | 22.0 | 4774 | 0.3956 | 0.7907 |
| 0.0625 | 23.0 | 4991 | 0.3834 | 0.7907 |
| 0.0683 | 24.0 | 5208 | 0.3893 | 0.7907 |
| 0.0454 | 25.0 | 5425 | 0.3773 | 0.8140 |
| 0.0571 | 26.0 | 5642 | 0.3874 | 0.7907 |
| 0.0322 | 27.0 | 5859 | 0.3743 | 0.8140 |
| 0.0339 | 28.0 | 6076 | 0.3713 | 0.8372 |
| 0.0345 | 29.0 | 6293 | 0.3616 | 0.8372 |
| 0.0434 | 30.0 | 6510 | 0.3686 | 0.8372 |
| 0.0377 | 31.0 | 6727 | 0.3495 | 0.8605 |
| 0.0295 | 32.0 | 6944 | 0.3476 | 0.8372 |
| 0.0279 | 33.0 | 7161 | 0.3534 | 0.8605 |
| 0.0232 | 34.0 | 7378 | 0.3489 | 0.8372 |
| 0.0275 | 35.0 | 7595 | 0.3346 | 0.8837 |
| 0.0214 | 36.0 | 7812 | 0.3309 | 0.8605 |
| 0.018 | 37.0 | 8029 | 0.3342 | 0.8605 |
| 0.0167 | 38.0 | 8246 | 0.3289 | 0.8837 |
| 0.0196 | 39.0 | 8463 | 0.3389 | 0.8605 |
| 0.0269 | 40.0 | 8680 | 0.3388 | 0.8605 |
| 0.0126 | 41.0 | 8897 | 0.3309 | 0.8605 |
| 0.0119 | 42.0 | 9114 | 0.3316 | 0.8837 |
| 0.0174 | 43.0 | 9331 | 0.3268 | 0.8837 |
| 0.0199 | 44.0 | 9548 | 0.3304 | 0.8837 |
| 0.0115 | 45.0 | 9765 | 0.3378 | 0.8605 |
| 0.0138 | 46.0 | 9982 | 0.3301 | 0.8837 |
| 0.0107 | 47.0 | 10199 | 0.3312 | 0.8605 |
| 0.0108 | 48.0 | 10416 | 0.3294 | 0.9070 |
| 0.0125 | 49.0 | 10633 | 0.3301 | 0.8837 |
| 0.0148 | 50.0 | 10850 | 0.3304 | 0.8837 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
weakit-v/bge-base-en-v1.5-onnx
|
weakit-v
| 2023-12-25T16:50:36Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"onnx",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"en",
"arxiv:2310.07554",
"arxiv:2309.07597",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-12-25T12:02:54Z |
---
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: bge-base-en-v1.5
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.14925373134328
- type: ap
value: 39.32336517995478
- type: f1
value: 70.16902252611425
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.386825
- type: ap
value: 90.21276917991995
- type: f1
value: 93.37741030006174
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.846000000000004
- type: f1
value: 48.14646269778261
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.754000000000005
- type: map_at_10
value: 55.761
- type: map_at_100
value: 56.330999999999996
- type: map_at_1000
value: 56.333999999999996
- type: map_at_3
value: 51.92
- type: map_at_5
value: 54.010999999999996
- type: mrr_at_1
value: 41.181
- type: mrr_at_10
value: 55.967999999999996
- type: mrr_at_100
value: 56.538
- type: mrr_at_1000
value: 56.542
- type: mrr_at_3
value: 51.980000000000004
- type: mrr_at_5
value: 54.208999999999996
- type: ndcg_at_1
value: 40.754000000000005
- type: ndcg_at_10
value: 63.605000000000004
- type: ndcg_at_100
value: 66.05199999999999
- type: ndcg_at_1000
value: 66.12
- type: ndcg_at_3
value: 55.708
- type: ndcg_at_5
value: 59.452000000000005
- type: precision_at_1
value: 40.754000000000005
- type: precision_at_10
value: 8.841000000000001
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.238
- type: precision_at_5
value: 15.149000000000001
- type: recall_at_1
value: 40.754000000000005
- type: recall_at_10
value: 88.407
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 66.714
- type: recall_at_5
value: 75.747
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.74884539679369
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 42.8075893810716
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.128470519187736
- type: mrr
value: 74.28065778481289
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.24629081484655
- type: cos_sim_spearman
value: 86.93752309911496
- type: euclidean_pearson
value: 87.58589628573816
- type: euclidean_spearman
value: 88.05622328825284
- type: manhattan_pearson
value: 87.5594959805773
- type: manhattan_spearman
value: 88.19658793233961
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.9512987012987
- type: f1
value: 86.92515357973708
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.10263762928872
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.69711517426737
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.327
- type: map_at_10
value: 44.099
- type: map_at_100
value: 45.525
- type: map_at_1000
value: 45.641999999999996
- type: map_at_3
value: 40.47
- type: map_at_5
value: 42.36
- type: mrr_at_1
value: 39.199
- type: mrr_at_10
value: 49.651
- type: mrr_at_100
value: 50.29
- type: mrr_at_1000
value: 50.329
- type: mrr_at_3
value: 46.924
- type: mrr_at_5
value: 48.548
- type: ndcg_at_1
value: 39.199
- type: ndcg_at_10
value: 50.773
- type: ndcg_at_100
value: 55.67999999999999
- type: ndcg_at_1000
value: 57.495
- type: ndcg_at_3
value: 45.513999999999996
- type: ndcg_at_5
value: 47.703
- type: precision_at_1
value: 39.199
- type: precision_at_10
value: 9.914000000000001
- type: precision_at_100
value: 1.5310000000000001
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 21.984
- type: precision_at_5
value: 15.737000000000002
- type: recall_at_1
value: 32.327
- type: recall_at_10
value: 63.743
- type: recall_at_100
value: 84.538
- type: recall_at_1000
value: 96.089
- type: recall_at_3
value: 48.065000000000005
- type: recall_at_5
value: 54.519
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.671
- type: map_at_10
value: 42.954
- type: map_at_100
value: 44.151
- type: map_at_1000
value: 44.287
- type: map_at_3
value: 39.912
- type: map_at_5
value: 41.798
- type: mrr_at_1
value: 41.465
- type: mrr_at_10
value: 49.351
- type: mrr_at_100
value: 49.980000000000004
- type: mrr_at_1000
value: 50.016000000000005
- type: mrr_at_3
value: 47.144000000000005
- type: mrr_at_5
value: 48.592999999999996
- type: ndcg_at_1
value: 41.465
- type: ndcg_at_10
value: 48.565999999999995
- type: ndcg_at_100
value: 52.76499999999999
- type: ndcg_at_1000
value: 54.749
- type: ndcg_at_3
value: 44.57
- type: ndcg_at_5
value: 46.759
- type: precision_at_1
value: 41.465
- type: precision_at_10
value: 9.107999999999999
- type: precision_at_100
value: 1.433
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 21.423000000000002
- type: precision_at_5
value: 15.414
- type: recall_at_1
value: 32.671
- type: recall_at_10
value: 57.738
- type: recall_at_100
value: 75.86500000000001
- type: recall_at_1000
value: 88.36
- type: recall_at_3
value: 45.626
- type: recall_at_5
value: 51.812000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 41.185
- type: map_at_10
value: 53.929
- type: map_at_100
value: 54.92
- type: map_at_1000
value: 54.967999999999996
- type: map_at_3
value: 50.70400000000001
- type: map_at_5
value: 52.673
- type: mrr_at_1
value: 47.398
- type: mrr_at_10
value: 57.303000000000004
- type: mrr_at_100
value: 57.959
- type: mrr_at_1000
value: 57.985
- type: mrr_at_3
value: 54.932
- type: mrr_at_5
value: 56.464999999999996
- type: ndcg_at_1
value: 47.398
- type: ndcg_at_10
value: 59.653
- type: ndcg_at_100
value: 63.627
- type: ndcg_at_1000
value: 64.596
- type: ndcg_at_3
value: 54.455
- type: ndcg_at_5
value: 57.245000000000005
- type: precision_at_1
value: 47.398
- type: precision_at_10
value: 9.524000000000001
- type: precision_at_100
value: 1.243
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 24.389
- type: precision_at_5
value: 16.752
- type: recall_at_1
value: 41.185
- type: recall_at_10
value: 73.193
- type: recall_at_100
value: 90.357
- type: recall_at_1000
value: 97.253
- type: recall_at_3
value: 59.199999999999996
- type: recall_at_5
value: 66.118
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.27
- type: map_at_10
value: 36.223
- type: map_at_100
value: 37.218
- type: map_at_1000
value: 37.293
- type: map_at_3
value: 33.503
- type: map_at_5
value: 35.097
- type: mrr_at_1
value: 29.492
- type: mrr_at_10
value: 38.352000000000004
- type: mrr_at_100
value: 39.188
- type: mrr_at_1000
value: 39.247
- type: mrr_at_3
value: 35.876000000000005
- type: mrr_at_5
value: 37.401
- type: ndcg_at_1
value: 29.492
- type: ndcg_at_10
value: 41.239
- type: ndcg_at_100
value: 46.066
- type: ndcg_at_1000
value: 47.992000000000004
- type: ndcg_at_3
value: 36.11
- type: ndcg_at_5
value: 38.772
- type: precision_at_1
value: 29.492
- type: precision_at_10
value: 6.260000000000001
- type: precision_at_100
value: 0.914
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 15.104000000000001
- type: precision_at_5
value: 10.644
- type: recall_at_1
value: 27.27
- type: recall_at_10
value: 54.589
- type: recall_at_100
value: 76.70700000000001
- type: recall_at_1000
value: 91.158
- type: recall_at_3
value: 40.974
- type: recall_at_5
value: 47.327000000000005
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.848
- type: map_at_10
value: 26.207
- type: map_at_100
value: 27.478
- type: map_at_1000
value: 27.602
- type: map_at_3
value: 23.405
- type: map_at_5
value: 24.98
- type: mrr_at_1
value: 21.891
- type: mrr_at_10
value: 31.041999999999998
- type: mrr_at_100
value: 32.092
- type: mrr_at_1000
value: 32.151999999999994
- type: mrr_at_3
value: 28.358
- type: mrr_at_5
value: 29.969
- type: ndcg_at_1
value: 21.891
- type: ndcg_at_10
value: 31.585
- type: ndcg_at_100
value: 37.531
- type: ndcg_at_1000
value: 40.256
- type: ndcg_at_3
value: 26.508
- type: ndcg_at_5
value: 28.894
- type: precision_at_1
value: 21.891
- type: precision_at_10
value: 5.795999999999999
- type: precision_at_100
value: 0.9990000000000001
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 12.769
- type: precision_at_5
value: 9.279
- type: recall_at_1
value: 17.848
- type: recall_at_10
value: 43.452
- type: recall_at_100
value: 69.216
- type: recall_at_1000
value: 88.102
- type: recall_at_3
value: 29.18
- type: recall_at_5
value: 35.347
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.94
- type: map_at_10
value: 41.248000000000005
- type: map_at_100
value: 42.495
- type: map_at_1000
value: 42.602000000000004
- type: map_at_3
value: 37.939
- type: map_at_5
value: 39.924
- type: mrr_at_1
value: 37.824999999999996
- type: mrr_at_10
value: 47.041
- type: mrr_at_100
value: 47.83
- type: mrr_at_1000
value: 47.878
- type: mrr_at_3
value: 44.466
- type: mrr_at_5
value: 46.111999999999995
- type: ndcg_at_1
value: 37.824999999999996
- type: ndcg_at_10
value: 47.223
- type: ndcg_at_100
value: 52.394
- type: ndcg_at_1000
value: 54.432
- type: ndcg_at_3
value: 42.032000000000004
- type: ndcg_at_5
value: 44.772
- type: precision_at_1
value: 37.824999999999996
- type: precision_at_10
value: 8.393
- type: precision_at_100
value: 1.2890000000000001
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 19.698
- type: precision_at_5
value: 14.013
- type: recall_at_1
value: 30.94
- type: recall_at_10
value: 59.316
- type: recall_at_100
value: 80.783
- type: recall_at_1000
value: 94.15400000000001
- type: recall_at_3
value: 44.712
- type: recall_at_5
value: 51.932
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.104
- type: map_at_10
value: 36.675999999999995
- type: map_at_100
value: 38.076
- type: map_at_1000
value: 38.189
- type: map_at_3
value: 33.733999999999995
- type: map_at_5
value: 35.287
- type: mrr_at_1
value: 33.904
- type: mrr_at_10
value: 42.55
- type: mrr_at_100
value: 43.434
- type: mrr_at_1000
value: 43.494
- type: mrr_at_3
value: 40.126
- type: mrr_at_5
value: 41.473
- type: ndcg_at_1
value: 33.904
- type: ndcg_at_10
value: 42.414
- type: ndcg_at_100
value: 48.203
- type: ndcg_at_1000
value: 50.437
- type: ndcg_at_3
value: 37.633
- type: ndcg_at_5
value: 39.67
- type: precision_at_1
value: 33.904
- type: precision_at_10
value: 7.82
- type: precision_at_100
value: 1.2409999999999999
- type: precision_at_1000
value: 0.159
- type: precision_at_3
value: 17.884
- type: precision_at_5
value: 12.648000000000001
- type: recall_at_1
value: 27.104
- type: recall_at_10
value: 53.563
- type: recall_at_100
value: 78.557
- type: recall_at_1000
value: 93.533
- type: recall_at_3
value: 39.92
- type: recall_at_5
value: 45.457
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.707749999999997
- type: map_at_10
value: 36.961
- type: map_at_100
value: 38.158833333333334
- type: map_at_1000
value: 38.270333333333326
- type: map_at_3
value: 34.07183333333334
- type: map_at_5
value: 35.69533333333334
- type: mrr_at_1
value: 32.81875
- type: mrr_at_10
value: 41.293
- type: mrr_at_100
value: 42.116499999999995
- type: mrr_at_1000
value: 42.170249999999996
- type: mrr_at_3
value: 38.83983333333333
- type: mrr_at_5
value: 40.29775
- type: ndcg_at_1
value: 32.81875
- type: ndcg_at_10
value: 42.355
- type: ndcg_at_100
value: 47.41374999999999
- type: ndcg_at_1000
value: 49.5805
- type: ndcg_at_3
value: 37.52825
- type: ndcg_at_5
value: 39.83266666666667
- type: precision_at_1
value: 32.81875
- type: precision_at_10
value: 7.382416666666666
- type: precision_at_100
value: 1.1640833333333334
- type: precision_at_1000
value: 0.15383333333333335
- type: precision_at_3
value: 17.134166666666665
- type: precision_at_5
value: 12.174833333333336
- type: recall_at_1
value: 27.707749999999997
- type: recall_at_10
value: 53.945
- type: recall_at_100
value: 76.191
- type: recall_at_1000
value: 91.101
- type: recall_at_3
value: 40.39083333333334
- type: recall_at_5
value: 46.40083333333333
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.482
- type: map_at_10
value: 33.201
- type: map_at_100
value: 34.107
- type: map_at_1000
value: 34.197
- type: map_at_3
value: 31.174000000000003
- type: map_at_5
value: 32.279
- type: mrr_at_1
value: 29.908
- type: mrr_at_10
value: 36.235
- type: mrr_at_100
value: 37.04
- type: mrr_at_1000
value: 37.105
- type: mrr_at_3
value: 34.355999999999995
- type: mrr_at_5
value: 35.382999999999996
- type: ndcg_at_1
value: 29.908
- type: ndcg_at_10
value: 37.325
- type: ndcg_at_100
value: 41.795
- type: ndcg_at_1000
value: 44.105
- type: ndcg_at_3
value: 33.555
- type: ndcg_at_5
value: 35.266999999999996
- type: precision_at_1
value: 29.908
- type: precision_at_10
value: 5.721
- type: precision_at_100
value: 0.8630000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 14.008000000000001
- type: precision_at_5
value: 9.754999999999999
- type: recall_at_1
value: 26.482
- type: recall_at_10
value: 47.072
- type: recall_at_100
value: 67.27
- type: recall_at_1000
value: 84.371
- type: recall_at_3
value: 36.65
- type: recall_at_5
value: 40.774
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.815
- type: map_at_10
value: 26.369999999999997
- type: map_at_100
value: 27.458
- type: map_at_1000
value: 27.588
- type: map_at_3
value: 23.990000000000002
- type: map_at_5
value: 25.345000000000002
- type: mrr_at_1
value: 22.953000000000003
- type: mrr_at_10
value: 30.342999999999996
- type: mrr_at_100
value: 31.241000000000003
- type: mrr_at_1000
value: 31.319000000000003
- type: mrr_at_3
value: 28.16
- type: mrr_at_5
value: 29.406
- type: ndcg_at_1
value: 22.953000000000003
- type: ndcg_at_10
value: 31.151
- type: ndcg_at_100
value: 36.309000000000005
- type: ndcg_at_1000
value: 39.227000000000004
- type: ndcg_at_3
value: 26.921
- type: ndcg_at_5
value: 28.938000000000002
- type: precision_at_1
value: 22.953000000000003
- type: precision_at_10
value: 5.602
- type: precision_at_100
value: 0.9530000000000001
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 12.606
- type: precision_at_5
value: 9.119
- type: recall_at_1
value: 18.815
- type: recall_at_10
value: 41.574
- type: recall_at_100
value: 64.84400000000001
- type: recall_at_1000
value: 85.406
- type: recall_at_3
value: 29.694
- type: recall_at_5
value: 34.935
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.840999999999998
- type: map_at_10
value: 36.797999999999995
- type: map_at_100
value: 37.993
- type: map_at_1000
value: 38.086999999999996
- type: map_at_3
value: 34.050999999999995
- type: map_at_5
value: 35.379
- type: mrr_at_1
value: 32.649
- type: mrr_at_10
value: 41.025
- type: mrr_at_100
value: 41.878
- type: mrr_at_1000
value: 41.929
- type: mrr_at_3
value: 38.573
- type: mrr_at_5
value: 39.715
- type: ndcg_at_1
value: 32.649
- type: ndcg_at_10
value: 42.142
- type: ndcg_at_100
value: 47.558
- type: ndcg_at_1000
value: 49.643
- type: ndcg_at_3
value: 37.12
- type: ndcg_at_5
value: 38.983000000000004
- type: precision_at_1
value: 32.649
- type: precision_at_10
value: 7.08
- type: precision_at_100
value: 1.1039999999999999
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 16.698
- type: precision_at_5
value: 11.511000000000001
- type: recall_at_1
value: 27.840999999999998
- type: recall_at_10
value: 54.245
- type: recall_at_100
value: 77.947
- type: recall_at_1000
value: 92.36999999999999
- type: recall_at_3
value: 40.146
- type: recall_at_5
value: 44.951
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.529000000000003
- type: map_at_10
value: 35.010000000000005
- type: map_at_100
value: 36.647
- type: map_at_1000
value: 36.857
- type: map_at_3
value: 31.968000000000004
- type: map_at_5
value: 33.554
- type: mrr_at_1
value: 31.818
- type: mrr_at_10
value: 39.550999999999995
- type: mrr_at_100
value: 40.54
- type: mrr_at_1000
value: 40.596
- type: mrr_at_3
value: 36.726
- type: mrr_at_5
value: 38.416
- type: ndcg_at_1
value: 31.818
- type: ndcg_at_10
value: 40.675
- type: ndcg_at_100
value: 46.548
- type: ndcg_at_1000
value: 49.126
- type: ndcg_at_3
value: 35.829
- type: ndcg_at_5
value: 38.0
- type: precision_at_1
value: 31.818
- type: precision_at_10
value: 7.826
- type: precision_at_100
value: 1.538
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 16.601
- type: precision_at_5
value: 12.095
- type: recall_at_1
value: 26.529000000000003
- type: recall_at_10
value: 51.03
- type: recall_at_100
value: 77.556
- type: recall_at_1000
value: 93.804
- type: recall_at_3
value: 36.986000000000004
- type: recall_at_5
value: 43.096000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.480999999999998
- type: map_at_10
value: 30.817
- type: map_at_100
value: 31.838
- type: map_at_1000
value: 31.932
- type: map_at_3
value: 28.011999999999997
- type: map_at_5
value: 29.668
- type: mrr_at_1
value: 25.323
- type: mrr_at_10
value: 33.072
- type: mrr_at_100
value: 33.926
- type: mrr_at_1000
value: 33.993
- type: mrr_at_3
value: 30.436999999999998
- type: mrr_at_5
value: 32.092
- type: ndcg_at_1
value: 25.323
- type: ndcg_at_10
value: 35.514
- type: ndcg_at_100
value: 40.489000000000004
- type: ndcg_at_1000
value: 42.908
- type: ndcg_at_3
value: 30.092000000000002
- type: ndcg_at_5
value: 32.989000000000004
- type: precision_at_1
value: 25.323
- type: precision_at_10
value: 5.545
- type: precision_at_100
value: 0.861
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 12.446
- type: precision_at_5
value: 9.131
- type: recall_at_1
value: 23.480999999999998
- type: recall_at_10
value: 47.825
- type: recall_at_100
value: 70.652
- type: recall_at_1000
value: 88.612
- type: recall_at_3
value: 33.537
- type: recall_at_5
value: 40.542
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.333999999999998
- type: map_at_10
value: 22.524
- type: map_at_100
value: 24.506
- type: map_at_1000
value: 24.715
- type: map_at_3
value: 19.022
- type: map_at_5
value: 20.693
- type: mrr_at_1
value: 29.186
- type: mrr_at_10
value: 41.22
- type: mrr_at_100
value: 42.16
- type: mrr_at_1000
value: 42.192
- type: mrr_at_3
value: 38.013000000000005
- type: mrr_at_5
value: 39.704
- type: ndcg_at_1
value: 29.186
- type: ndcg_at_10
value: 31.167
- type: ndcg_at_100
value: 38.879000000000005
- type: ndcg_at_1000
value: 42.376000000000005
- type: ndcg_at_3
value: 25.817
- type: ndcg_at_5
value: 27.377000000000002
- type: precision_at_1
value: 29.186
- type: precision_at_10
value: 9.693999999999999
- type: precision_at_100
value: 1.8030000000000002
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 19.11
- type: precision_at_5
value: 14.344999999999999
- type: recall_at_1
value: 13.333999999999998
- type: recall_at_10
value: 37.092000000000006
- type: recall_at_100
value: 63.651
- type: recall_at_1000
value: 83.05
- type: recall_at_3
value: 23.74
- type: recall_at_5
value: 28.655
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.151
- type: map_at_10
value: 19.653000000000002
- type: map_at_100
value: 28.053
- type: map_at_1000
value: 29.709000000000003
- type: map_at_3
value: 14.191
- type: map_at_5
value: 16.456
- type: mrr_at_1
value: 66.25
- type: mrr_at_10
value: 74.4
- type: mrr_at_100
value: 74.715
- type: mrr_at_1000
value: 74.726
- type: mrr_at_3
value: 72.417
- type: mrr_at_5
value: 73.667
- type: ndcg_at_1
value: 54.25
- type: ndcg_at_10
value: 40.77
- type: ndcg_at_100
value: 46.359
- type: ndcg_at_1000
value: 54.193000000000005
- type: ndcg_at_3
value: 44.832
- type: ndcg_at_5
value: 42.63
- type: precision_at_1
value: 66.25
- type: precision_at_10
value: 32.175
- type: precision_at_100
value: 10.668
- type: precision_at_1000
value: 2.067
- type: precision_at_3
value: 47.667
- type: precision_at_5
value: 41.3
- type: recall_at_1
value: 9.151
- type: recall_at_10
value: 25.003999999999998
- type: recall_at_100
value: 52.976
- type: recall_at_1000
value: 78.315
- type: recall_at_3
value: 15.487
- type: recall_at_5
value: 18.999
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.89999999999999
- type: f1
value: 46.47777925067403
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 73.706
- type: map_at_10
value: 82.423
- type: map_at_100
value: 82.67999999999999
- type: map_at_1000
value: 82.694
- type: map_at_3
value: 81.328
- type: map_at_5
value: 82.001
- type: mrr_at_1
value: 79.613
- type: mrr_at_10
value: 87.07000000000001
- type: mrr_at_100
value: 87.169
- type: mrr_at_1000
value: 87.17
- type: mrr_at_3
value: 86.404
- type: mrr_at_5
value: 86.856
- type: ndcg_at_1
value: 79.613
- type: ndcg_at_10
value: 86.289
- type: ndcg_at_100
value: 87.201
- type: ndcg_at_1000
value: 87.428
- type: ndcg_at_3
value: 84.625
- type: ndcg_at_5
value: 85.53699999999999
- type: precision_at_1
value: 79.613
- type: precision_at_10
value: 10.399
- type: precision_at_100
value: 1.1079999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.473
- type: precision_at_5
value: 20.132
- type: recall_at_1
value: 73.706
- type: recall_at_10
value: 93.559
- type: recall_at_100
value: 97.188
- type: recall_at_1000
value: 98.555
- type: recall_at_3
value: 88.98700000000001
- type: recall_at_5
value: 91.373
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.841
- type: map_at_10
value: 32.643
- type: map_at_100
value: 34.575
- type: map_at_1000
value: 34.736
- type: map_at_3
value: 28.317999999999998
- type: map_at_5
value: 30.964000000000002
- type: mrr_at_1
value: 39.660000000000004
- type: mrr_at_10
value: 48.620000000000005
- type: mrr_at_100
value: 49.384
- type: mrr_at_1000
value: 49.415
- type: mrr_at_3
value: 45.988
- type: mrr_at_5
value: 47.361
- type: ndcg_at_1
value: 39.660000000000004
- type: ndcg_at_10
value: 40.646
- type: ndcg_at_100
value: 47.657
- type: ndcg_at_1000
value: 50.428
- type: ndcg_at_3
value: 36.689
- type: ndcg_at_5
value: 38.211
- type: precision_at_1
value: 39.660000000000004
- type: precision_at_10
value: 11.235000000000001
- type: precision_at_100
value: 1.8530000000000002
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 24.587999999999997
- type: precision_at_5
value: 18.395
- type: recall_at_1
value: 19.841
- type: recall_at_10
value: 48.135
- type: recall_at_100
value: 74.224
- type: recall_at_1000
value: 90.826
- type: recall_at_3
value: 33.536
- type: recall_at_5
value: 40.311
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.358
- type: map_at_10
value: 64.497
- type: map_at_100
value: 65.362
- type: map_at_1000
value: 65.41900000000001
- type: map_at_3
value: 61.06700000000001
- type: map_at_5
value: 63.317
- type: mrr_at_1
value: 80.716
- type: mrr_at_10
value: 86.10799999999999
- type: mrr_at_100
value: 86.265
- type: mrr_at_1000
value: 86.27
- type: mrr_at_3
value: 85.271
- type: mrr_at_5
value: 85.82499999999999
- type: ndcg_at_1
value: 80.716
- type: ndcg_at_10
value: 72.597
- type: ndcg_at_100
value: 75.549
- type: ndcg_at_1000
value: 76.61
- type: ndcg_at_3
value: 67.874
- type: ndcg_at_5
value: 70.655
- type: precision_at_1
value: 80.716
- type: precision_at_10
value: 15.148
- type: precision_at_100
value: 1.745
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 43.597
- type: precision_at_5
value: 28.351
- type: recall_at_1
value: 40.358
- type: recall_at_10
value: 75.739
- type: recall_at_100
value: 87.259
- type: recall_at_1000
value: 94.234
- type: recall_at_3
value: 65.39500000000001
- type: recall_at_5
value: 70.878
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.80799999999998
- type: ap
value: 86.81350378180757
- type: f1
value: 90.79901248314215
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.096
- type: map_at_10
value: 34.384
- type: map_at_100
value: 35.541
- type: map_at_1000
value: 35.589999999999996
- type: map_at_3
value: 30.496000000000002
- type: map_at_5
value: 32.718
- type: mrr_at_1
value: 22.750999999999998
- type: mrr_at_10
value: 35.024
- type: mrr_at_100
value: 36.125
- type: mrr_at_1000
value: 36.168
- type: mrr_at_3
value: 31.225
- type: mrr_at_5
value: 33.416000000000004
- type: ndcg_at_1
value: 22.750999999999998
- type: ndcg_at_10
value: 41.351
- type: ndcg_at_100
value: 46.92
- type: ndcg_at_1000
value: 48.111
- type: ndcg_at_3
value: 33.439
- type: ndcg_at_5
value: 37.407000000000004
- type: precision_at_1
value: 22.750999999999998
- type: precision_at_10
value: 6.564
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.288
- type: precision_at_5
value: 10.581999999999999
- type: recall_at_1
value: 22.096
- type: recall_at_10
value: 62.771
- type: recall_at_100
value: 88.529
- type: recall_at_1000
value: 97.55
- type: recall_at_3
value: 41.245
- type: recall_at_5
value: 50.788
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.16780665754673
- type: f1
value: 93.96331194859894
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.90606475148198
- type: f1
value: 58.58344986604187
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.14660390047075
- type: f1
value: 74.31533923533614
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.16139878950908
- type: f1
value: 80.18532656824924
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.949880906135085
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.56300351524862
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.196521894371315
- type: mrr
value: 32.22644231694389
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.783
- type: map_at_10
value: 14.549000000000001
- type: map_at_100
value: 18.433
- type: map_at_1000
value: 19.949
- type: map_at_3
value: 10.936
- type: map_at_5
value: 12.514
- type: mrr_at_1
value: 47.368
- type: mrr_at_10
value: 56.42
- type: mrr_at_100
value: 56.908
- type: mrr_at_1000
value: 56.95
- type: mrr_at_3
value: 54.283
- type: mrr_at_5
value: 55.568
- type: ndcg_at_1
value: 45.666000000000004
- type: ndcg_at_10
value: 37.389
- type: ndcg_at_100
value: 34.253
- type: ndcg_at_1000
value: 43.059999999999995
- type: ndcg_at_3
value: 42.725
- type: ndcg_at_5
value: 40.193
- type: precision_at_1
value: 47.368
- type: precision_at_10
value: 27.988000000000003
- type: precision_at_100
value: 8.672
- type: precision_at_1000
value: 2.164
- type: precision_at_3
value: 40.248
- type: precision_at_5
value: 34.737
- type: recall_at_1
value: 6.783
- type: recall_at_10
value: 17.838
- type: recall_at_100
value: 33.672000000000004
- type: recall_at_1000
value: 66.166
- type: recall_at_3
value: 11.849
- type: recall_at_5
value: 14.205000000000002
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.698999999999998
- type: map_at_10
value: 46.556
- type: map_at_100
value: 47.652
- type: map_at_1000
value: 47.68
- type: map_at_3
value: 42.492000000000004
- type: map_at_5
value: 44.763999999999996
- type: mrr_at_1
value: 35.747
- type: mrr_at_10
value: 49.242999999999995
- type: mrr_at_100
value: 50.052
- type: mrr_at_1000
value: 50.068
- type: mrr_at_3
value: 45.867000000000004
- type: mrr_at_5
value: 47.778999999999996
- type: ndcg_at_1
value: 35.717999999999996
- type: ndcg_at_10
value: 54.14600000000001
- type: ndcg_at_100
value: 58.672999999999995
- type: ndcg_at_1000
value: 59.279
- type: ndcg_at_3
value: 46.407
- type: ndcg_at_5
value: 50.181
- type: precision_at_1
value: 35.717999999999996
- type: precision_at_10
value: 8.844000000000001
- type: precision_at_100
value: 1.139
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 20.993000000000002
- type: precision_at_5
value: 14.791000000000002
- type: recall_at_1
value: 31.698999999999998
- type: recall_at_10
value: 74.693
- type: recall_at_100
value: 94.15299999999999
- type: recall_at_1000
value: 98.585
- type: recall_at_3
value: 54.388999999999996
- type: recall_at_5
value: 63.08200000000001
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.283
- type: map_at_10
value: 85.24000000000001
- type: map_at_100
value: 85.882
- type: map_at_1000
value: 85.897
- type: map_at_3
value: 82.326
- type: map_at_5
value: 84.177
- type: mrr_at_1
value: 82.21000000000001
- type: mrr_at_10
value: 88.228
- type: mrr_at_100
value: 88.32
- type: mrr_at_1000
value: 88.32
- type: mrr_at_3
value: 87.323
- type: mrr_at_5
value: 87.94800000000001
- type: ndcg_at_1
value: 82.17999999999999
- type: ndcg_at_10
value: 88.9
- type: ndcg_at_100
value: 90.079
- type: ndcg_at_1000
value: 90.158
- type: ndcg_at_3
value: 86.18299999999999
- type: ndcg_at_5
value: 87.71799999999999
- type: precision_at_1
value: 82.17999999999999
- type: precision_at_10
value: 13.464
- type: precision_at_100
value: 1.533
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.693
- type: precision_at_5
value: 24.792
- type: recall_at_1
value: 71.283
- type: recall_at_10
value: 95.742
- type: recall_at_100
value: 99.67200000000001
- type: recall_at_1000
value: 99.981
- type: recall_at_3
value: 87.888
- type: recall_at_5
value: 92.24
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.24267063669042
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.88056988932578
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.903
- type: map_at_10
value: 13.202
- type: map_at_100
value: 15.5
- type: map_at_1000
value: 15.870999999999999
- type: map_at_3
value: 9.407
- type: map_at_5
value: 11.238
- type: mrr_at_1
value: 24.2
- type: mrr_at_10
value: 35.867
- type: mrr_at_100
value: 37.001
- type: mrr_at_1000
value: 37.043
- type: mrr_at_3
value: 32.5
- type: mrr_at_5
value: 34.35
- type: ndcg_at_1
value: 24.2
- type: ndcg_at_10
value: 21.731
- type: ndcg_at_100
value: 30.7
- type: ndcg_at_1000
value: 36.618
- type: ndcg_at_3
value: 20.72
- type: ndcg_at_5
value: 17.954
- type: precision_at_1
value: 24.2
- type: precision_at_10
value: 11.33
- type: precision_at_100
value: 2.4410000000000003
- type: precision_at_1000
value: 0.386
- type: precision_at_3
value: 19.667
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 4.903
- type: recall_at_10
value: 22.962
- type: recall_at_100
value: 49.563
- type: recall_at_1000
value: 78.238
- type: recall_at_3
value: 11.953
- type: recall_at_5
value: 16.067999999999998
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.12694254604078
- type: cos_sim_spearman
value: 80.30141815181918
- type: euclidean_pearson
value: 81.34015449877128
- type: euclidean_spearman
value: 80.13984197010849
- type: manhattan_pearson
value: 81.31767068124086
- type: manhattan_spearman
value: 80.11720513114103
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.13112984010417
- type: cos_sim_spearman
value: 78.03063573402875
- type: euclidean_pearson
value: 83.51928418844804
- type: euclidean_spearman
value: 78.4045235411144
- type: manhattan_pearson
value: 83.49981637388689
- type: manhattan_spearman
value: 78.4042575139372
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 82.50327987379504
- type: cos_sim_spearman
value: 84.18556767756205
- type: euclidean_pearson
value: 82.69684424327679
- type: euclidean_spearman
value: 83.5368106038335
- type: manhattan_pearson
value: 82.57967581007374
- type: manhattan_spearman
value: 83.43009053133697
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.50756863007814
- type: cos_sim_spearman
value: 82.27204331279108
- type: euclidean_pearson
value: 81.39535251429741
- type: euclidean_spearman
value: 81.84386626336239
- type: manhattan_pearson
value: 81.34281737280695
- type: manhattan_spearman
value: 81.81149375673166
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.8727714856726
- type: cos_sim_spearman
value: 87.95738287792312
- type: euclidean_pearson
value: 86.62920602795887
- type: euclidean_spearman
value: 87.05207355381243
- type: manhattan_pearson
value: 86.53587918472225
- type: manhattan_spearman
value: 86.95382961029586
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.52240359769479
- type: cos_sim_spearman
value: 85.47685776238286
- type: euclidean_pearson
value: 84.25815333483058
- type: euclidean_spearman
value: 85.27415639683198
- type: manhattan_pearson
value: 84.29127757025637
- type: manhattan_spearman
value: 85.30226224917351
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.42501708915708
- type: cos_sim_spearman
value: 86.42276182795041
- type: euclidean_pearson
value: 86.5408207354761
- type: euclidean_spearman
value: 85.46096321750838
- type: manhattan_pearson
value: 86.54177303026881
- type: manhattan_spearman
value: 85.50313151916117
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.86521089250766
- type: cos_sim_spearman
value: 65.94868540323003
- type: euclidean_pearson
value: 67.16569626533084
- type: euclidean_spearman
value: 66.37667004134917
- type: manhattan_pearson
value: 67.1482365102333
- type: manhattan_spearman
value: 66.53240122580029
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.64746265365318
- type: cos_sim_spearman
value: 86.41888825906786
- type: euclidean_pearson
value: 85.27453642725811
- type: euclidean_spearman
value: 85.94095796602544
- type: manhattan_pearson
value: 85.28643660505334
- type: manhattan_spearman
value: 85.95028003260744
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.48903153618527
- type: mrr
value: 96.41081503826601
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 58.594
- type: map_at_10
value: 69.296
- type: map_at_100
value: 69.782
- type: map_at_1000
value: 69.795
- type: map_at_3
value: 66.23
- type: map_at_5
value: 68.293
- type: mrr_at_1
value: 61.667
- type: mrr_at_10
value: 70.339
- type: mrr_at_100
value: 70.708
- type: mrr_at_1000
value: 70.722
- type: mrr_at_3
value: 68.0
- type: mrr_at_5
value: 69.56700000000001
- type: ndcg_at_1
value: 61.667
- type: ndcg_at_10
value: 74.039
- type: ndcg_at_100
value: 76.103
- type: ndcg_at_1000
value: 76.47800000000001
- type: ndcg_at_3
value: 68.967
- type: ndcg_at_5
value: 71.96900000000001
- type: precision_at_1
value: 61.667
- type: precision_at_10
value: 9.866999999999999
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.111
- type: precision_at_5
value: 18.2
- type: recall_at_1
value: 58.594
- type: recall_at_10
value: 87.422
- type: recall_at_100
value: 96.667
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 74.217
- type: recall_at_5
value: 81.539
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.85049504950496
- type: cos_sim_ap
value: 96.33111544137081
- type: cos_sim_f1
value: 92.35443037974684
- type: cos_sim_precision
value: 93.53846153846153
- type: cos_sim_recall
value: 91.2
- type: dot_accuracy
value: 99.82376237623762
- type: dot_ap
value: 95.38082527310888
- type: dot_f1
value: 90.90909090909092
- type: dot_precision
value: 92.90187891440502
- type: dot_recall
value: 89.0
- type: euclidean_accuracy
value: 99.84851485148515
- type: euclidean_ap
value: 96.32316003996347
- type: euclidean_f1
value: 92.2071392659628
- type: euclidean_precision
value: 92.71991911021233
- type: euclidean_recall
value: 91.7
- type: manhattan_accuracy
value: 99.84851485148515
- type: manhattan_ap
value: 96.3655668249217
- type: manhattan_f1
value: 92.18356026222895
- type: manhattan_precision
value: 92.98067141403867
- type: manhattan_recall
value: 91.4
- type: max_accuracy
value: 99.85049504950496
- type: max_ap
value: 96.3655668249217
- type: max_f1
value: 92.35443037974684
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.94861371629051
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.009430451385
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.61164066427969
- type: mrr
value: 55.49710603938544
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.622620124907662
- type: cos_sim_spearman
value: 31.0678351356163
- type: dot_pearson
value: 30.863727693306814
- type: dot_spearman
value: 31.230306567021255
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 2.011
- type: map_at_100
value: 10.974
- type: map_at_1000
value: 25.819
- type: map_at_3
value: 0.6649999999999999
- type: map_at_5
value: 1.076
- type: mrr_at_1
value: 86.0
- type: mrr_at_10
value: 91.8
- type: mrr_at_100
value: 91.8
- type: mrr_at_1000
value: 91.8
- type: mrr_at_3
value: 91.0
- type: mrr_at_5
value: 91.8
- type: ndcg_at_1
value: 82.0
- type: ndcg_at_10
value: 78.07300000000001
- type: ndcg_at_100
value: 58.231
- type: ndcg_at_1000
value: 51.153000000000006
- type: ndcg_at_3
value: 81.123
- type: ndcg_at_5
value: 81.059
- type: precision_at_1
value: 86.0
- type: precision_at_10
value: 83.0
- type: precision_at_100
value: 59.38
- type: precision_at_1000
value: 22.55
- type: precision_at_3
value: 87.333
- type: precision_at_5
value: 86.8
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 2.2079999999999997
- type: recall_at_100
value: 14.069
- type: recall_at_1000
value: 47.678
- type: recall_at_3
value: 0.7040000000000001
- type: recall_at_5
value: 1.161
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.809
- type: map_at_10
value: 10.394
- type: map_at_100
value: 16.598
- type: map_at_1000
value: 18.142
- type: map_at_3
value: 5.572
- type: map_at_5
value: 7.1370000000000005
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 46.564
- type: mrr_at_100
value: 47.469
- type: mrr_at_1000
value: 47.469
- type: mrr_at_3
value: 42.177
- type: mrr_at_5
value: 44.524
- type: ndcg_at_1
value: 30.612000000000002
- type: ndcg_at_10
value: 25.701
- type: ndcg_at_100
value: 37.532
- type: ndcg_at_1000
value: 48.757
- type: ndcg_at_3
value: 28.199999999999996
- type: ndcg_at_5
value: 25.987
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 23.469
- type: precision_at_100
value: 7.9799999999999995
- type: precision_at_1000
value: 1.5350000000000001
- type: precision_at_3
value: 29.932
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 2.809
- type: recall_at_10
value: 16.887
- type: recall_at_100
value: 48.67
- type: recall_at_1000
value: 82.89699999999999
- type: recall_at_3
value: 6.521000000000001
- type: recall_at_5
value: 9.609
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.57860000000001
- type: ap
value: 13.82629211536393
- type: f1
value: 54.59860966183956
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.38030560271647
- type: f1
value: 59.69685552567865
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.4736717043405
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.92853311080646
- type: cos_sim_ap
value: 77.67872502591382
- type: cos_sim_f1
value: 70.33941236068895
- type: cos_sim_precision
value: 67.63273258645884
- type: cos_sim_recall
value: 73.27176781002639
- type: dot_accuracy
value: 85.79603027954938
- type: dot_ap
value: 73.73786190233379
- type: dot_f1
value: 67.3437901774235
- type: dot_precision
value: 65.67201604814443
- type: dot_recall
value: 69.10290237467018
- type: euclidean_accuracy
value: 86.94045419324074
- type: euclidean_ap
value: 77.6687791535167
- type: euclidean_f1
value: 70.47209214023542
- type: euclidean_precision
value: 67.7207492094381
- type: euclidean_recall
value: 73.45646437994723
- type: manhattan_accuracy
value: 86.87488823985218
- type: manhattan_ap
value: 77.63373392430728
- type: manhattan_f1
value: 70.40920716112532
- type: manhattan_precision
value: 68.31265508684864
- type: manhattan_recall
value: 72.63852242744063
- type: max_accuracy
value: 86.94045419324074
- type: max_ap
value: 77.67872502591382
- type: max_f1
value: 70.47209214023542
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.67155664221679
- type: cos_sim_ap
value: 85.64591703003417
- type: cos_sim_f1
value: 77.59531005352656
- type: cos_sim_precision
value: 73.60967184801382
- type: cos_sim_recall
value: 82.03726516784724
- type: dot_accuracy
value: 88.41541506578181
- type: dot_ap
value: 84.6482788957769
- type: dot_f1
value: 77.04748541466657
- type: dot_precision
value: 74.02440754931176
- type: dot_recall
value: 80.3279950723745
- type: euclidean_accuracy
value: 88.63080684596576
- type: euclidean_ap
value: 85.44570045321562
- type: euclidean_f1
value: 77.28769403336106
- type: euclidean_precision
value: 72.90600040958427
- type: euclidean_recall
value: 82.22975053895904
- type: manhattan_accuracy
value: 88.59393798269105
- type: manhattan_ap
value: 85.40271361038187
- type: manhattan_f1
value: 77.17606419344392
- type: manhattan_precision
value: 72.4447747078295
- type: manhattan_recall
value: 82.5685247921158
- type: max_accuracy
value: 88.67155664221679
- type: max_ap
value: 85.64591703003417
- type: max_f1
value: 77.59531005352656
license: mit
language:
- en
---
**This repo contains the model exported to ONNX weights.**
**Everything is provided as-is.**
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
        <a href="#model-list">Model List</a> |
        <a href="#frequently-asked-questions">FAQ</a> |
        <a href="#usage">Usage</a>  |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
    </p>
</h4>
For more details, please refer to our GitHub repo: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding can map any text to a low-dimensional dense vector, which can be used for tasks like retrieval, classification, clustering, and semantic search.
It can also be used in vector databases for LLMs.
************* 🌟**Updates**🌟 *************
- 10/12/2023: Release [LLM-Embedder](./FlagEmbedding/llm_embedder/README.md), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Paper](https://arxiv.org/pdf/2310.07554.pdf) :fire:
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released
- 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released
- 09/12/2023: New models:
- **New reranker models**: release the cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than embedding models. We recommend using/fine-tuning them to re-rank the top-k documents returned by embedding models.
- **Updated embedding models**: release the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance retrieval ability without instruction.
<details>
<summary>More</summary>
<!-- ### More -->
- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): add a script to mine hard negatives and support adding an instruction during fine-tuning.
- 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [this](#using-langchain); C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, **best performance among models of the same size 🤗**
- 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
</details>
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
[1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
[2\]: Different from an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, cross-encoders are widely used to re-rank the top-k documents retrieved by other, simpler models.
For example, use a bge embedding model to retrieve the top 100 relevant documents, and then use a bge reranker to re-rank those 100 documents and obtain the final top 3 results.
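The retrieve-then-rerank pipeline above can be sketched as follows. This is a minimal illustration only: the toy word-overlap scorers stand in for the bge embedding model and the bge reranker, which in practice would be model calls (e.g. `FlagModel.encode` plus a dot product, and `FlagReranker.compute_score`).

```python
def embed_score(query: str, doc: str) -> float:
    # Stand-in for a fast bi-encoder score (dot product of bge embeddings).
    return float(len(set(query.split()) & set(doc.split())))

def rerank_score(query: str, doc: str) -> float:
    # Stand-in for a slower, more accurate cross-encoder score (bge-reranker).
    return float(len(set(query.lower().split()) & set(doc.lower().split())))

def retrieve_then_rerank(query, corpus, k_retrieve=100, k_final=3):
    # Stage 1: cheap retrieval over the whole corpus with the embedding model.
    candidates = sorted(corpus, key=lambda d: embed_score(query, d), reverse=True)[:k_retrieve]
    # Stage 2: expensive re-ranking of the small shortlist with the cross-encoder.
    return sorted(candidates, key=lambda d: rerank_score(query, d), reverse=True)[:k_final]
```

The key design point is that the expensive cross-encoder only ever sees the small shortlist, not the whole corpus.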
All models have been uploaded to the Huggingface Hub, and you can see them at https://huggingface.co/BAAI.
If you cannot access the Huggingface Hub, you can also download the models at https://model.baai.ac.cn/models .
## Frequently Asked Questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve retrieval performance.
- If you pre-train bge on your own data, the pre-trained model cannot be used to compute similarity directly; it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high enough, we recommend using/fine-tuning the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**
Since we fine-tune the models with contrastive learning at a temperature of 0.01,
the similarity distribution of the current BGE models lies roughly in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not their absolute values.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate threshold based on the similarity distribution on your own data (such as 0.8, 0.85, or even 0.9).
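As a concrete sketch of threshold-based filtering: random unit vectors stand in for real bge embeddings (bge embeddings are normalized, so the dot product equals cosine similarity), and the 0.85 threshold is only an illustrative choice.

```python
import numpy as np

def normalize(x):
    # bge embeddings are unit-length; normalize the toy vectors the same way.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
query_emb = normalize(rng.normal(size=(1, 8)))      # stand-in for an encoded query
passage_embs = normalize(rng.normal(size=(5, 8)))   # stand-ins for encoded passages

# Cosine similarity via dot product of normalized vectors.
scores = (query_emb @ passage_embs.T).ravel()

# Choose the threshold from the score distribution on your own data,
# not from the absolute 0.5 midpoint.
threshold = 0.85
kept = [i for i, s in enumerate(scores) if s >= threshold]
```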
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For `bge-*-v1.5` models, we improved retrieval ability when no instruction is used;
omitting the instruction causes only a slight degradation in retrieval performance compared with using one.
So, for convenience, you can generate embeddings without an instruction in all cases.
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
**The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
In all cases, the documents/passages do not need an instruction.
</details>
## Usage
### Usage for Embedding Model
Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If that doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more ways to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# For the s2p (short query to long passage) retrieval task, use encode_queries(), which automatically adds the instruction to each query
# The corpus in a retrieval task can still use encode() or encode_corpus(), since passages don't need the instruction
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel uses all available GPUs when encoding. Set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
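For example (a sketch; the variable must be set before the model is constructed):

```python
import os

# Use only GPU 0 for encoding (set this before creating FlagModel):
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# To make all GPUs unavailable and encode on CPU instead:
# os.environ["CUDA_VISIBLE_DEVICES"] = ""
```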
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For the s2p (short query to long passage) retrieval task,
each short query should start with an instruction (for instructions, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)).
The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in LangChain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```
#### Using HuggingFace Transformers
With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
### Usage for Reranker
Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with cross-entropy loss, so the relevance score is not bounded to a specific range.
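If you do need a score in \[0, 1\], a common (optional) post-processing step is to apply a sigmoid to the raw logit. A minimal sketch:

```python
import math

def to_probability(logit: float) -> float:
    """Map an unbounded reranker logit to the (0, 1) range via a sigmoid."""
    return 1.0 / (1.0 + math.exp(-logit))

print(to_probability(0.0))  # 0.5
```

Note that this does not change the ranking; it only rescales the scores.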
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
#### Using HuggingFace Transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both MTEB and C-MTEB leaderboard!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the benchmark C-MTEB for Chinese text embeddings, which consists of 31 datasets across 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
## Train
### BAAI Embedding
We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale paired data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned.
For more training details for bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
### BGE Reranker
A cross-encoder performs full attention over the input pair,
which is more accurate than the embedding model (i.e., a bi-encoder) but more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by the embedding model.
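The retrieve-then-rerank pattern can be sketched as below; the two toy scorers are stand-ins for the embedding dot product and for `FlagReranker.compute_score`, respectively (illustrative only):

```python
corpus = ["doc about pandas", "doc about cars", "doc about bears"]

def embed_score(query, doc):
    # Stand-in for q_embedding @ d_embedding (fast bi-encoder).
    return len(set(query.split()) & set(doc.split()))

def cross_score(query, doc):
    # Stand-in for the cross-encoder logit (slower but more accurate).
    return embed_score(query, doc) + (0.5 if "bears" in doc else 0.0)

query = "bears pandas"
# Stage 1: the bi-encoder narrows the corpus to top-k candidates.
top_k = sorted(corpus, key=lambda d: embed_score(query, d), reverse=True)[:2]
# Stage 2: the cross-encoder re-orders only those k candidates.
reranked = sorted(top_k, key=lambda d: cross_score(query, d), reverse=True)
print(reranked[0])  # "doc about bears"
```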
We train the cross-encoder on multilingual paired data.
The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker).
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao (stxiao@baai.ac.cn) and Zheng Liu (liuzheng@baai.ac.cn).
## Citation
If you find this repository useful, please consider giving it a star :star: and a citation
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
|
ysouidi/my_awesome_model_concatenate_event_name_event_description
|
ysouidi
| 2023-12-25T16:48:24Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-25T16:42:45Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model_concatenate_event_name_event_description
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model_concatenate_event_name_event_description
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7611
- Accuracy: 0.8431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 13 | 0.9922 | 0.5490 |
| No log | 2.0 | 26 | 0.8272 | 0.7843 |
| No log | 3.0 | 39 | 0.7611 | 0.8431 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cpu
- Datasets 2.16.0
- Tokenizers 0.13.2
|
ysouidi/my_awesome_model_concatenate
|
ysouidi
| 2023-12-25T16:39:51Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-25T14:53:37Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model_concatenate
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model_concatenate
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9449
- Accuracy: 0.6078
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 13 | 0.9961 | 0.6275 |
| No log | 2.0 | 26 | 0.9449 | 0.6078 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cpu
- Datasets 2.16.0
- Tokenizers 0.13.2
|
hkivancoral/hushem_40x_deit_small_sgd_001_fold2
|
hkivancoral
| 2023-12-25T16:38:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T16:22:39Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_sgd_001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7333333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_sgd_001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9318
- Accuracy: 0.7333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1891 | 1.0 | 215 | 1.3300 | 0.3778 |
| 0.9647 | 2.0 | 430 | 1.2794 | 0.4444 |
| 0.8581 | 3.0 | 645 | 1.2244 | 0.5111 |
| 0.699 | 4.0 | 860 | 1.1784 | 0.5333 |
| 0.6158 | 5.0 | 1075 | 1.1498 | 0.5111 |
| 0.5391 | 6.0 | 1290 | 1.1059 | 0.5556 |
| 0.4953 | 7.0 | 1505 | 1.0650 | 0.5333 |
| 0.4016 | 8.0 | 1720 | 1.0249 | 0.5556 |
| 0.3397 | 9.0 | 1935 | 0.9796 | 0.6222 |
| 0.3003 | 10.0 | 2150 | 0.9463 | 0.7111 |
| 0.246 | 11.0 | 2365 | 0.9270 | 0.7111 |
| 0.1949 | 12.0 | 2580 | 0.9025 | 0.7111 |
| 0.1895 | 13.0 | 2795 | 0.8872 | 0.7111 |
| 0.1659 | 14.0 | 3010 | 0.8723 | 0.7111 |
| 0.1576 | 15.0 | 3225 | 0.8544 | 0.7111 |
| 0.1305 | 16.0 | 3440 | 0.8521 | 0.7111 |
| 0.1123 | 17.0 | 3655 | 0.8414 | 0.7111 |
| 0.1025 | 18.0 | 3870 | 0.8453 | 0.7111 |
| 0.0749 | 19.0 | 4085 | 0.8597 | 0.7111 |
| 0.0854 | 20.0 | 4300 | 0.8467 | 0.7111 |
| 0.0788 | 21.0 | 4515 | 0.8314 | 0.7111 |
| 0.0675 | 22.0 | 4730 | 0.8392 | 0.7111 |
| 0.0523 | 23.0 | 4945 | 0.8293 | 0.7111 |
| 0.0556 | 24.0 | 5160 | 0.8555 | 0.7111 |
| 0.0483 | 25.0 | 5375 | 0.8566 | 0.7111 |
| 0.0417 | 26.0 | 5590 | 0.8533 | 0.7111 |
| 0.0397 | 27.0 | 5805 | 0.8560 | 0.7333 |
| 0.0302 | 28.0 | 6020 | 0.8587 | 0.7333 |
| 0.0286 | 29.0 | 6235 | 0.8633 | 0.7333 |
| 0.0386 | 30.0 | 6450 | 0.8691 | 0.7333 |
| 0.0212 | 31.0 | 6665 | 0.8693 | 0.7333 |
| 0.0221 | 32.0 | 6880 | 0.8714 | 0.7333 |
| 0.0198 | 33.0 | 7095 | 0.8818 | 0.7333 |
| 0.0189 | 34.0 | 7310 | 0.8880 | 0.7333 |
| 0.0167 | 35.0 | 7525 | 0.8939 | 0.7333 |
| 0.0198 | 36.0 | 7740 | 0.9010 | 0.7333 |
| 0.0157 | 37.0 | 7955 | 0.8988 | 0.7333 |
| 0.0177 | 38.0 | 8170 | 0.9154 | 0.7333 |
| 0.0136 | 39.0 | 8385 | 0.9094 | 0.7333 |
| 0.0108 | 40.0 | 8600 | 0.9213 | 0.7333 |
| 0.0119 | 41.0 | 8815 | 0.9173 | 0.7333 |
| 0.0127 | 42.0 | 9030 | 0.9219 | 0.7333 |
| 0.0095 | 43.0 | 9245 | 0.9256 | 0.7333 |
| 0.0124 | 44.0 | 9460 | 0.9223 | 0.7333 |
| 0.0112 | 45.0 | 9675 | 0.9246 | 0.7333 |
| 0.0112 | 46.0 | 9890 | 0.9266 | 0.7333 |
| 0.0102 | 47.0 | 10105 | 0.9301 | 0.7333 |
| 0.0105 | 48.0 | 10320 | 0.9338 | 0.7333 |
| 0.0119 | 49.0 | 10535 | 0.9314 | 0.7333 |
| 0.0144 | 50.0 | 10750 | 0.9318 | 0.7333 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
soonchang/a2c-PandaReachDense-v3
|
soonchang
| 2023-12-25T16:31:45Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-25T16:27:18Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.18 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ADRIANRICO/Distilbert-finetuned-IMDB
|
ADRIANRICO
| 2023-12-25T16:28:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-25T14:35:51Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: Distilbert-finetuned-IMDB
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9066666666666666
- name: F1
type: f1
value: 0.9065709953659213
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distilbert-finetuned-IMDB
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2736
- Accuracy: 0.9067
- F1: 0.9066
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5314 | 1.0 | 40 | 0.3674 | 0.8453 | 0.8431 |
| 0.2634 | 2.0 | 80 | 0.2709 | 0.888 | 0.8876 |
| 0.1826 | 3.0 | 120 | 0.2656 | 0.8933 | 0.8930 |
| 0.1433 | 4.0 | 160 | 0.2822 | 0.8893 | 0.8890 |
| 0.1062 | 5.0 | 200 | 0.2736 | 0.9067 | 0.9066 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
hkivancoral/hushem_40x_deit_small_sgd_0001_fold1
|
hkivancoral
| 2023-12-25T16:22:55Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T16:07:17Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_sgd_0001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.35555555555555557
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_sgd_0001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3807
- Accuracy: 0.3556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7636 | 1.0 | 215 | 1.5271 | 0.2222 |
| 1.5603 | 2.0 | 430 | 1.4695 | 0.2222 |
| 1.4189 | 3.0 | 645 | 1.4485 | 0.2444 |
| 1.4216 | 4.0 | 860 | 1.4404 | 0.3333 |
| 1.3718 | 5.0 | 1075 | 1.4361 | 0.3333 |
| 1.3271 | 6.0 | 1290 | 1.4331 | 0.3556 |
| 1.3291 | 7.0 | 1505 | 1.4309 | 0.3556 |
| 1.2611 | 8.0 | 1720 | 1.4294 | 0.3333 |
| 1.2392 | 9.0 | 1935 | 1.4281 | 0.3333 |
| 1.2352 | 10.0 | 2150 | 1.4273 | 0.3778 |
| 1.2132 | 11.0 | 2365 | 1.4268 | 0.3778 |
| 1.17 | 12.0 | 2580 | 1.4262 | 0.3778 |
| 1.1599 | 13.0 | 2795 | 1.4258 | 0.3778 |
| 1.1465 | 14.0 | 3010 | 1.4259 | 0.3778 |
| 1.1384 | 15.0 | 3225 | 1.4258 | 0.3556 |
| 1.1196 | 16.0 | 3440 | 1.4258 | 0.3333 |
| 1.1235 | 17.0 | 3655 | 1.4254 | 0.3333 |
| 1.092 | 18.0 | 3870 | 1.4252 | 0.3333 |
| 1.0493 | 19.0 | 4085 | 1.4248 | 0.3333 |
| 1.0602 | 20.0 | 4300 | 1.4241 | 0.2889 |
| 1.0537 | 21.0 | 4515 | 1.4232 | 0.2889 |
| 1.0424 | 22.0 | 4730 | 1.4223 | 0.2889 |
| 1.0373 | 23.0 | 4945 | 1.4208 | 0.2889 |
| 1.0255 | 24.0 | 5160 | 1.4191 | 0.3111 |
| 0.9946 | 25.0 | 5375 | 1.4173 | 0.3111 |
| 0.9526 | 26.0 | 5590 | 1.4155 | 0.3111 |
| 0.961 | 27.0 | 5805 | 1.4133 | 0.3111 |
| 0.9603 | 28.0 | 6020 | 1.4115 | 0.3111 |
| 0.9689 | 29.0 | 6235 | 1.4091 | 0.3111 |
| 0.9155 | 30.0 | 6450 | 1.4068 | 0.3111 |
| 0.9244 | 31.0 | 6665 | 1.4046 | 0.3111 |
| 0.9454 | 32.0 | 6880 | 1.4024 | 0.3111 |
| 0.9669 | 33.0 | 7095 | 1.4003 | 0.3111 |
| 0.935 | 34.0 | 7310 | 1.3982 | 0.3333 |
| 0.887 | 35.0 | 7525 | 1.3962 | 0.3333 |
| 0.9142 | 36.0 | 7740 | 1.3943 | 0.3333 |
| 0.9282 | 37.0 | 7955 | 1.3924 | 0.3333 |
| 0.8935 | 38.0 | 8170 | 1.3908 | 0.3333 |
| 0.9345 | 39.0 | 8385 | 1.3890 | 0.3333 |
| 0.8406 | 40.0 | 8600 | 1.3876 | 0.3333 |
| 0.8885 | 41.0 | 8815 | 1.3862 | 0.3333 |
| 0.9974 | 42.0 | 9030 | 1.3851 | 0.3333 |
| 0.9464 | 43.0 | 9245 | 1.3840 | 0.3333 |
| 0.9071 | 44.0 | 9460 | 1.3830 | 0.3333 |
| 0.9277 | 45.0 | 9675 | 1.3823 | 0.3333 |
| 0.8844 | 46.0 | 9890 | 1.3817 | 0.3333 |
| 0.8843 | 47.0 | 10105 | 1.3812 | 0.3333 |
| 0.9119 | 48.0 | 10320 | 1.3809 | 0.3556 |
| 0.9448 | 49.0 | 10535 | 1.3808 | 0.3556 |
| 0.8919 | 50.0 | 10750 | 1.3807 | 0.3556 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Timmek/favotite_loras
|
Timmek
| 2023-12-25T16:22:21Z | 0 | 0 | null |
[
"art",
"text-to-image",
"ru",
"en",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2023-12-25T16:00:11Z |
---
license: apache-2.0
language:
- ru
- en
metrics:
- character
pipeline_tag: text-to-image
tags:
- art
---
EN: Lora from one very cool CIS LoRA creator.
If you liked it, PLEASE support it by subscribing to the telegram channel! https://t.me/+SSjZxIm00wY3NTBi.
RU (translated): LoRAs from a very cool CIS LoRA creator: https://vk.com/meikerlora
If you liked them, PLEASE support him financially.




|
bdsaglam/llama-2-7b-chat-hf-kg-cons-multi-peft-1703505740
|
bdsaglam
| 2023-12-25T16:14:48Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-12-25T16:14:36Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
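As a rough sketch (not part of the auto-generated card), the quantization settings listed above correspond to a `transformers.BitsAndBytesConfig` like the following; options left at their defaults are omitted:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above: 4-bit NF4 quantization,
# no double quantization, fp16 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```

This config object would then be passed as `quantization_config` when loading the base model with `from_pretrained`.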
### Framework versions
- PEFT 0.4.0
|
hkivancoral/hushem_40x_deit_small_adamax_00001_fold5
|
hkivancoral
| 2023-12-25T16:00:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T15:43:03Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_adamax_00001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9024390243902439
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_00001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4380
- Accuracy: 0.9024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
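As a rough sketch (not generated by the Trainer), the hyperparameters above map onto a `transformers.TrainingArguments` configuration like the following; `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; the Adam betas/epsilon shown
# in the card are the TrainingArguments defaults.
training_args = TrainingArguments(
    output_dir="hushem_40x_deit_small_adamax_00001_fold5",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```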
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2763 | 1.0 | 220 | 0.4688 | 0.7561 |
| 0.0369 | 2.0 | 440 | 0.2296 | 0.9024 |
| 0.0054 | 3.0 | 660 | 0.3595 | 0.8537 |
| 0.0018 | 4.0 | 880 | 0.2500 | 0.9024 |
| 0.0006 | 5.0 | 1100 | 0.2205 | 0.9268 |
| 0.0003 | 6.0 | 1320 | 0.2552 | 0.9024 |
| 0.0003 | 7.0 | 1540 | 0.2520 | 0.9024 |
| 0.0002 | 8.0 | 1760 | 0.2923 | 0.8780 |
| 0.0001 | 9.0 | 1980 | 0.2754 | 0.8780 |
| 0.0001 | 10.0 | 2200 | 0.2945 | 0.8780 |
| 0.0001 | 11.0 | 2420 | 0.2841 | 0.9024 |
| 0.0001 | 12.0 | 2640 | 0.3077 | 0.9024 |
| 0.0001 | 13.0 | 2860 | 0.3076 | 0.8780 |
| 0.0 | 14.0 | 3080 | 0.3160 | 0.8780 |
| 0.0 | 15.0 | 3300 | 0.2918 | 0.9024 |
| 0.0 | 16.0 | 3520 | 0.3305 | 0.8780 |
| 0.0 | 17.0 | 3740 | 0.3206 | 0.8780 |
| 0.0 | 18.0 | 3960 | 0.3174 | 0.8780 |
| 0.0 | 19.0 | 4180 | 0.3189 | 0.8780 |
| 0.0 | 20.0 | 4400 | 0.3130 | 0.9024 |
| 0.0 | 21.0 | 4620 | 0.3383 | 0.8780 |
| 0.0 | 22.0 | 4840 | 0.3473 | 0.8780 |
| 0.0 | 23.0 | 5060 | 0.3548 | 0.8780 |
| 0.0 | 24.0 | 5280 | 0.3221 | 0.8780 |
| 0.0 | 25.0 | 5500 | 0.3554 | 0.8780 |
| 0.0 | 26.0 | 5720 | 0.3715 | 0.8780 |
| 0.0 | 27.0 | 5940 | 0.3690 | 0.8780 |
| 0.0 | 28.0 | 6160 | 0.3648 | 0.8780 |
| 0.0 | 29.0 | 6380 | 0.3806 | 0.8780 |
| 0.0 | 30.0 | 6600 | 0.3725 | 0.8780 |
| 0.0 | 31.0 | 6820 | 0.4022 | 0.8780 |
| 0.0 | 32.0 | 7040 | 0.3871 | 0.8780 |
| 0.0 | 33.0 | 7260 | 0.4133 | 0.8780 |
| 0.0 | 34.0 | 7480 | 0.4117 | 0.8780 |
| 0.0 | 35.0 | 7700 | 0.3832 | 0.8780 |
| 0.0 | 36.0 | 7920 | 0.3977 | 0.8780 |
| 0.0 | 37.0 | 8140 | 0.3959 | 0.8780 |
| 0.0 | 38.0 | 8360 | 0.4687 | 0.8780 |
| 0.0 | 39.0 | 8580 | 0.4404 | 0.8780 |
| 0.0 | 40.0 | 8800 | 0.3819 | 0.9024 |
| 0.0 | 41.0 | 9020 | 0.4514 | 0.8780 |
| 0.0 | 42.0 | 9240 | 0.4623 | 0.8780 |
| 0.0 | 43.0 | 9460 | 0.4136 | 0.9024 |
| 0.0 | 44.0 | 9680 | 0.4401 | 0.9024 |
| 0.0 | 45.0 | 9900 | 0.4714 | 0.9024 |
| 0.0 | 46.0 | 10120 | 0.4588 | 0.9024 |
| 0.0 | 47.0 | 10340 | 0.4584 | 0.9024 |
| 0.0 | 48.0 | 10560 | 0.4588 | 0.9024 |
| 0.0 | 49.0 | 10780 | 0.4430 | 0.9024 |
| 0.0 | 50.0 | 11000 | 0.4380 | 0.9024 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/hushem_40x_deit_small_adamax_0001_fold5
|
hkivancoral
| 2023-12-25T16:00:36Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T15:42:54Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_adamax_0001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926829268292683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_0001_fold5
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4597
- Accuracy: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0516 | 1.0 | 220 | 0.1392 | 0.9512 |
| 0.0011 | 2.0 | 440 | 0.2522 | 0.9268 |
| 0.0002 | 3.0 | 660 | 0.1314 | 0.9268 |
| 0.0006 | 4.0 | 880 | 0.4185 | 0.9024 |
| 0.0 | 5.0 | 1100 | 0.4569 | 0.9024 |
| 0.0 | 6.0 | 1320 | 0.4037 | 0.9024 |
| 0.0 | 7.0 | 1540 | 0.3953 | 0.9024 |
| 0.0 | 8.0 | 1760 | 0.3986 | 0.9024 |
| 0.0 | 9.0 | 1980 | 0.4084 | 0.9024 |
| 0.0 | 10.0 | 2200 | 0.4117 | 0.9024 |
| 0.0 | 11.0 | 2420 | 0.4151 | 0.9024 |
| 0.0 | 12.0 | 2640 | 0.4165 | 0.9024 |
| 0.0 | 13.0 | 2860 | 0.4199 | 0.9024 |
| 0.0 | 14.0 | 3080 | 0.4254 | 0.9024 |
| 0.0 | 15.0 | 3300 | 0.4180 | 0.9024 |
| 0.0 | 16.0 | 3520 | 0.4254 | 0.9024 |
| 0.0 | 17.0 | 3740 | 0.4273 | 0.9024 |
| 0.0 | 18.0 | 3960 | 0.4239 | 0.9024 |
| 0.0 | 19.0 | 4180 | 0.4240 | 0.9024 |
| 0.0 | 20.0 | 4400 | 0.4255 | 0.9024 |
| 0.0 | 21.0 | 4620 | 0.4197 | 0.9024 |
| 0.0 | 22.0 | 4840 | 0.4256 | 0.9024 |
| 0.0 | 23.0 | 5060 | 0.4276 | 0.9024 |
| 0.0 | 24.0 | 5280 | 0.4178 | 0.9024 |
| 0.0 | 25.0 | 5500 | 0.4247 | 0.9024 |
| 0.0 | 26.0 | 5720 | 0.4224 | 0.9024 |
| 0.0 | 27.0 | 5940 | 0.4294 | 0.9024 |
| 0.0 | 28.0 | 6160 | 0.4224 | 0.9268 |
| 0.0 | 29.0 | 6380 | 0.4213 | 0.9268 |
| 0.0 | 30.0 | 6600 | 0.4256 | 0.9268 |
| 0.0 | 31.0 | 6820 | 0.4281 | 0.9268 |
| 0.0 | 32.0 | 7040 | 0.4157 | 0.9268 |
| 0.0 | 33.0 | 7260 | 0.4223 | 0.9268 |
| 0.0 | 34.0 | 7480 | 0.4175 | 0.9268 |
| 0.0 | 35.0 | 7700 | 0.4230 | 0.9268 |
| 0.0 | 36.0 | 7920 | 0.4204 | 0.9268 |
| 0.0 | 37.0 | 8140 | 0.4311 | 0.9268 |
| 0.0 | 38.0 | 8360 | 0.4343 | 0.9268 |
| 0.0 | 39.0 | 8580 | 0.4379 | 0.9268 |
| 0.0 | 40.0 | 8800 | 0.4426 | 0.9268 |
| 0.0 | 41.0 | 9020 | 0.4413 | 0.9268 |
| 0.0 | 42.0 | 9240 | 0.4428 | 0.9268 |
| 0.0 | 43.0 | 9460 | 0.4470 | 0.9268 |
| 0.0 | 44.0 | 9680 | 0.4517 | 0.9268 |
| 0.0 | 45.0 | 9900 | 0.4526 | 0.9268 |
| 0.0 | 46.0 | 10120 | 0.4472 | 0.9268 |
| 0.0 | 47.0 | 10340 | 0.4509 | 0.9268 |
| 0.0 | 48.0 | 10560 | 0.4588 | 0.9268 |
| 0.0 | 49.0 | 10780 | 0.4589 | 0.9268 |
| 0.0 | 50.0 | 11000 | 0.4597 | 0.9268 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
malduwais/bert-base-multilingual-cased-finetuned-ner
|
malduwais
| 2023-12-25T15:54:43Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-25T14:09:28Z |
---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-multilingual-cased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-finetuned-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1601
- Precision: 0.8875
- Recall: 0.9009
- F1: 0.8942
- Accuracy: 0.9720
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1977 | 1.0 | 878 | 0.0664 | 0.9216 | 0.9346 | 0.9280 | 0.9828 |
| 0.0473 | 2.0 | 1756 | 0.0579 | 0.9491 | 0.9473 | 0.9482 | 0.9871 |
| 0.0278 | 3.0 | 2634 | 0.0549 | 0.9544 | 0.9546 | 0.9545 | 0.9885 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
tastypear/RWKV-v5.2-7B-Role-play-16k-safetensors
|
tastypear
| 2023-12-25T15:50:04Z | 0 | 2 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-12-24T06:13:59Z |
---
license: apache-2.0
---
Original model: [xiaol/RWKV-v5.2-7B-Role-play-16k](https://huggingface.co/xiaol/RWKV-v5.2-7B-Role-play-16k)
You can run this model with [ai00_rwkv_server](https://github.com/cgisky1980/ai00_rwkv_server).
Although ai00_rwkv_server mainly targets low-end PCs, you can also run it on servers that support Vulkan.
To try it in Colab, you should install [libnvidia-gl-*](https://packages.ubuntu.com/search?keywords=libnvidia-gl&searchon=names&suite=jammy&section=all):
```python
!apt -y install libnvidia-gl-535
```
---
# Original Model Card:
### RWKV (Claude-style) role play, improved with v5.2: more logical and reasonable, and able to follow instructions
<!-- <video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/6176b32847ee6431f632981e/FwcnQrKOhfiid8jq8GsCJ.mp4"></video>-->
|
tastypear/RWKV-v5-12B-one-state-chat-16k-safetensors
|
tastypear
| 2023-12-25T15:43:15Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-12-24T05:17:40Z |
---
license: apache-2.0
---
Original model: [xiaol/RWKV-v5-12B-one-state-chat-16k](https://huggingface.co/xiaol/RWKV-v5-12B-one-state-chat-16k)
You can run this model with [ai00_rwkv_server](https://github.com/cgisky1980/ai00_rwkv_server).
Although ai00_rwkv_server mainly targets low-end PCs, you can also run it on servers that support Vulkan.
To try it in Colab, you should install [libnvidia-gl-*](https://packages.ubuntu.com/search?keywords=libnvidia-gl&searchon=names&suite=jammy&section=all):
```python
!apt -y install libnvidia-gl-535
```
----
# Original model card:
# Release date: December 18th
Fine-tuned from the state-of-the-art (SOTA) RWKV v5 12B one-state base model! More details will be provided soon. Enjoy the incredible performance of this model, which is optimized for systems with 24 GB of VRAM and supports fp16. It can be fine-tuned using a single A100 GPU. To execute this model, utilize the [RWKV Runner](https://github.com/josStorer/RWKV-Runner) tool.
# Finetuned from [Mobius 12B base](https://huggingface.co/xiaol/Mobius-12B-base)
# Usage
- [RWKV next web](https://rwkv-next-web.ai-creator.net/)
- If used with [RWKV Runner](https://github.com/josStorer/RWKV-Runner) or [ai00 server](https://github.com/cgisky1980/ai00_rwkv_server), replace the default vocab (tokenizer) with [this one](https://huggingface.co/xiaol/RWKV-v5-12B-one-state-chat-16k/blob/main/rwkv_vocab_v20230424.txt)
# Important Notes
Because training overfits certain instructions and weakens others, it is necessary to use completion-style prompts or simulated dialogues.
- **completion prompt** = 'User: make this content longer:\nxxxxxx\n\nAssistant: ok, longer content is'
# Data format
`<s>User:xxxx\n\n</s>Assistant:xxx\n\n</s>User:xxxx\n\n</s>Assistant:xxx\n\n</s>`
For optimal performance when running this model, use this format and these [vocabs](https://huggingface.co/xiaol/RWKV-v5-12B-one-state-chat-16k/blob/main/rwkv_vocab_v20230424_train.txt).
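As a sketch, the data format above can be produced with a small helper; the roles and delimiters follow the format string in this card, while the function name is illustrative:

```python
def format_dialogue(turns):
    """Join (role, text) turns into the model's expected chat format:
    <s>User:...\n\n</s>Assistant:...\n\n</s>..."""
    parts = ["<s>"]
    for role, text in turns:
        # Each turn is terminated by a blank line and the </s> delimiter.
        parts.append(f"{role}:{text}\n\n</s>")
    return "".join(parts)

prompt = format_dialogue([("User", "hello"), ("Assistant", "hi there")])
# prompt == "<s>User:hello\n\n</s>Assistant:hi there\n\n</s>"
```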
|
hkivancoral/hushem_40x_deit_small_adamax_0001_fold4
|
hkivancoral
| 2023-12-25T15:42:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T15:25:20Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_adamax_0001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9761904761904762
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_0001_fold4
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2200
- Accuracy: 0.9762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0294 | 1.0 | 219 | 0.5356 | 0.8571 |
| 0.0128 | 2.0 | 438 | 0.3893 | 0.9286 |
| 0.0002 | 3.0 | 657 | 0.2025 | 0.9762 |
| 0.0012 | 4.0 | 876 | 0.0996 | 0.9524 |
| 0.0042 | 5.0 | 1095 | 0.3099 | 0.8810 |
| 0.0 | 6.0 | 1314 | 0.3304 | 0.9524 |
| 0.0 | 7.0 | 1533 | 0.0291 | 0.9762 |
| 0.0 | 8.0 | 1752 | 0.3258 | 0.9524 |
| 0.0 | 9.0 | 1971 | 0.2200 | 0.9762 |
| 0.0 | 10.0 | 2190 | 0.2242 | 0.9762 |
| 0.0 | 11.0 | 2409 | 0.2270 | 0.9762 |
| 0.0 | 12.0 | 2628 | 0.2293 | 0.9762 |
| 0.0 | 13.0 | 2847 | 0.2306 | 0.9762 |
| 0.0 | 14.0 | 3066 | 0.2313 | 0.9762 |
| 0.0 | 15.0 | 3285 | 0.2317 | 0.9762 |
| 0.0 | 16.0 | 3504 | 0.2327 | 0.9762 |
| 0.0 | 17.0 | 3723 | 0.2330 | 0.9762 |
| 0.0 | 18.0 | 3942 | 0.2343 | 0.9762 |
| 0.0 | 19.0 | 4161 | 0.2344 | 0.9762 |
| 0.0 | 20.0 | 4380 | 0.2350 | 0.9762 |
| 0.0 | 21.0 | 4599 | 0.2360 | 0.9762 |
| 0.0 | 22.0 | 4818 | 0.2352 | 0.9762 |
| 0.0 | 23.0 | 5037 | 0.2356 | 0.9762 |
| 0.0 | 24.0 | 5256 | 0.2355 | 0.9762 |
| 0.0 | 25.0 | 5475 | 0.2362 | 0.9762 |
| 0.0 | 26.0 | 5694 | 0.2364 | 0.9762 |
| 0.0 | 27.0 | 5913 | 0.2365 | 0.9762 |
| 0.0 | 28.0 | 6132 | 0.2373 | 0.9762 |
| 0.0 | 29.0 | 6351 | 0.2369 | 0.9762 |
| 0.0 | 30.0 | 6570 | 0.2371 | 0.9762 |
| 0.0 | 31.0 | 6789 | 0.2360 | 0.9762 |
| 0.0 | 32.0 | 7008 | 0.2375 | 0.9762 |
| 0.0 | 33.0 | 7227 | 0.2373 | 0.9762 |
| 0.0 | 34.0 | 7446 | 0.2372 | 0.9762 |
| 0.0 | 35.0 | 7665 | 0.2377 | 0.9762 |
| 0.0 | 36.0 | 7884 | 0.2367 | 0.9762 |
| 0.0 | 37.0 | 8103 | 0.2369 | 0.9762 |
| 0.0 | 38.0 | 8322 | 0.2356 | 0.9762 |
| 0.0 | 39.0 | 8541 | 0.2350 | 0.9762 |
| 0.0 | 40.0 | 8760 | 0.2356 | 0.9762 |
| 0.0 | 41.0 | 8979 | 0.2346 | 0.9762 |
| 0.0 | 42.0 | 9198 | 0.2341 | 0.9762 |
| 0.0 | 43.0 | 9417 | 0.2328 | 0.9762 |
| 0.0 | 44.0 | 9636 | 0.2314 | 0.9762 |
| 0.0 | 45.0 | 9855 | 0.2283 | 0.9762 |
| 0.0 | 46.0 | 10074 | 0.2261 | 0.9762 |
| 0.0 | 47.0 | 10293 | 0.2239 | 0.9762 |
| 0.0 | 48.0 | 10512 | 0.2219 | 0.9762 |
| 0.0 | 49.0 | 10731 | 0.2199 | 0.9762 |
| 0.0 | 50.0 | 10950 | 0.2200 | 0.9762 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
prahalath27/Reinforce-pixelcopter
|
prahalath27
| 2023-12-25T15:37:36Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-25T14:51:18Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 74.70 +/- 39.41
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Pruthvirajsp/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters
|
Pruthvirajsp
| 2023-12-25T15:34:52Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-25T14:52:28Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
soonchang/ML-Agents-Pyramids
|
soonchang
| 2023-12-25T15:31:14Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-12-25T15:30:27Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents' official environments, go to https://huggingface.co/unity
2. Find your model_id: soonchang/ML-Agents-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
hkivancoral/hushem_40x_deit_small_adamax_00001_fold3
|
hkivancoral
| 2023-12-25T15:25:24Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T15:07:58Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_adamax_00001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9302325581395349
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_00001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4888
- Accuracy: 0.9302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3152 | 1.0 | 217 | 0.4303 | 0.8140 |
| 0.0395 | 2.0 | 434 | 0.3110 | 0.8605 |
| 0.0052 | 3.0 | 651 | 0.1960 | 0.8837 |
| 0.0014 | 4.0 | 868 | 0.1973 | 0.9070 |
| 0.0007 | 5.0 | 1085 | 0.1799 | 0.9070 |
| 0.0004 | 6.0 | 1302 | 0.1913 | 0.9070 |
| 0.0003 | 7.0 | 1519 | 0.2030 | 0.9070 |
| 0.0002 | 8.0 | 1736 | 0.1949 | 0.9302 |
| 0.0002 | 9.0 | 1953 | 0.2095 | 0.9302 |
| 0.0001 | 10.0 | 2170 | 0.2248 | 0.9302 |
| 0.0001 | 11.0 | 2387 | 0.1957 | 0.9070 |
| 0.0001 | 12.0 | 2604 | 0.2287 | 0.9302 |
| 0.0001 | 13.0 | 2821 | 0.2292 | 0.9302 |
| 0.0 | 14.0 | 3038 | 0.2168 | 0.9302 |
| 0.0 | 15.0 | 3255 | 0.2321 | 0.9302 |
| 0.0 | 16.0 | 3472 | 0.2331 | 0.9302 |
| 0.0 | 17.0 | 3689 | 0.2639 | 0.9302 |
| 0.0 | 18.0 | 3906 | 0.2552 | 0.9302 |
| 0.0 | 19.0 | 4123 | 0.2773 | 0.9302 |
| 0.0 | 20.0 | 4340 | 0.2788 | 0.9302 |
| 0.0 | 21.0 | 4557 | 0.3072 | 0.9302 |
| 0.0 | 22.0 | 4774 | 0.2995 | 0.9302 |
| 0.0 | 23.0 | 4991 | 0.3235 | 0.9302 |
| 0.0 | 24.0 | 5208 | 0.3152 | 0.9302 |
| 0.0 | 25.0 | 5425 | 0.3196 | 0.9302 |
| 0.0 | 26.0 | 5642 | 0.3244 | 0.9302 |
| 0.0 | 27.0 | 5859 | 0.3243 | 0.9302 |
| 0.0 | 28.0 | 6076 | 0.3343 | 0.9302 |
| 0.0 | 29.0 | 6293 | 0.3666 | 0.9302 |
| 0.0 | 30.0 | 6510 | 0.3811 | 0.9302 |
| 0.0 | 31.0 | 6727 | 0.3978 | 0.9302 |
| 0.0 | 32.0 | 6944 | 0.3769 | 0.9302 |
| 0.0 | 33.0 | 7161 | 0.4052 | 0.9302 |
| 0.0 | 34.0 | 7378 | 0.4150 | 0.9302 |
| 0.0 | 35.0 | 7595 | 0.4227 | 0.9302 |
| 0.0 | 36.0 | 7812 | 0.4100 | 0.9302 |
| 0.0 | 37.0 | 8029 | 0.3974 | 0.9302 |
| 0.0 | 38.0 | 8246 | 0.4427 | 0.9302 |
| 0.0 | 39.0 | 8463 | 0.4150 | 0.9302 |
| 0.0 | 40.0 | 8680 | 0.4448 | 0.9302 |
| 0.0 | 41.0 | 8897 | 0.4616 | 0.9302 |
| 0.0 | 42.0 | 9114 | 0.4839 | 0.9302 |
| 0.0 | 43.0 | 9331 | 0.4831 | 0.9302 |
| 0.0 | 44.0 | 9548 | 0.4641 | 0.9302 |
| 0.0 | 45.0 | 9765 | 0.4680 | 0.9302 |
| 0.0 | 46.0 | 9982 | 0.4903 | 0.9302 |
| 0.0 | 47.0 | 10199 | 0.4721 | 0.9302 |
| 0.0 | 48.0 | 10416 | 0.4832 | 0.9302 |
| 0.0 | 49.0 | 10633 | 0.4900 | 0.9302 |
| 0.0 | 50.0 | 10850 | 0.4888 | 0.9302 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/hushem_40x_deit_small_adamax_0001_fold3
|
hkivancoral
| 2023-12-25T15:25:14Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T15:07:52Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_adamax_0001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9302325581395349
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_0001_fold3
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7579
- Accuracy: 0.9302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
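Since the card lists a warmup *ratio* rather than a step count, the actual schedule lengths have to be derived from the training log; a quick arithmetic sketch using the step counts from the results table below (217 optimizer steps per epoch):

```python
# Deriving the linear-schedule lengths implied by the hyperparameters above.
# 217 optimizer steps per epoch is read off the "Step" column of the results table.
steps_per_epoch = 217
num_epochs = 50
warmup_ratio = 0.1

total_steps = steps_per_epoch * num_epochs      # 10850, matching the table's final row
warmup_steps = int(total_steps * warmup_ratio)  # 1085 linear-warmup steps
```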
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0315 | 1.0 | 217 | 0.2572 | 0.9070 |
| 0.0049 | 2.0 | 434 | 0.4551 | 0.8837 |
| 0.0004 | 3.0 | 651 | 0.3965 | 0.8837 |
| 0.0001 | 4.0 | 868 | 0.4995 | 0.9070 |
| 0.0 | 5.0 | 1085 | 0.3370 | 0.9535 |
| 0.0 | 6.0 | 1302 | 0.4294 | 0.9302 |
| 0.0 | 7.0 | 1519 | 0.4525 | 0.9302 |
| 0.0 | 8.0 | 1736 | 0.4672 | 0.9302 |
| 0.0 | 9.0 | 1953 | 0.4797 | 0.9302 |
| 0.0 | 10.0 | 2170 | 0.4904 | 0.9302 |
| 0.0 | 11.0 | 2387 | 0.4947 | 0.9302 |
| 0.0 | 12.0 | 2604 | 0.5020 | 0.9302 |
| 0.0 | 13.0 | 2821 | 0.5084 | 0.9302 |
| 0.0 | 14.0 | 3038 | 0.5153 | 0.9302 |
| 0.0 | 15.0 | 3255 | 0.5246 | 0.9302 |
| 0.0 | 16.0 | 3472 | 0.5296 | 0.9302 |
| 0.0 | 17.0 | 3689 | 0.5346 | 0.9302 |
| 0.0 | 18.0 | 3906 | 0.5408 | 0.9302 |
| 0.0 | 19.0 | 4123 | 0.5469 | 0.9302 |
| 0.0 | 20.0 | 4340 | 0.5538 | 0.9302 |
| 0.0 | 21.0 | 4557 | 0.5570 | 0.9302 |
| 0.0 | 22.0 | 4774 | 0.5610 | 0.9302 |
| 0.0 | 23.0 | 4991 | 0.5712 | 0.9302 |
| 0.0 | 24.0 | 5208 | 0.5753 | 0.9302 |
| 0.0 | 25.0 | 5425 | 0.5846 | 0.9302 |
| 0.0 | 26.0 | 5642 | 0.5887 | 0.9302 |
| 0.0 | 27.0 | 5859 | 0.5949 | 0.9302 |
| 0.0 | 28.0 | 6076 | 0.6007 | 0.9302 |
| 0.0 | 29.0 | 6293 | 0.6068 | 0.9302 |
| 0.0 | 30.0 | 6510 | 0.6184 | 0.9302 |
| 0.0 | 31.0 | 6727 | 0.6280 | 0.9302 |
| 0.0 | 32.0 | 6944 | 0.6394 | 0.9302 |
| 0.0 | 33.0 | 7161 | 0.6407 | 0.9302 |
| 0.0 | 34.0 | 7378 | 0.6480 | 0.9302 |
| 0.0 | 35.0 | 7595 | 0.6588 | 0.9302 |
| 0.0 | 36.0 | 7812 | 0.6700 | 0.9302 |
| 0.0 | 37.0 | 8029 | 0.6709 | 0.9302 |
| 0.0 | 38.0 | 8246 | 0.6850 | 0.9302 |
| 0.0 | 39.0 | 8463 | 0.6933 | 0.9302 |
| 0.0 | 40.0 | 8680 | 0.7079 | 0.9302 |
| 0.0 | 41.0 | 8897 | 0.7123 | 0.9302 |
| 0.0 | 42.0 | 9114 | 0.7231 | 0.9302 |
| 0.0 | 43.0 | 9331 | 0.7313 | 0.9302 |
| 0.0 | 44.0 | 9548 | 0.7417 | 0.9302 |
| 0.0 | 45.0 | 9765 | 0.7473 | 0.9302 |
| 0.0 | 46.0 | 9982 | 0.7513 | 0.9302 |
| 0.0 | 47.0 | 10199 | 0.7551 | 0.9302 |
| 0.0 | 48.0 | 10416 | 0.7564 | 0.9302 |
| 0.0 | 49.0 | 10633 | 0.7578 | 0.9302 |
| 0.0 | 50.0 | 10850 | 0.7579 | 0.9302 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
moose1108/test_adlfinal
|
moose1108
| 2023-12-25T15:19:51Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:yentinglin/Taiwan-LLM-7B-v2.0-chat",
"base_model:adapter:yentinglin/Taiwan-LLM-7B-v2.0-chat",
"region:us"
] | null | 2023-12-25T15:17:12Z |
---
library_name: peft
base_model: yentinglin/Taiwan-LLM-7B-v2.0-chat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.0
|
ZhaoYoujia/autotrain-vit-base-v5
|
ZhaoYoujia
| 2023-12-25T15:19:09Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:autotrain-vit-base-v5/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T15:18:53Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- autotrain-vit-base-v5/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: nan
- f1_macro: 0.16666666666666666
- f1_micro: 0.3333333333333333
- f1_weighted: 0.16666666666666666
- precision_macro: 0.1111111111111111
- precision_micro: 0.3333333333333333
- precision_weighted: 0.1111111111111111
- recall_macro: 0.3333333333333333
- recall_micro: 0.3333333333333333
- recall_weighted: 0.3333333333333333
- accuracy: 0.3333333333333333
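These numbers (together with the `nan` loss) are exactly what a classifier produces when it collapses to a single prediction on a balanced 3-class validation set; a small sketch with hypothetical labels that reproduces the macro/micro split:

```python
# Hypothetical labels/predictions chosen to reproduce the reported metrics:
# a balanced 3-class set where the model always predicts class 0.
y_true = [0, 1, 2]
y_pred = [0, 0, 0]

classes = sorted(set(y_true))
f1s = []
for c in classes:
    tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
    fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
    fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)

f1_macro = sum(f1s) / len(classes)  # ~0.1667, matching the card
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)  # micro-F1 == accuracy here
```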
|
hkivancoral/hushem_40x_deit_tiny_rms_00001_fold4
|
hkivancoral
| 2023-12-25T15:15:43Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T14:44:44Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_rms_00001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9285714285714286
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_00001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4229
- Accuracy: 0.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1412 | 1.0 | 219 | 0.1610 | 0.9286 |
| 0.013 | 2.0 | 438 | 0.1553 | 0.9524 |
| 0.0005 | 3.0 | 657 | 0.1135 | 0.9762 |
| 0.0002 | 4.0 | 876 | 0.2956 | 0.9286 |
| 0.0001 | 5.0 | 1095 | 0.1278 | 0.9762 |
| 0.0 | 6.0 | 1314 | 0.2416 | 0.9286 |
| 0.0031 | 7.0 | 1533 | 0.2692 | 0.9286 |
| 0.0 | 8.0 | 1752 | 0.1088 | 0.9524 |
| 0.0 | 9.0 | 1971 | 0.1134 | 0.9524 |
| 0.0 | 10.0 | 2190 | 0.1607 | 0.9524 |
| 0.0 | 11.0 | 2409 | 0.2098 | 0.9524 |
| 0.0 | 12.0 | 2628 | 0.2244 | 0.9524 |
| 0.0 | 13.0 | 2847 | 0.2259 | 0.9524 |
| 0.0 | 14.0 | 3066 | 0.2811 | 0.9524 |
| 0.0 | 15.0 | 3285 | 0.3300 | 0.9524 |
| 0.0 | 16.0 | 3504 | 0.3199 | 0.9524 |
| 0.0 | 17.0 | 3723 | 0.3615 | 0.9524 |
| 0.0 | 18.0 | 3942 | 0.4872 | 0.9524 |
| 0.0 | 19.0 | 4161 | 0.4327 | 0.9524 |
| 0.0 | 20.0 | 4380 | 0.4099 | 0.9524 |
| 0.0 | 21.0 | 4599 | 0.4211 | 0.9524 |
| 0.0 | 22.0 | 4818 | 0.3019 | 0.9524 |
| 0.0 | 23.0 | 5037 | 0.3473 | 0.9524 |
| 0.0 | 24.0 | 5256 | 0.3822 | 0.9524 |
| 0.0 | 25.0 | 5475 | 0.4512 | 0.9524 |
| 0.0 | 26.0 | 5694 | 0.3963 | 0.9524 |
| 0.0 | 27.0 | 5913 | 0.5056 | 0.9524 |
| 0.0 | 28.0 | 6132 | 0.4587 | 0.9524 |
| 0.0 | 29.0 | 6351 | 0.4379 | 0.9524 |
| 0.0 | 30.0 | 6570 | 0.4500 | 0.9524 |
| 0.0 | 31.0 | 6789 | 0.4166 | 0.9524 |
| 0.0 | 32.0 | 7008 | 0.3798 | 0.9524 |
| 0.0 | 33.0 | 7227 | 0.4566 | 0.9524 |
| 0.0 | 34.0 | 7446 | 0.3959 | 0.9524 |
| 0.0 | 35.0 | 7665 | 0.3429 | 0.9524 |
| 0.0 | 36.0 | 7884 | 0.3690 | 0.9524 |
| 0.0 | 37.0 | 8103 | 0.4056 | 0.9524 |
| 0.0 | 38.0 | 8322 | 0.4315 | 0.9286 |
| 0.0 | 39.0 | 8541 | 0.4336 | 0.9286 |
| 0.0 | 40.0 | 8760 | 0.4561 | 0.9524 |
| 0.0 | 41.0 | 8979 | 0.4723 | 0.9286 |
| 0.0 | 42.0 | 9198 | 0.3818 | 0.9286 |
| 0.0 | 43.0 | 9417 | 0.4220 | 0.9286 |
| 0.0 | 44.0 | 9636 | 0.4298 | 0.9286 |
| 0.0 | 45.0 | 9855 | 0.4315 | 0.9286 |
| 0.0 | 46.0 | 10074 | 0.4212 | 0.9286 |
| 0.0 | 47.0 | 10293 | 0.4170 | 0.9286 |
| 0.0 | 48.0 | 10512 | 0.4294 | 0.9286 |
| 0.0 | 49.0 | 10731 | 0.4253 | 0.9286 |
| 0.0 | 50.0 | 10950 | 0.4229 | 0.9286 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_GPT4_temp0_Seed101
|
behzadnet
| 2023-12-25T15:11:09Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-22T14:56:16Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
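Assuming the standard `transformers` API, the quantization config listed above corresponds to a `BitsAndBytesConfig` roughly like the following sketch (field names map one-to-one onto the `bnb_*` entries; this is illustrative, not the exact training script):

```python
import torch
from transformers import BitsAndBytesConfig

# Sketch reconstructing the quantization config listed above:
# 4-bit NF4 quantization with double quantization and bf16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```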
### Framework versions
- PEFT 0.7.0.dev0
|
yuanhuaisen/autotrain-wo0g3-9eb7w
|
yuanhuaisen
| 2023-12-25T15:08:19Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:yuanhuaisen/autotrain-data-autotrain-wo0g3-9eb7w",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T15:07:46Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- yuanhuaisen/autotrain-data-autotrain-wo0g3-9eb7w
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 1.087890625
- f1_macro: 0.26666666666666666
- f1_micro: 0.5
- f1_weighted: 0.4
- precision_macro: 0.2222222222222222
- precision_micro: 0.5
- precision_weighted: 0.3333333333333333
- recall_macro: 0.3333333333333333
- recall_micro: 0.5
- recall_weighted: 0.5
- accuracy: 0.5
|
AndyH96/q-Taxi-v3
|
AndyH96
| 2023-12-25T15:06:09Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-25T15:06:07Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # the Deep RL Course notebooks import gymnasium as `gym`

# `load_from_hub` is the helper defined in the Hugging Face Deep RL Course notebook
model = load_from_hub(repo_id="AndyH96/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
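Once loaded, evaluating a tabular Q-learning agent is just a greedy argmax over the Q-values of the current state; a minimal sketch with a hypothetical toy Q-table standing in for the one stored in `q-learning.pkl`:

```python
# Hypothetical toy Q-table: state -> Q-value per action (the real one is loaded above).
qtable = {
    0: [0.1, 0.5, 0.2],
    1: [0.0, 0.3, 0.9],
}

def greedy_action(state):
    """Pick the action with the highest Q-value for this state."""
    values = qtable[state]
    return max(range(len(values)), key=values.__getitem__)
```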
|
iloncka/spnasnet_100.rmsp_in1k_ep_20
|
iloncka
| 2023-12-25T15:02:09Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-12-25T14:59:12Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
kuorell/q-FrozenLake-v1-4x4-noSlippery
|
kuorell
| 2023-12-25T14:59:47Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-25T14:59:44Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # the Deep RL Course notebooks import gymnasium as `gym`

# `load_from_hub` is the helper defined in the Hugging Face Deep RL Course notebook
model = load_from_hub(repo_id="kuorell/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hkivancoral/hushem_40x_deit_small_adamax_00001_fold1
|
hkivancoral
| 2023-12-25T14:50:35Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T14:33:24Z |
---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_small_adamax_00001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8444444444444444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_small_adamax_00001_fold1
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3015
- Accuracy: 0.8444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.238 | 1.0 | 215 | 0.7567 | 0.6667 |
| 0.0267 | 2.0 | 430 | 0.5445 | 0.7778 |
| 0.0051 | 3.0 | 645 | 0.6144 | 0.8 |
| 0.0012 | 4.0 | 860 | 0.6615 | 0.8 |
| 0.0005 | 5.0 | 1075 | 0.6553 | 0.8 |
| 0.0003 | 6.0 | 1290 | 0.6621 | 0.8222 |
| 0.0003 | 7.0 | 1505 | 0.6958 | 0.8222 |
| 0.0002 | 8.0 | 1720 | 0.7076 | 0.8222 |
| 0.0001 | 9.0 | 1935 | 0.7375 | 0.8222 |
| 0.0001 | 10.0 | 2150 | 0.7327 | 0.8222 |
| 0.0001 | 11.0 | 2365 | 0.7423 | 0.8222 |
| 0.0001 | 12.0 | 2580 | 0.7689 | 0.8 |
| 0.0001 | 13.0 | 2795 | 0.7876 | 0.8 |
| 0.0 | 14.0 | 3010 | 0.7990 | 0.8 |
| 0.0 | 15.0 | 3225 | 0.8203 | 0.8 |
| 0.0 | 16.0 | 3440 | 0.8447 | 0.8 |
| 0.0 | 17.0 | 3655 | 0.8558 | 0.8 |
| 0.0 | 18.0 | 3870 | 0.8774 | 0.8 |
| 0.0 | 19.0 | 4085 | 0.8896 | 0.8 |
| 0.0 | 20.0 | 4300 | 0.8965 | 0.8 |
| 0.0 | 21.0 | 4515 | 0.9254 | 0.8 |
| 0.0 | 22.0 | 4730 | 0.9318 | 0.8 |
| 0.0 | 23.0 | 4945 | 0.9571 | 0.8 |
| 0.0 | 24.0 | 5160 | 0.9711 | 0.8222 |
| 0.0 | 25.0 | 5375 | 0.9833 | 0.8222 |
| 0.0 | 26.0 | 5590 | 0.9915 | 0.8222 |
| 0.0 | 27.0 | 5805 | 1.0134 | 0.8222 |
| 0.0 | 28.0 | 6020 | 1.0327 | 0.8222 |
| 0.0 | 29.0 | 6235 | 1.0249 | 0.8222 |
| 0.0 | 30.0 | 6450 | 1.0679 | 0.8222 |
| 0.0 | 31.0 | 6665 | 1.0896 | 0.8222 |
| 0.0 | 32.0 | 6880 | 1.0990 | 0.8222 |
| 0.0 | 33.0 | 7095 | 1.1103 | 0.8222 |
| 0.0 | 34.0 | 7310 | 1.1167 | 0.8222 |
| 0.0 | 35.0 | 7525 | 1.1494 | 0.8222 |
| 0.0 | 36.0 | 7740 | 1.1474 | 0.8444 |
| 0.0 | 37.0 | 7955 | 1.1611 | 0.8444 |
| 0.0 | 38.0 | 8170 | 1.2104 | 0.8222 |
| 0.0 | 39.0 | 8385 | 1.1969 | 0.8444 |
| 0.0 | 40.0 | 8600 | 1.2127 | 0.8222 |
| 0.0 | 41.0 | 8815 | 1.2186 | 0.8444 |
| 0.0 | 42.0 | 9030 | 1.2356 | 0.8444 |
| 0.0 | 43.0 | 9245 | 1.2578 | 0.8444 |
| 0.0 | 44.0 | 9460 | 1.2543 | 0.8444 |
| 0.0 | 45.0 | 9675 | 1.2707 | 0.8222 |
| 0.0 | 46.0 | 9890 | 1.2807 | 0.8444 |
| 0.0 | 47.0 | 10105 | 1.2891 | 0.8444 |
| 0.0 | 48.0 | 10320 | 1.3057 | 0.8222 |
| 0.0 | 49.0 | 10535 | 1.3045 | 0.8444 |
| 0.0 | 50.0 | 10750 | 1.3015 | 0.8444 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
soonchang/ppo-SnowballTarget
|
soonchang
| 2023-12-25T14:47:56Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-12-25T14:47:50Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: soonchang/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
hkivancoral/hushem_40x_deit_tiny_rms_00001_fold3
|
hkivancoral
| 2023-12-25T14:44:35Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T14:13:48Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_rms_00001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9302325581395349
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_00001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4488
- Accuracy: 0.9302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0843 | 1.0 | 217 | 0.4280 | 0.8837 |
| 0.0143 | 2.0 | 434 | 0.2889 | 0.9302 |
| 0.0172 | 3.0 | 651 | 0.5423 | 0.9070 |
| 0.0189 | 4.0 | 868 | 1.1419 | 0.7907 |
| 0.0003 | 5.0 | 1085 | 0.4120 | 0.9302 |
| 0.0 | 6.0 | 1302 | 0.4870 | 0.9302 |
| 0.0 | 7.0 | 1519 | 0.5568 | 0.9070 |
| 0.0 | 8.0 | 1736 | 0.5757 | 0.8837 |
| 0.0 | 9.0 | 1953 | 0.6076 | 0.8837 |
| 0.0 | 10.0 | 2170 | 0.6516 | 0.8837 |
| 0.0 | 11.0 | 2387 | 0.6056 | 0.8837 |
| 0.0 | 12.0 | 2604 | 0.6691 | 0.8837 |
| 0.0 | 13.0 | 2821 | 0.6559 | 0.8837 |
| 0.0 | 14.0 | 3038 | 0.7098 | 0.9070 |
| 0.0 | 15.0 | 3255 | 0.6515 | 0.9070 |
| 0.0157 | 16.0 | 3472 | 0.6215 | 0.8837 |
| 0.0 | 17.0 | 3689 | 0.6307 | 0.8837 |
| 0.0 | 18.0 | 3906 | 0.7467 | 0.8837 |
| 0.0 | 19.0 | 4123 | 0.7677 | 0.8837 |
| 0.0 | 20.0 | 4340 | 0.7998 | 0.8605 |
| 0.0 | 21.0 | 4557 | 0.8197 | 0.8605 |
| 0.0 | 22.0 | 4774 | 0.8507 | 0.8605 |
| 0.0 | 23.0 | 4991 | 0.8634 | 0.8605 |
| 0.0 | 24.0 | 5208 | 0.8853 | 0.8605 |
| 0.0 | 25.0 | 5425 | 0.7783 | 0.9070 |
| 0.0 | 26.0 | 5642 | 0.7092 | 0.9302 |
| 0.0 | 27.0 | 5859 | 0.6309 | 0.9302 |
| 0.0 | 28.0 | 6076 | 0.6509 | 0.9302 |
| 0.0 | 29.0 | 6293 | 0.5569 | 0.9070 |
| 0.0 | 30.0 | 6510 | 0.5554 | 0.9302 |
| 0.0 | 31.0 | 6727 | 0.5595 | 0.9070 |
| 0.0 | 32.0 | 6944 | 0.5154 | 0.9302 |
| 0.0 | 33.0 | 7161 | 0.5043 | 0.9070 |
| 0.0 | 34.0 | 7378 | 0.5110 | 0.9535 |
| 0.0 | 35.0 | 7595 | 0.4416 | 0.9302 |
| 0.0 | 36.0 | 7812 | 0.4610 | 0.9535 |
| 0.0 | 37.0 | 8029 | 0.5159 | 0.9302 |
| 0.0 | 38.0 | 8246 | 0.5232 | 0.9302 |
| 0.0 | 39.0 | 8463 | 0.5109 | 0.9302 |
| 0.0 | 40.0 | 8680 | 0.4511 | 0.9535 |
| 0.0 | 41.0 | 8897 | 0.4620 | 0.9302 |
| 0.0 | 42.0 | 9114 | 0.4370 | 0.9302 |
| 0.0 | 43.0 | 9331 | 0.4660 | 0.9302 |
| 0.0 | 44.0 | 9548 | 0.4561 | 0.9302 |
| 0.0 | 45.0 | 9765 | 0.4386 | 0.9302 |
| 0.0 | 46.0 | 9982 | 0.4625 | 0.9302 |
| 0.0 | 47.0 | 10199 | 0.4505 | 0.9302 |
| 0.0 | 48.0 | 10416 | 0.4377 | 0.9302 |
| 0.0 | 49.0 | 10633 | 0.4484 | 0.9302 |
| 0.0 | 50.0 | 10850 | 0.4488 | 0.9302 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
ZhaoYoujia/autotrain-vit-base
|
ZhaoYoujia
| 2023-12-25T14:42:42Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:autotrain-vit-base/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T14:42:27Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- autotrain-vit-base/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 789.2529296875
f1_macro: 0.26666666666666666
f1_micro: 0.5
f1_weighted: 0.4
precision_macro: 0.2222222222222222
precision_micro: 0.5
precision_weighted: 0.3333333333333333
recall_macro: 0.3333333333333333
recall_micro: 0.5
recall_weighted: 0.5
accuracy: 0.5
|
huggingfaceBing/1lama2-glora-finetunined-french
|
huggingfaceBing
| 2023-12-25T14:31:18Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2023-12-25T14:30:54Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
AndyH96/q-FrozenLake-v1-4x4-noSlippery
|
AndyH96
| 2023-12-25T14:24:43Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-25T14:19:56Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** (4x4, no slippery).
## Usage
```python
model = load_from_hub(repo_id="AndyH96/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
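Once loaded, the greedy policy can be rolled out with a simple argmax over the Q-table. A sketch (assuming the pickle stores the Q-table under a `"qtable"` key, as in the course template; the toy table below stands in for the downloaded one):

```python
import numpy as np

# Hypothetical stand-in for the downloaded model dict: a 2-state, 2-action Q-table.
model = {"qtable": np.array([[0.0, 1.0], [1.0, 0.0]])}

def greedy_action(qtable, state):
    """Pick the action with the highest Q-value for the given state."""
    return int(np.argmax(qtable[state]))

print(greedy_action(model["qtable"], 0))  # action 1 has the higher value in state 0
```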
|
hcy5561/emotion-analysis-with-distilbert
|
hcy5561
| 2023-12-25T14:24:27Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-25T14:19:45Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: hcy5561/emotion-analysis-with-distilbert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hcy5561/emotion-analysis-with-distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1366
- Validation Loss: 0.1485
- Train Accuracy: 0.9335
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.4032 | 0.1602 | 0.933 | 0 |
| 0.1366 | 0.1485 | 0.9335 | 1 |
### Framework versions
- Transformers 4.33.0
- TensorFlow 2.12.0
- Datasets 2.16.0
- Tokenizers 0.13.3
|
hkivancoral/hushem_40x_deit_tiny_rms_00001_fold2
|
hkivancoral
| 2023-12-25T14:13:39Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T13:43:16Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_rms_00001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6888888888888889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_00001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3474
- Accuracy: 0.6889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0591 | 1.0 | 215 | 0.9478 | 0.7111 |
| 0.0016 | 2.0 | 430 | 1.0737 | 0.6889 |
| 0.0003 | 3.0 | 645 | 1.0732 | 0.7111 |
| 0.0001 | 4.0 | 860 | 1.2338 | 0.7111 |
| 0.0 | 5.0 | 1075 | 1.3886 | 0.7111 |
| 0.0 | 6.0 | 1290 | 1.5328 | 0.6889 |
| 0.0 | 7.0 | 1505 | 1.6761 | 0.6889 |
| 0.0 | 8.0 | 1720 | 1.8802 | 0.6889 |
| 0.0 | 9.0 | 1935 | 2.1375 | 0.6889 |
| 0.0 | 10.0 | 2150 | 2.2804 | 0.6889 |
| 0.0 | 11.0 | 2365 | 2.5018 | 0.6667 |
| 0.0 | 12.0 | 2580 | 2.6034 | 0.7111 |
| 0.0 | 13.0 | 2795 | 2.1119 | 0.7556 |
| 0.0 | 14.0 | 3010 | 2.5118 | 0.7111 |
| 0.0 | 15.0 | 3225 | 2.4215 | 0.6889 |
| 0.0 | 16.0 | 3440 | 2.4416 | 0.6889 |
| 0.0 | 17.0 | 3655 | 2.4789 | 0.6889 |
| 0.0 | 18.0 | 3870 | 2.5530 | 0.6889 |
| 0.0 | 19.0 | 4085 | 2.6223 | 0.6889 |
| 0.0 | 20.0 | 4300 | 2.7198 | 0.6889 |
| 0.0 | 21.0 | 4515 | 2.8171 | 0.7111 |
| 0.0 | 22.0 | 4730 | 2.8585 | 0.7111 |
| 0.0 | 23.0 | 4945 | 2.8584 | 0.7111 |
| 0.0 | 24.0 | 5160 | 2.7240 | 0.7111 |
| 0.0 | 25.0 | 5375 | 2.6522 | 0.7111 |
| 0.0 | 26.0 | 5590 | 2.6766 | 0.7111 |
| 0.0 | 27.0 | 5805 | 2.6051 | 0.7333 |
| 0.0 | 28.0 | 6020 | 2.4780 | 0.7333 |
| 0.0 | 29.0 | 6235 | 2.4371 | 0.7333 |
| 0.0 | 30.0 | 6450 | 2.3680 | 0.7333 |
| 0.0 | 31.0 | 6665 | 2.3696 | 0.7111 |
| 0.0 | 32.0 | 6880 | 2.3638 | 0.7333 |
| 0.0 | 33.0 | 7095 | 2.3261 | 0.7333 |
| 0.0 | 34.0 | 7310 | 2.3611 | 0.7333 |
| 0.0 | 35.0 | 7525 | 2.3737 | 0.7333 |
| 0.0 | 36.0 | 7740 | 2.3371 | 0.6889 |
| 0.0 | 37.0 | 7955 | 2.3450 | 0.7111 |
| 0.0 | 38.0 | 8170 | 2.3727 | 0.6889 |
| 0.0 | 39.0 | 8385 | 2.3620 | 0.6889 |
| 0.0 | 40.0 | 8600 | 2.3928 | 0.6889 |
| 0.0 | 41.0 | 8815 | 2.3547 | 0.6889 |
| 0.0 | 42.0 | 9030 | 2.3935 | 0.6889 |
| 0.0 | 43.0 | 9245 | 2.3835 | 0.6889 |
| 0.0 | 44.0 | 9460 | 2.3407 | 0.6889 |
| 0.0 | 45.0 | 9675 | 2.3628 | 0.6889 |
| 0.0 | 46.0 | 9890 | 2.3464 | 0.6889 |
| 0.0 | 47.0 | 10105 | 2.3571 | 0.6889 |
| 0.0 | 48.0 | 10320 | 2.3604 | 0.6889 |
| 0.0 | 49.0 | 10535 | 2.3495 | 0.6889 |
| 0.0 | 50.0 | 10750 | 2.3474 | 0.6889 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
tfyxj/autotrain-mkp2u-20ss0
|
tfyxj
| 2023-12-25T14:09:55Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:tfyxj/autotrain-data-autotrain-mkp2u-20ss0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T14:09:29Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- tfyxj/autotrain-data-autotrain-mkp2u-20ss0
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.03333333333333333
f1_micro: 0.1111111111111111
f1_weighted: 0.02222222222222222
precision_macro: 0.018518518518518517
precision_micro: 0.1111111111111111
precision_weighted: 0.012345679012345678
recall_macro: 0.16666666666666666
recall_micro: 0.1111111111111111
recall_weighted: 0.1111111111111111
accuracy: 0.1111111111111111
|
soonchang/SpaceInvadersNoFrameskip-v4
|
soonchang
| 2023-12-25T14:09:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-25T14:09:02Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 514.50 +/- 151.29
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga soonchang -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga soonchang -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga soonchang
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
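The exploration settings above define a linearly annealed epsilon-greedy schedule: epsilon decays from 1.0 to `exploration_final_eps` over the first `exploration_fraction` of the 1M training steps. A minimal sketch of that schedule:

```python
def epsilon_schedule(step, n_timesteps=1_000_000, exploration_fraction=0.1,
                     final_eps=0.01, initial_eps=1.0):
    """Linear epsilon annealing, as configured in the hyperparameters above."""
    progress = min(1.0, step / (n_timesteps * exploration_fraction))
    return initial_eps + progress * (final_eps - initial_eps)

print(epsilon_schedule(100_000))  # fully annealed to final_eps after 10% of training
```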
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
yuanhuaisen/autotrain-yg0zn-6y4s5
|
yuanhuaisen
| 2023-12-25T13:56:23Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:yuanhuaisen/autotrain-data-autotrain-yg0zn-6y4s5",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T13:55:45Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- yuanhuaisen/autotrain-data-autotrain-yg0zn-6y4s5
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 1.0810546875
f1_macro: 0.26666666666666666
f1_micro: 0.5
f1_weighted: 0.4
precision_macro: 0.2222222222222222
precision_micro: 0.5
precision_weighted: 0.3333333333333333
recall_macro: 0.3333333333333333
recall_micro: 0.5
recall_weighted: 0.5
accuracy: 0.5
|
ntc-ai/SDXL-LoRA-slider.gucci
|
ntc-ai
| 2023-12-25T13:47:08Z | 141 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2023-12-25T13:47:04Z |
---
language:
- en
thumbnail: "images/evaluate/gucci.../gucci_17_3.0.png"
widget:
- text: gucci
output:
url: images/gucci_17_3.0.png
- text: gucci
output:
url: images/gucci_19_3.0.png
- text: gucci
output:
url: images/gucci_20_3.0.png
- text: gucci
output:
url: images/gucci_21_3.0.png
- text: gucci
output:
url: images/gucci_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "gucci"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - gucci (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/gucci_17_-3.0.png" width=256 height=256 /> | <img src="images/gucci_17_0.0.png" width=256 height=256 /> | <img src="images/gucci_17_3.0.png" width=256 height=256 /> |
| <img src="images/gucci_19_-3.0.png" width=256 height=256 /> | <img src="images/gucci_19_0.0.png" width=256 height=256 /> | <img src="images/gucci_19_3.0.png" width=256 height=256 /> |
| <img src="images/gucci_20_-3.0.png" width=256 height=256 /> | <img src="images/gucci_20_0.0.png" width=256 height=256 /> | <img src="images/gucci_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
gucci
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.gucci', weight_name='gucci.safetensors', adapter_name="gucci")
# Activate the LoRA
pipe.set_adapters(["gucci"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, gucci"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 610+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
hkivancoral/hushem_40x_deit_tiny_rms_00001_fold1
|
hkivancoral
| 2023-12-25T13:43:07Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T13:13:18Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_rms_00001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8444444444444444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_00001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4542
- Accuracy: 0.8444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0572 | 1.0 | 215 | 0.5452 | 0.8222 |
| 0.0213 | 2.0 | 430 | 0.8514 | 0.8222 |
| 0.0002 | 3.0 | 645 | 1.1716 | 0.7778 |
| 0.0001 | 4.0 | 860 | 1.1956 | 0.8 |
| 0.0 | 5.0 | 1075 | 1.3312 | 0.7778 |
| 0.0 | 6.0 | 1290 | 1.3747 | 0.8 |
| 0.0 | 7.0 | 1505 | 1.5420 | 0.7778 |
| 0.0 | 8.0 | 1720 | 1.5431 | 0.7778 |
| 0.0 | 9.0 | 1935 | 1.6767 | 0.7778 |
| 0.0 | 10.0 | 2150 | 1.6620 | 0.8 |
| 0.0141 | 11.0 | 2365 | 0.7752 | 0.8667 |
| 0.0 | 12.0 | 2580 | 1.2616 | 0.7556 |
| 0.0 | 13.0 | 2795 | 1.1161 | 0.8667 |
| 0.0 | 14.0 | 3010 | 1.1254 | 0.8444 |
| 0.0 | 15.0 | 3225 | 1.1188 | 0.8889 |
| 0.0 | 16.0 | 3440 | 1.1820 | 0.8889 |
| 0.0 | 17.0 | 3655 | 1.2564 | 0.8889 |
| 0.0 | 18.0 | 3870 | 1.3559 | 0.8889 |
| 0.0 | 19.0 | 4085 | 1.4292 | 0.8667 |
| 0.0 | 20.0 | 4300 | 1.5164 | 0.8667 |
| 0.0 | 21.0 | 4515 | 1.5191 | 0.8667 |
| 0.0 | 22.0 | 4730 | 1.4544 | 0.8667 |
| 0.0 | 23.0 | 4945 | 1.4836 | 0.8667 |
| 0.0 | 24.0 | 5160 | 1.5747 | 0.8222 |
| 0.0 | 25.0 | 5375 | 1.5707 | 0.8222 |
| 0.0 | 26.0 | 5590 | 1.5222 | 0.8222 |
| 0.0 | 27.0 | 5805 | 1.4844 | 0.8667 |
| 0.0 | 28.0 | 6020 | 1.4898 | 0.8667 |
| 0.0 | 29.0 | 6235 | 1.5381 | 0.8444 |
| 0.0 | 30.0 | 6450 | 1.5320 | 0.8222 |
| 0.0 | 31.0 | 6665 | 1.5518 | 0.8222 |
| 0.0 | 32.0 | 6880 | 1.4681 | 0.8667 |
| 0.0 | 33.0 | 7095 | 1.5245 | 0.8444 |
| 0.0 | 34.0 | 7310 | 1.4517 | 0.8667 |
| 0.0 | 35.0 | 7525 | 1.4519 | 0.8667 |
| 0.0 | 36.0 | 7740 | 1.4734 | 0.8667 |
| 0.0 | 37.0 | 7955 | 1.5324 | 0.8444 |
| 0.0 | 38.0 | 8170 | 1.4772 | 0.8444 |
| 0.0 | 39.0 | 8385 | 1.4506 | 0.8444 |
| 0.0 | 40.0 | 8600 | 1.4509 | 0.8444 |
| 0.0 | 41.0 | 8815 | 1.5306 | 0.8444 |
| 0.0 | 42.0 | 9030 | 1.4735 | 0.8444 |
| 0.0 | 43.0 | 9245 | 1.4585 | 0.8444 |
| 0.0 | 44.0 | 9460 | 1.4843 | 0.8444 |
| 0.0 | 45.0 | 9675 | 1.4519 | 0.8444 |
| 0.0 | 46.0 | 9890 | 1.4772 | 0.8444 |
| 0.0 | 47.0 | 10105 | 1.4373 | 0.8444 |
| 0.0 | 48.0 | 10320 | 1.4662 | 0.8444 |
| 0.0 | 49.0 | 10535 | 1.4530 | 0.8444 |
| 0.0 | 50.0 | 10750 | 1.4542 | 0.8444 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
robonspace/ppo-LunarLander-v2
|
robonspace
| 2023-12-25T13:38:46Z | 2 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-25T13:38:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.51 +/- 24.81
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3.
# The filename is assumed to follow the standard SB3 naming convention.
checkpoint = load_from_hub(repo_id="robonspace/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
yuanhuaisen/autotrain-1pboy-weoon
|
yuanhuaisen
| 2023-12-25T13:37:48Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:yuanhuaisen/autotrain-data-autotrain-1pboy-weoon",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T13:37:12Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- yuanhuaisen/autotrain-data-autotrain-1pboy-weoon
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 1.07177734375
f1_macro: 0.5555555555555555
f1_micro: 0.6666666666666666
f1_weighted: 0.5555555555555555
precision_macro: 0.5
precision_micro: 0.6666666666666666
precision_weighted: 0.5
recall_macro: 0.6666666666666666
recall_micro: 0.6666666666666666
recall_weighted: 0.6666666666666666
accuracy: 0.6666666666666666
|
NoCrypt/fast-repo
|
NoCrypt
| 2023-12-25T13:30:51Z | 0 | 13 | null |
[
"code",
"en",
"region:us"
] | null | 2022-12-09T05:03:42Z |
---
language:
- en
tags:
- code
---
# This is what powered almost all of my Colabs
Mostly uses LZ4 compression, which means you'll need a specialized program to extract it, especially on Windows.
For Windows users, I recommend using [7zip-zstd](https://github.com/mcmilk/7-Zip-zstd/releases/latest) (it's 7zip but with lz4 support and more)
For Linux users, use tar with liblz4-tool like this: `tar -xI lz4 -f repo.tar.lz4`
|
TaiQuach/detr-resnet-50_finetuned_cppe5
|
TaiQuach
| 2023-12-25T13:14:07Z | 29 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-12-25T10:32:56Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
soumyasikun/GODcreator_v1.0_epqpr8902k-inpainting
|
soumyasikun
| 2023-12-25T13:11:16Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"v8",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-25T12:33:32Z |
---
license: creativeml-openrail-m
---
<b>The recommended negative prompt:</b><br>
(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck<br>
<b>OR</b><br>
(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation
<b>Recommended parameters for generation:</b><br>
Euler A or DPM++ SDE Karras<br>
CFG Scale 3.5 - 15<br>
Hires. fix with 4x-UltraSharp upscaler<br>
0 Hires steps and Denoising strength 0.25-0.7<br>
Upscale by 1.1-2.0
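As a rough sketch of how these A1111-style recommendations might map onto a `diffusers` workflow (the pipeline call is illustrative and commented out; the sampler and upscaler names follow A1111 conventions, and `guidance_scale` corresponds to CFG Scale):

```python
# Sketch only: collects the recommended settings in one place.
negative_prompt = (
    "(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, "
    "cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, "
    "worst quality, low quality, jpeg artifacts"
)

settings = {
    "sampler": "DPM++ SDE Karras",  # or "Euler a"
    "guidance_scale": 7.0,          # recommended CFG range is 3.5 - 15
    "denoising_strength": 0.5,      # recommended 0.25 - 0.7 for Hires. fix
    "upscale_by": 1.5,              # recommended 1.1 - 2.0
}

# Illustrative diffusers usage (requires a GPU and the model weights, so it is
# not executed here):
# from diffusers import StableDiffusionPipeline
# import torch
# pipe = StableDiffusionPipeline.from_pretrained(
#     "soumyasikun/GODcreator_v1.0_epqpr8902k-inpainting",
#     torch_dtype=torch.float16,
# ).to("cuda")
# image = pipe("a portrait photo", negative_prompt=negative_prompt,
#              guidance_scale=settings["guidance_scale"]).images[0]
```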
|
iloncka/regnety_002.pycls_in1k_ep_20
|
iloncka
| 2023-12-25T13:10:47Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2023-12-25T13:07:49Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
Thaweewat/typhoon-7b-chat-pobpad
|
Thaweewat
| 2023-12-25T13:10:06Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"th",
"dataset:Thaweewat/pobpad",
"base_model:TheBloke/typhoon-7B-GPTQ",
"base_model:adapter:TheBloke/typhoon-7B-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-12-25T09:14:27Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/typhoon-7B-GPTQ
model_creator: SCB 10X
model_name: Typhoon 7B
model_type: mistral
model-index:
- name: typhoon-7b-chat-alpaca
results: []
datasets:
- Thaweewat/pobpad
language:
- th
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# typhoon-7b-chat-alpaca
This model is a fine-tuned version of [TheBloke/typhoon-7B-GPTQ](https://huggingface.co/TheBloke/typhoon-7B-GPTQ) on the [Pobpad](https://huggingface.co/datasets/Thaweewat/pobpad) dataset.
> **_Experimental:_** This experimental model is not suitable for real medical use.
> It can hallucinate and generate dangerous answers. Further medical evaluation is needed.
## Usage
```python
from peft import AutoPeftModelForCausalLM
from transformers import GenerationConfig, AutoTokenizer
import torch
import time
def generate_response(input_text: str) -> str:
"""
Generate a response for the given input text using the Typhoon-7B model.
Parameters:
input_text (str): The input text prompt.
Returns:
str: The generated response.
"""
# Initialize the tokenizer and model only once
tokenizer = AutoTokenizer.from_pretrained("Thaweewat/typhoon-7b-chat-pobpad")
model = AutoPeftModelForCausalLM.from_pretrained(
"Thaweewat/typhoon-7b-chat-pobpad",
low_cpu_mem_usage=True,
return_dict=True,
torch_dtype=torch.float16,
device_map="cuda")
generation_config = GenerationConfig(
do_sample=True,
top_k=1,
temperature=0.4, # after a few experiments, values between 0.3 and 0.4 seemed to generate well
max_new_tokens=300,
repetition_penalty=1.1,
pad_token_id=tokenizer.eos_token_id)
# Tokenize input
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
# Generate outputs
st_time = time.time()
outputs = model.generate(**inputs, generation_config=generation_config)
# Decode and print response
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Response time: {time.time() - st_time} seconds")
return response
# Sample usage:
# Example from https://pantip.com/topic/42422619
input_text = """###Human: สวัสดีค่า มาตามหาอาหารเสริม คุณเเม่อายุ50+ค่ะ
ทำงานเดินทั้งวันเเถมพักผ่อนน้อยมีบางช่วงที่ดูไม่ค่อยสดใส อยากให้ทุกคนช่วยเเนะนำอาหารเสริมหน่อยค่า
###Assistant: """
print(generate_response(input_text))
"""
อาหารเสริมสำหรับผู้สูงอายุ ควรเลือกทานที่มีวิตามินและแร่ธาตุครบถ้วน เช่น วิตามินบีรวม วิตามินซี แคลเซียม
แมกนีเซียม เหล็ก โฟเลต เป็นต้น ซึ่งจะช่วยให้ร่างกายแข็งแรงขึ้น และควรหลีกเลี่ยงการรับประทานอาหารประเภทไขมันสูง
เพราะอาจทำให้เกิดโรคหัวใจได้หากต้องการทราบข้อมูลเพิ่มเติม สามารถสอบถามเภสัชกรหรือแพทย์ประจำตัวเพื่อขอคำแนะนำในการดูแลสุขภาพก่อนนะคะ
ขอเป็นกำลังใจให้นะคะ หากมีข้อสงสัยสามารถปรึกษาพยาบาลสายด่วน โทร.1669 ได้ตลอดเวลาค่ะ
*หมายเหตุ : การใช้ยาและการปรับเปลี่ยนพฤติกรรมต่างๆ ควรอยู่ภายใต้การดูแลของบุคลากรทางการแพทย์*
"""
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
Realgon/N_roberta_agnews_padding0model
|
Realgon
| 2023-12-25T13:04:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:ag_news",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-25T11:01:16Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
datasets:
- ag_news
metrics:
- accuracy
model-index:
- name: N_roberta_agnews_padding0model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: ag_news
type: ag_news
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9501315789473684
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# N_roberta_agnews_padding0model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5421
- Accuracy: 0.9501
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.1929 | 1.0 | 7500 | 0.2180 | 0.9363 |
| 0.1646 | 2.0 | 15000 | 0.2092 | 0.9455 |
| 0.1502 | 3.0 | 22500 | 0.2136 | 0.9478 |
| 0.1217 | 4.0 | 30000 | 0.2395 | 0.9476 |
| 0.1008 | 5.0 | 37500 | 0.2357 | 0.9501 |
| 0.0789 | 6.0 | 45000 | 0.3286 | 0.9420 |
| 0.0625 | 7.0 | 52500 | 0.3378 | 0.9439 |
| 0.0546 | 8.0 | 60000 | 0.4044 | 0.9443 |
| 0.0434 | 9.0 | 67500 | 0.4361 | 0.9412 |
| 0.0321 | 10.0 | 75000 | 0.4044 | 0.9453 |
| 0.0254 | 11.0 | 82500 | 0.4670 | 0.9455 |
| 0.0302 | 12.0 | 90000 | 0.4657 | 0.9438 |
| 0.0224 | 13.0 | 97500 | 0.4942 | 0.9432 |
| 0.0085 | 14.0 | 105000 | 0.5315 | 0.9449 |
| 0.0053 | 15.0 | 112500 | 0.5283 | 0.9455 |
| 0.01 | 16.0 | 120000 | 0.5004 | 0.9466 |
| 0.0061 | 17.0 | 127500 | 0.5430 | 0.9458 |
| 0.0042 | 18.0 | 135000 | 0.5116 | 0.9486 |
| 0.0034 | 19.0 | 142500 | 0.5379 | 0.9491 |
| 0.0022 | 20.0 | 150000 | 0.5421 | 0.9501 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jonathanbb/distilbert-base-uncased-finetuned-edgar10pct
|
jonathanbb
| 2023-12-25T12:48:45Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-12-24T13:46:26Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-edgar10pct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-edgar10pct
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.8442
- eval_runtime: 172.5467
- eval_samples_per_second: 57.955
- eval_steps_per_second: 0.91
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
hkivancoral/hushem_40x_deit_tiny_rms_0001_fold5
|
hkivancoral
| 2023-12-25T12:30:19Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T11:58:59Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_rms_0001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8780487804878049
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_0001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8832
- Accuracy: 0.8780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1509 | 1.0 | 220 | 0.5608 | 0.8537 |
| 0.0292 | 2.0 | 440 | 0.1504 | 0.9512 |
| 0.1009 | 3.0 | 660 | 0.7468 | 0.8537 |
| 0.011 | 4.0 | 880 | 0.6340 | 0.7805 |
| 0.0031 | 5.0 | 1100 | 0.8446 | 0.8293 |
| 0.0646 | 6.0 | 1320 | 1.0420 | 0.8537 |
| 0.0678 | 7.0 | 1540 | 0.6521 | 0.8293 |
| 0.0002 | 8.0 | 1760 | 1.1011 | 0.8537 |
| 0.0677 | 9.0 | 1980 | 1.2605 | 0.8049 |
| 0.0002 | 10.0 | 2200 | 0.4029 | 0.9024 |
| 0.0011 | 11.0 | 2420 | 0.5279 | 0.9512 |
| 0.0002 | 12.0 | 2640 | 0.5883 | 0.9268 |
| 0.0801 | 13.0 | 2860 | 1.0161 | 0.8293 |
| 0.0 | 14.0 | 3080 | 0.7618 | 0.9024 |
| 0.0 | 15.0 | 3300 | 0.7876 | 0.8293 |
| 0.0144 | 16.0 | 3520 | 0.6802 | 0.8780 |
| 0.0032 | 17.0 | 3740 | 0.2440 | 0.9268 |
| 0.0 | 18.0 | 3960 | 0.4384 | 0.8293 |
| 0.0 | 19.0 | 4180 | 0.6787 | 0.8537 |
| 0.0 | 20.0 | 4400 | 0.6527 | 0.8293 |
| 0.0 | 21.0 | 4620 | 0.6512 | 0.8537 |
| 0.0 | 22.0 | 4840 | 0.6749 | 0.8537 |
| 0.0 | 23.0 | 5060 | 0.6838 | 0.8537 |
| 0.0 | 24.0 | 5280 | 0.7554 | 0.8537 |
| 0.0 | 25.0 | 5500 | 0.8097 | 0.8780 |
| 0.0 | 26.0 | 5720 | 0.8183 | 0.8780 |
| 0.0 | 27.0 | 5940 | 0.8490 | 0.8780 |
| 0.0 | 28.0 | 6160 | 0.9053 | 0.8537 |
| 0.0 | 29.0 | 6380 | 0.9213 | 0.8537 |
| 0.0 | 30.0 | 6600 | 0.9237 | 0.8780 |
| 0.0 | 31.0 | 6820 | 0.9293 | 0.8537 |
| 0.0 | 32.0 | 7040 | 0.9309 | 0.8780 |
| 0.0 | 33.0 | 7260 | 0.9345 | 0.8780 |
| 0.0 | 34.0 | 7480 | 0.9273 | 0.8780 |
| 0.0 | 35.0 | 7700 | 0.9432 | 0.8780 |
| 0.0 | 36.0 | 7920 | 0.9371 | 0.8780 |
| 0.0 | 37.0 | 8140 | 0.9224 | 0.9024 |
| 0.0 | 38.0 | 8360 | 0.9410 | 0.8780 |
| 0.0 | 39.0 | 8580 | 0.9241 | 0.8780 |
| 0.0 | 40.0 | 8800 | 0.9144 | 0.8780 |
| 0.0 | 41.0 | 9020 | 0.9167 | 0.8780 |
| 0.0 | 42.0 | 9240 | 0.8992 | 0.8780 |
| 0.0 | 43.0 | 9460 | 0.9050 | 0.8780 |
| 0.0 | 44.0 | 9680 | 0.8956 | 0.8780 |
| 0.0 | 45.0 | 9900 | 0.8902 | 0.8780 |
| 0.0 | 46.0 | 10120 | 0.8925 | 0.8780 |
| 0.0 | 47.0 | 10340 | 0.8847 | 0.8780 |
| 0.0 | 48.0 | 10560 | 0.8839 | 0.8780 |
| 0.0 | 49.0 | 10780 | 0.8833 | 0.8780 |
| 0.0 | 50.0 | 11000 | 0.8832 | 0.8780 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ
|
TheBloke
| 2023-12-25T12:26:11Z | 21 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mistral",
"finetune",
"sft",
"dpo",
"chatml",
"augmentation",
"german",
"en",
"de",
"fr",
"it",
"es",
"dataset:Open-Orca/SlimOrca",
"dataset:argilla/distilabel-math-preference-dpo",
"base_model:VAGOsolutions/SauerkrautLM-Mixtral-8x7B",
"base_model:quantized:VAGOsolutions/SauerkrautLM-Mixtral-8x7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-12-25T10:36:10Z |
---
base_model: VAGOsolutions/SauerkrautLM-Mixtral-8x7B
datasets:
- Open-Orca/SlimOrca
- argilla/distilabel-math-preference-dpo
inference: false
language:
- en
- de
- fr
- it
- es
library_name: transformers
license: apache-2.0
model_creator: VAGO solutions
model_name: SauerkrautLM Mixtral 8X7B
model_type: mixtral
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- mistral
- finetune
- sft
- dpo
- chatml
- augmentation
- german
- mixtral
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# SauerkrautLM Mixtral 8X7B - GPTQ
- Model creator: [VAGO solutions](https://huggingface.co/VAGOsolutions)
- Original model: [SauerkrautLM Mixtral 8X7B](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B)
<!-- description start -->
# Description
This repo contains GPTQ model files for [VAGO solutions's SauerkrautLM Mixtral 8X7B](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-GGUF)
* [VAGO solutions's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
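A small helper (an assumed function name, not part of this repo) can render the ChatML template above programmatically, which avoids whitespace mistakes when building prompts by hand:

```python
def chatml_prompt(system_message: str, prompt: str) -> str:
    """Render the ChatML template used by this model (sketch)."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

text = chatml_prompt("You are a helpful assistant.", "Tell me about AI")
```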
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 8192 | 23.81 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 8192 | 24.70 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 8192 | 27.42 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 8192 | 18.85 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 8192 | 21.43 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 8192 | 47.04 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 8192 | 48.10 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
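The sizes in the table roughly follow `parameters × bits / 8`, plus a per-group overhead for scales and zero points. A back-of-the-envelope estimator, assuming Mixtral 8x7B has about 46.7B total parameters (an approximation; in practice not all layers are quantized identically, so the table values differ slightly):

```python
def gptq_size_gb(n_params, bits, group_size=None):
    """Rough GPTQ checkpoint size: quantized weights plus per-group
    fp16 scales/zero points. Ignores embeddings and unquantized layers."""
    weight_bytes = n_params * bits / 8
    if group_size:  # each group stores a scale and zero point (~4 bytes)
        weight_bytes += n_params / group_size * 4
    return weight_bytes / 1e9

# Assumption: Mixtral 8x7B has roughly 46.7e9 parameters in total.
est_4bit = gptq_size_gb(46.7e9, 4)          # ~23 GB, near the 23.81 GB "main" branch
est_4bit_32g = gptq_size_gb(46.7e9, 4, 32)  # group overhead adds several GB
```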
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `SauerkrautLM-Mixtral-8x7B-GPTQ`:
```shell
mkdir SauerkrautLM-Mixtral-8x7B-GPTQ
huggingface-cli download TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ --local-dir SauerkrautLM-Mixtral-8x7B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir SauerkrautLM-Mixtral-8x7B-GPTQ
huggingface-cli download TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir SauerkrautLM-Mixtral-8x7B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir SauerkrautLM-Mixtral-8x7B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ --local-dir SauerkrautLM-Mixtral-8x7B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `SauerkrautLM-Mixtral-8x7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
system_message = "You are a helpful assistant"  # needed by the template below
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: VAGO solutions's SauerkrautLM Mixtral 8X7B

## VAGO solutions SauerkrautLM-Mixtral-8x7B
Introducing **SauerkrautLM-Mixtral-8x7B** – our Sauerkraut version of the powerful Mixtral-8x7B!
Finetuned and aligned with **SFT** and **DPO**
# Table of Contents
1. [Overview of all SauerkrautLM-Mixtral models](#all-sauerkrautlm-mixtral-models)
2. [Model Details](#model-details)
   - [Prompt template](#prompt-template)
   - [Training Dataset](#training-dataset)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## All SauerkrautLM-Mixtral Models
| Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-Mixtral-8x7B | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B) | coming soon | coming soon | coming soon |
| SauerkrautLM-Mixtral-8x7B-Instruct | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct) | coming soon | coming soon | coming soon |
## Model Details
**SauerkrautLM-Mixtral-8x7B**
- **Model Type:** SauerkrautLM-Mixtral-8x7B is a Mixture of Experts (MoE) Model based on [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
- **Language(s):** English, German, French, Italian, Spanish
- **License:** APACHE 2.0
- **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:golchinfar@vago-solutions.de)
### Training Dataset:
SauerkrautLM-Mixtral-8x7B was trained with a mix of German data augmentation and translated data.
It was **SFT**-trained on the dataset [OpenOrca/Slim-Orca](https://huggingface.co/datasets/Open-Orca/SlimOrca) and aligned through **DPO** with our **new German SauerkrautLM-DPO dataset**, using parts of the SFT SauerkrautLM dataset
as chosen answers and [Sauerkraut-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) outputs as rejected answers. We additionally added **translated parts of [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)** and **[argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo).**
We found that a simple translation of training data can lead to unnatural German phrasings.
Data augmentation techniques were therefore used to ensure grammatical and syntactical correctness and a more natural German wording in our training data.
### Data Contamination Test Results
Some models on the HuggingFace leaderboard have been affected by benchmark data leaking into their training sets.
We checked our SauerkrautLM-DPO dataset for this problem with a dedicated test [1], run on a smaller model.
The HuggingFace team has used the same methods [2, 3].
Our results, with all `result < 0.1, %:` values well below 0.9, indicate no signs of contamination in our dataset.
*The data contamination test results of HellaSwag and Winograde will be added once [1] supports them.*
| Dataset | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **SauerkrautLM-DPO**| result < 0.1, %: 0.0 |result < 0.1, %: 0.09 | result < 0.1, %: 0.13 | result < 0.1, %: 0.16 |
[1] https://github.com/swj0419/detect-pretrain-code-contamination
[2] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06
[3] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230
### Prompt Template:
```
<|im_start|>system
Du bist ein großes Sprachmodell, das höflich und kompetent antwortet. Schreibe deine Gedanken Schritt für Schritt auf, um Probleme sinnvoll zu lösen.<|im_end|>
<|im_start|>user
Wie geht es dir?<|im_end|>
<|im_start|>assistant
```
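For reference, the ChatML-style template above can be assembled programmatically. This is a minimal sketch; the helper function name is illustrative, not part of the model's API:

```python
def build_chatml_prompt(system_message: str, user_message: str) -> str:
    """Assemble a ChatML prompt in the format shown above."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "Du bist ein großes Sprachmodell, das höflich und kompetent antwortet.",
    "Wie geht es dir?",
)
```

The trailing `<|im_start|>assistant\n` leaves the prompt open for the model to continue as the assistant turn.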
## Evaluation

*Evaluated with lm-evaluation-harness v0.3.0 - MMLU coming soon.
*All benchmarks were performed with a sliding window of 4096. New benchmarks with the sliding window disabled are coming soon.
## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not responsible for the actions of third parties who use our models. These models may be employed for commercial purposes, and the Apache 2.0 license remains applicable and is included with the model files.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:vaziri@vago-solutions.de). We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.
## Acknowledgement
Many thanks to [OpenOrca](https://huggingface.co/Open-Orca), [argilla](https://huggingface.co/datasets/argilla) and [Huggingface](https://huggingface.co) for providing such valuable datasets to the Open-Source community. And of course a big thanks to MistralAI for providing the open source community with their latest technology!
|
ZDPLI/Pyramid-PPO-v1
|
ZDPLI
| 2023-12-25T12:13:11Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-12-25T11:53:46Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ZDPLI/Pyramid-PPO-v1
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
exontidev/SISUS_ADAPTERS_SIKERS
|
exontidev
| 2023-12-25T12:07:26Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:ai-forever/rugpt3large_based_on_gpt2",
"base_model:adapter:ai-forever/rugpt3large_based_on_gpt2",
"region:us"
] | null | 2023-12-19T16:27:46Z |
---
library_name: peft
base_model: ai-forever/rugpt3large_based_on_gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
internlm/internlm-xcomposer-7b
|
internlm
| 2023-12-25T12:03:58Z | 1,253 | 20 |
transformers
|
[
"transformers",
"pytorch",
"InternLMXComposer",
"feature-extraction",
"text-generation",
"custom_code",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-09-26T03:39:53Z |
---
license: apache-2.0
pipeline_tag: text-generation
---
<p align="center">
<img src="logo.png" width="400"/>
<p>
<p align="center">
<b><font size="6">InternLM-XComposer</font></b>
<p>
<div align="center">
[💻Github Repo](https://github.com/InternLM/InternLM-XComposer)
</div>
**InternLM-XComposer** is a vision-language large model (VLLM) based on [InternLM](https://github.com/InternLM/InternLM/tree/main) for advanced text-image comprehension and composition. InternLM-XComposer has several appealing properties:
- **Interleaved Text-Image Composition**: InternLM-XComposer can effortlessly generate coherent and contextual articles that seamlessly integrate images, providing a more engaging and immersive reading experience. The interleaved text-image composition is implemented in the following steps:
    1. **Text Generation**: It crafts long-form text based on human-provided instructions.
    2. **Image Spotting and Captioning**: It pinpoints optimal locations for image placement and furnishes image descriptions.
    3. **Image Retrieval and Selection**: It selects image candidates and identifies the image that optimally complements the content.
- **Comprehension with Rich Multilingual Knowledge**: The text-image comprehension is empowered by training on extensive multi-modal multilingual concepts with carefully crafted strategies, resulting in a deep understanding of visual content.
- **Strong performance**: It consistently achieves state-of-the-art results across various benchmarks for vision-language large models, including [MME Benchmark](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation) (English), [MMBench](https://opencompass.org.cn/leaderboard-multimodal) (English), [Seed-Bench](https://huggingface.co/spaces/AILab-CVC/SEED-Bench_Leaderboard) (English), [CCBench](https://opencompass.org.cn/leaderboard-multimodal) (Chinese), and [MMBench-CN](https://opencompass.org.cn/leaderboard-multimodal) (Chinese).
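The three composition steps above can be sketched as a simple orchestration loop. This is a schematic only: the callables below are hypothetical stand-ins for the actual model components, and the real InternLM-XComposer API differs:

```python
def compose_interleaved(instruction, image_pool, generate_text, spot_images, pick_image):
    """Hypothetical sketch of the three-step interleaved composition loop."""
    text = generate_text(instruction)          # 1. long-form text generation
    slots = spot_images(text)                  # 2. list of (char_position, caption)
    pieces, cursor = [], 0
    for pos, caption in sorted(slots):
        pieces.append(text[cursor:pos])        # text preceding the image slot
        pieces.append(pick_image(caption, image_pool))  # 3. best-matching image
        cursor = pos
    pieces.append(text[cursor:])
    return pieces
```

The returned list alternates text segments and selected images, which a renderer can then lay out as an interleaved article.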
We release InternLM-XComposer series in two versions:
- InternLM-XComposer-VL: The pretrained VLLM model with InternLM as the initialization of the LLM, achieving strong performance on various multimodal benchmarks, e.g., MME Benchmark, MMBench, Seed-Bench, CCBench, and MMBench-CN.
- InternLM-XComposer: The finetuned VLLM for *Interleaved Text-Image Composition* and *LLM-based AI assistant*.
<br>
|
GebeyaTalent/mpnetwithoutchunking
|
GebeyaTalent
| 2023-12-25T12:01:40Z | 4 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"dataset:eli5",
"dataset:snli",
"dataset:multi_nli",
"dataset:wikihow",
"dataset:natural_questions",
"dataset:trivia_qa",
"dataset:embedding-data/sentence-compression",
"dataset:embedding-data/flickr30k-captions",
"dataset:embedding-data/altlex",
"dataset:embedding-data/simple-wiki",
"dataset:embedding-data/QQP",
"dataset:embedding-data/SPECTER",
"dataset:embedding-data/PAQ_pairs",
"dataset:embedding-data/WikiAnswers",
"arxiv:1904.06472",
"arxiv:2102.07033",
"arxiv:2104.08727",
"arxiv:1704.05179",
"arxiv:1810.09305",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-12-25T11:58:59Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: apache-2.0
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-data/simple-wiki
- embedding-data/QQP
- embedding-data/SPECTER
- embedding-data/PAQ_pairs
- embedding-data/WikiAnswers
---
# all-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v2)
------
## Background
The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a
dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the
[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
organized by Hugging Face, as part of the project:
[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as help from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
By default, input text longer than 384 word pieces is truncated.
## Training procedure
### Pre-training
We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure.
### Fine-tuning
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch.
We then apply the cross-entropy loss by comparing with the true pairs.
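Schematically, this in-batch contrastive objective can be sketched in NumPy as follows. The scale factor of 20 mirrors common sentence-transformers settings and is an assumption here, not taken from this card:

```python
import numpy as np

def in_batch_contrastive_loss(a, b, scale=20.0):
    """In-batch contrastive loss: for row i of `a`, the true pair is row i of `b`."""
    # L2-normalize so the dot product is cosine similarity
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = scale * a @ b.T                          # (batch, batch) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with labels on the diagonal (each sentence's true pair)
    return -np.mean(np.diag(log_probs))

# Toy usage: random embeddings standing in for encoder outputs
pairs = np.random.RandomState(0).randn(8, 16)
loss = in_batch_contrastive_loss(pairs, pairs)
```

Every other sentence in the batch serves as an implicit negative, which is why very large batch sizes (1024 here) help.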
#### Hyper parameters
We trained our model on a TPU v3-8. We train the model for 100k steps using a batch size of 1024 (128 per TPU core).
We use a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
#### Training data
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
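The weighted-sampling scheme described above, where each batch draws from a dataset with probability given by `data_config.json`, can be sketched as follows. The dataset names and weights here are illustrative only:

```python
import random

def draw_dataset_names(weights, n_draws, seed=0):
    """Draw dataset names with probability proportional to their weight."""
    rng = random.Random(seed)
    names = list(weights)
    masses = [weights[n] for n in names]
    return [rng.choices(names, weights=masses, k=1)[0] for _ in range(n_draws)]

# Illustrative weights, roughly proportional to corpus sizes (millions of pairs)
draws = draw_dataset_names({"reddit": 726.5, "s2orc": 116.3, "paq": 64.4}, n_draws=8)
```

Each training step would then pull its batch from the dataset drawn for that step, so larger corpora contribute proportionally more batches.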
|
juliozhao/hadpo-llava-1.5
|
juliozhao
| 2023-12-25T11:59:59Z | 44 | 0 |
peft
|
[
"peft",
"llava",
"region:us"
] | null | 2023-12-25T11:57:29Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
hkivancoral/hushem_40x_deit_tiny_rms_0001_fold4
|
hkivancoral
| 2023-12-25T11:58:50Z | 23 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T11:28:01Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_rms_0001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9523809523809523
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_0001_fold4
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4037
- Accuracy: 0.9524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
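The linear schedule with 10% warmup listed above can be sketched as a pure function (a hypothetical helper for illustration, not part of the actual training code; it mirrors the behavior of a linear scheduler with `warmup_ratio=0.1`):

```python
def linear_warmup_lr(step, total_steps, peak_lr=1e-4, warmup_ratio=0.1):
    """Linear warmup from 0 to peak_lr, then linear decay back to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Warmup phase: learning rate rises linearly with the step count.
        return peak_lr * step / warmup_steps
    # Decay phase: learning rate falls linearly to zero at the last step.
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)
```

With 50 epochs of 219 steps each (10950 total steps, matching the table above), the peak learning rate of 1e-4 is reached at step 1095 and decays to zero by step 10950.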
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1838 | 1.0 | 219 | 0.4926 | 0.8571 |
| 0.0446 | 2.0 | 438 | 0.2754 | 0.9286 |
| 0.0295 | 3.0 | 657 | 0.9751 | 0.8810 |
| 0.0096 | 4.0 | 876 | 0.1123 | 0.9762 |
| 0.0055 | 5.0 | 1095 | 0.3687 | 0.9048 |
| 0.0033 | 6.0 | 1314 | 0.3076 | 0.9524 |
| 0.0283 | 7.0 | 1533 | 0.8089 | 0.8571 |
| 0.0044 | 8.0 | 1752 | 0.2435 | 0.9286 |
| 0.0018 | 9.0 | 1971 | 0.7038 | 0.8571 |
| 0.0191 | 10.0 | 2190 | 0.5242 | 0.9048 |
| 0.0001 | 11.0 | 2409 | 0.8130 | 0.9286 |
| 0.0007 | 12.0 | 2628 | 0.6030 | 0.9048 |
| 0.0189 | 13.0 | 2847 | 0.5406 | 0.9048 |
| 0.0002 | 14.0 | 3066 | 0.6774 | 0.8571 |
| 0.0018 | 15.0 | 3285 | 0.6982 | 0.9286 |
| 0.0001 | 16.0 | 3504 | 0.3877 | 0.9524 |
| 0.0008 | 17.0 | 3723 | 0.6996 | 0.8810 |
| 0.0 | 18.0 | 3942 | 0.5507 | 0.9286 |
| 0.0 | 19.0 | 4161 | 0.3796 | 0.9524 |
| 0.0001 | 20.0 | 4380 | 0.3967 | 0.9286 |
| 0.0 | 21.0 | 4599 | 0.4081 | 0.9286 |
| 0.0 | 22.0 | 4818 | 0.3898 | 0.9286 |
| 0.0 | 23.0 | 5037 | 0.3709 | 0.9286 |
| 0.0 | 24.0 | 5256 | 0.3640 | 0.9524 |
| 0.0 | 25.0 | 5475 | 0.3789 | 0.9524 |
| 0.0 | 26.0 | 5694 | 0.3987 | 0.9286 |
| 0.0 | 27.0 | 5913 | 0.4326 | 0.9286 |
| 0.0 | 28.0 | 6132 | 0.4566 | 0.9286 |
| 0.0 | 29.0 | 6351 | 0.4673 | 0.9286 |
| 0.0 | 30.0 | 6570 | 0.4642 | 0.9286 |
| 0.0 | 31.0 | 6789 | 0.4534 | 0.9286 |
| 0.0 | 32.0 | 7008 | 0.4388 | 0.9286 |
| 0.0 | 33.0 | 7227 | 0.4268 | 0.9286 |
| 0.0 | 34.0 | 7446 | 0.4182 | 0.9286 |
| 0.0 | 35.0 | 7665 | 0.4134 | 0.9286 |
| 0.0 | 36.0 | 7884 | 0.4102 | 0.9286 |
| 0.0 | 37.0 | 8103 | 0.4079 | 0.9286 |
| 0.0 | 38.0 | 8322 | 0.4066 | 0.9286 |
| 0.0 | 39.0 | 8541 | 0.4041 | 0.9286 |
| 0.0 | 40.0 | 8760 | 0.4048 | 0.9286 |
| 0.0 | 41.0 | 8979 | 0.4034 | 0.9524 |
| 0.0 | 42.0 | 9198 | 0.4032 | 0.9524 |
| 0.0 | 43.0 | 9417 | 0.4038 | 0.9524 |
| 0.0 | 44.0 | 9636 | 0.4040 | 0.9524 |
| 0.0 | 45.0 | 9855 | 0.4040 | 0.9524 |
| 0.0 | 46.0 | 10074 | 0.4038 | 0.9524 |
| 0.0 | 47.0 | 10293 | 0.4038 | 0.9524 |
| 0.0 | 48.0 | 10512 | 0.4039 | 0.9524 |
| 0.0 | 49.0 | 10731 | 0.4037 | 0.9524 |
| 0.0 | 50.0 | 10950 | 0.4037 | 0.9524 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
yuanhuaisen/autotrain-c9zbz-0tb92
|
yuanhuaisen
| 2023-12-25T11:57:12Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:yuanhuaisen/autotrain-data-autotrain-c9zbz-0tb92",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T11:56:54Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- yuanhuaisen/autotrain-data-autotrain-c9zbz-0tb92
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.11805555555555557
f1_micro: 0.21518987341772153
f1_weighted: 0.07621308016877638
precision_macro: 0.07172995780590717
precision_micro: 0.21518987341772153
precision_weighted: 0.04630668162153501
recall_macro: 0.3333333333333333
recall_micro: 0.21518987341772153
recall_weighted: 0.21518987341772153
accuracy: 0.21518987341772153
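The reported metrics are consistent with a model that collapsed to predicting a single class: with 3 classes, a constant predictor yields a macro recall of exactly 1/3 while micro recall equals the majority-predicted class frequency (0.2152 ≈ 17/79). A minimal sketch of that relationship, with hypothetical class counts chosen to match the reported numbers:

```python
def micro_macro_recall(y_true, y_pred, n_classes):
    """Compute micro- and macro-averaged recall from two label lists."""
    per_class = []
    for c in range(n_classes):
        support = sum(1 for t in y_true if t == c)
        hits = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        per_class.append(hits / support if support else 0.0)
    micro = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    macro = sum(per_class) / n_classes
    return micro, macro

# A constant predictor over 3 classes: one class is fully recalled, the
# others not at all, so macro recall is 1/3 regardless of class balance.
y_true = [0] * 17 + [1] * 30 + [2] * 32  # hypothetical 79-sample split
y_pred = [0] * 79                        # always predicts class 0
micro, macro = micro_macro_recall(y_true, y_pred, 3)
```

Here `micro` comes out to 17/79 ≈ 0.2152 and `macro` to 1/3, matching the table above.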
|
Privan/ppo-SnowballTarget
|
Privan
| 2023-12-25T11:32:17Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-12-25T11:23:47Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Privan/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
hkivancoral/hushem_40x_deit_tiny_rms_0001_fold3
|
hkivancoral
| 2023-12-25T11:27:47Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-25T10:57:18Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_rms_0001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.813953488372093
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_rms_0001_fold3
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8398
- Accuracy: 0.8140
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1282 | 1.0 | 217 | 0.8975 | 0.8372 |
| 0.0209 | 2.0 | 434 | 0.8245 | 0.7674 |
| 0.0218 | 3.0 | 651 | 0.3670 | 0.9302 |
| 0.0205 | 4.0 | 868 | 1.2586 | 0.8372 |
| 0.0008 | 5.0 | 1085 | 0.8797 | 0.7907 |
| 0.0606 | 6.0 | 1302 | 1.1984 | 0.8605 |
| 0.1313 | 7.0 | 1519 | 1.3827 | 0.8372 |
| 0.0283 | 8.0 | 1736 | 0.8068 | 0.8605 |
| 0.0037 | 9.0 | 1953 | 1.0055 | 0.8837 |
| 0.0058 | 10.0 | 2170 | 1.7904 | 0.8140 |
| 0.0074 | 11.0 | 2387 | 1.3591 | 0.8140 |
| 0.0197 | 12.0 | 2604 | 1.3843 | 0.8605 |
| 0.0 | 13.0 | 2821 | 1.1075 | 0.8837 |
| 0.0155 | 14.0 | 3038 | 1.0442 | 0.8837 |
| 0.0002 | 15.0 | 3255 | 1.5088 | 0.8605 |
| 0.0288 | 16.0 | 3472 | 0.6806 | 0.8605 |
| 0.0057 | 17.0 | 3689 | 0.9450 | 0.8837 |
| 0.0 | 18.0 | 3906 | 1.1935 | 0.8372 |
| 0.0 | 19.0 | 4123 | 1.2605 | 0.8605 |
| 0.0 | 20.0 | 4340 | 1.0286 | 0.8140 |
| 0.0001 | 21.0 | 4557 | 0.9245 | 0.8605 |
| 0.0039 | 22.0 | 4774 | 1.3627 | 0.8372 |
| 0.0 | 23.0 | 4991 | 1.4994 | 0.8605 |
| 0.0001 | 24.0 | 5208 | 1.2134 | 0.7907 |
| 0.0001 | 25.0 | 5425 | 1.0301 | 0.8372 |
| 0.0 | 26.0 | 5642 | 1.0457 | 0.8837 |
| 0.0 | 27.0 | 5859 | 1.2728 | 0.8140 |
| 0.0 | 28.0 | 6076 | 1.0821 | 0.8837 |
| 0.0 | 29.0 | 6293 | 1.1243 | 0.8837 |
| 0.0 | 30.0 | 6510 | 1.1728 | 0.8837 |
| 0.0 | 31.0 | 6727 | 1.2386 | 0.8605 |
| 0.0 | 32.0 | 6944 | 1.3089 | 0.8605 |
| 0.0 | 33.0 | 7161 | 1.3713 | 0.8605 |
| 0.0 | 34.0 | 7378 | 1.4458 | 0.8605 |
| 0.0 | 35.0 | 7595 | 1.5096 | 0.8605 |
| 0.0 | 36.0 | 7812 | 1.5439 | 0.8605 |
| 0.0 | 37.0 | 8029 | 1.5992 | 0.8605 |
| 0.0 | 38.0 | 8246 | 1.6228 | 0.8605 |
| 0.0 | 39.0 | 8463 | 1.6686 | 0.8372 |
| 0.0 | 40.0 | 8680 | 1.7133 | 0.8372 |
| 0.0 | 41.0 | 8897 | 1.7502 | 0.8372 |
| 0.0 | 42.0 | 9114 | 1.7750 | 0.8372 |
| 0.0 | 43.0 | 9331 | 1.7947 | 0.8372 |
| 0.0 | 44.0 | 9548 | 1.8093 | 0.8372 |
| 0.0 | 45.0 | 9765 | 1.8201 | 0.8372 |
| 0.0 | 46.0 | 9982 | 1.8280 | 0.8372 |
| 0.0 | 47.0 | 10199 | 1.8337 | 0.8372 |
| 0.0 | 48.0 | 10416 | 1.8373 | 0.8372 |
| 0.0 | 49.0 | 10633 | 1.8394 | 0.8372 |
| 0.0 | 50.0 | 10850 | 1.8398 | 0.8140 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
imalexianne/xlm-roberta-large_latest_Nov2023
|
imalexianne
| 2023-12-25T11:27:27Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-13T09:03:31Z |
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm-roberta-large_latest_Nov2023
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large_latest_Nov2023
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3474
- Accuracy: 0.7735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
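Unlike the ratio-based schedules elsewhere in this dump, this run uses a fixed warmup of 500 steps, which here spans the entire first epoch (500 steps/epoch × 2 epochs = 1000 total steps). A hedged sketch of that variant (an illustrative helper, not the actual scheduler code):

```python
def linear_warmup_steps_lr(step, total_steps, peak_lr=2e-5, warmup_steps=500):
    """Linear warmup over a fixed number of steps, then linear decay to 0."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # Clamp at zero in case training runs past total_steps.
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

With these defaults the peak of 2e-5 is reached at step 500 and the rate decays to zero by step 1000, the last step in the table above.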
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6171 | 0.2 | 100 | 0.5548 | 0.569 |
| 0.5233 | 0.4 | 200 | 0.4284 | 0.715 |
| 0.4572 | 0.6 | 300 | 0.4136 | 0.7185 |
| 0.4347 | 0.8 | 400 | 0.4087 | 0.7065 |
| 0.4379 | 1.0 | 500 | 0.4107 | 0.7275 |
| 0.4285 | 1.2 | 600 | 0.4007 | 0.7285 |
| 0.3897 | 1.4 | 700 | 0.3986 | 0.7315 |
| 0.3862 | 1.6 | 800 | 0.3536 | 0.76 |
| 0.3575 | 1.8 | 900 | 0.3506 | 0.762 |
| 0.3247 | 2.0 | 1000 | 0.3474 | 0.7735 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Bashiro/ppo-SnowballTarget
|
Bashiro
| 2023-12-25T11:19:59Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-12-25T11:19:56Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Bashiro/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|