modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
aroot/eng-fra-simcse_random_ssrl
|
aroot
| 2023-07-07T23:06:31Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T22:51:26Z |
---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-fra-simcse_random_ssrl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-fra-simcse_random_ssrl
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1462
- Bleu: 31.7089
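A minimal usage sketch for English-to-French translation follows; the repo id comes from this card, while the `en_XX`/`fr_XX` language codes are assumptions inferred from the model name and the mBART-50 base model:
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

# Language codes are assumed from the "eng-fra" model name; they are not documented in this card.
model_id = "aroot/eng-fra-simcse_random_ssrl"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```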
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ytung/q-FrozenLake-v1-4x4-noSlippery
|
ytung
| 2023-07-07T23:02:23Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-20T22:52:20Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helpers defined in the Hugging Face Deep RL Course notebook
model = load_from_hub(repo_id="ytung/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
trevorj/q-FrozenLake-v1-4x4-noSlippery
|
trevorj
| 2023-07-07T22:48:01Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T22:47:59Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helpers defined in the Hugging Face Deep RL Course notebook
model = load_from_hub(repo_id="trevorj/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"])  # check if you need to add additional attributes (is_slippery=False etc)
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
zhuyunlong455/gh
|
zhuyunlong455
| 2023-07-07T22:30:10Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-07-07T22:30:10Z |
---
license: bigcode-openrail-m
---
|
NasimB/gpt2-concat-cbt-rarity-all-no-cbt-7k-p8k
|
NasimB
| 2023-07-07T22:29:05Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-07T19:38:15Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-cbt-rarity-all-no-cbt-7k-p8k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-cbt-rarity-all-no-cbt-7k-p8k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1829
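A minimal usage sketch (the repo id comes from this card; the prompt and generation length are illustrative):
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint and sample a short continuation.
generator = pipeline("text-generation", model="NasimB/gpt2-concat-cbt-rarity-all-no-cbt-7k-p8k")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```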
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7276 | 0.29 | 500 | 5.6334 |
| 5.3681 | 0.59 | 1000 | 5.2101 |
| 5.0257 | 0.88 | 1500 | 4.9484 |
| 4.7492 | 1.17 | 2000 | 4.7957 |
| 4.586 | 1.47 | 2500 | 4.6672 |
| 4.476 | 1.76 | 3000 | 4.5561 |
| 4.3388 | 2.05 | 3500 | 4.4815 |
| 4.1528 | 2.35 | 4000 | 4.4321 |
| 4.1203 | 2.64 | 4500 | 4.3692 |
| 4.08 | 2.93 | 5000 | 4.3225 |
| 3.8694 | 3.23 | 5500 | 4.3200 |
| 3.8133 | 3.52 | 6000 | 4.2856 |
| 3.8082 | 3.81 | 6500 | 4.2543 |
| 3.6896 | 4.11 | 7000 | 4.2513 |
| 3.5344 | 4.4 | 7500 | 4.2450 |
| 3.5282 | 4.69 | 8000 | 4.2309 |
| 3.5178 | 4.99 | 8500 | 4.2181 |
| 3.346 | 5.28 | 9000 | 4.2297 |
| 3.3387 | 5.57 | 9500 | 4.2294 |
| 3.3317 | 5.87 | 10000 | 4.2282 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
mpetrikov/CartPole-v1
|
mpetrikov
| 2023-07-07T22:17:46Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T22:17:35Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 499.70 +/- 0.90
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Khushnur/t5-base-end2end-questions-generation_eli_squad_single_exp
|
Khushnur
| 2023-07-07T22:17:13Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-07T20:33:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-end2end-questions-generation_eli_squad_single_exp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-end2end-questions-generation_eli_squad_single_exp
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7241
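A minimal usage sketch; the `generate questions:` prefix is an assumption typical of end-to-end question-generation fine-tunes and is not documented in this card:
```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="Khushnur/t5-base-end2end-questions-generation_eli_squad_single_exp")
context = "The Eiffel Tower was completed in 1889 and is located in Paris."
# The "generate questions:" prefix is an assumed prompt format, not confirmed by this card.
print(qg("generate questions: " + context, max_new_tokens=64)[0]["generated_text"])
```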
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4297 | 0.25 | 100 | 2.7250 |
| 2.2459 | 0.5 | 200 | 2.7337 |
| 2.2066 | 0.74 | 300 | 2.7301 |
| 2.1867 | 0.99 | 400 | 2.7186 |
| 2.1046 | 1.24 | 500 | 2.7268 |
| 2.1003 | 1.49 | 600 | 2.7269 |
| 2.0799 | 1.74 | 700 | 2.7222 |
| 2.0852 | 1.99 | 800 | 2.7238 |
| 2.0323 | 2.23 | 900 | 2.7258 |
| 2.0297 | 2.48 | 1000 | 2.7252 |
| 2.0451 | 2.73 | 1100 | 2.7230 |
| 2.0208 | 2.98 | 1200 | 2.7241 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/dit-small_tobacco3482_kd_MSE
|
jordyvl
| 2023-07-07T22:14:26Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-07T22:00:56Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-small_tobacco3482_kd_MSE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-small_tobacco3482_kd_MSE
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 6.7275
- Accuracy: 0.21
- Brier Loss: 0.8834
- Nll: 6.7677
- F1 Micro: 0.2100
- F1 Macro: 0.1146
- Ece: 0.2647
- Aurc: 0.7666
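A minimal usage sketch (the image path is a placeholder; the checkpoint targets scanned-document classes in the Tobacco3482 style):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jordyvl/dit-small_tobacco3482_kd_MSE")
# "scanned_document.png" is a placeholder path to a document image on disk.
print(classifier("scanned_document.png"))
```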
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 7.1014 | 0.06 | 0.9055 | 7.9056 | 0.06 | 0.0114 | 0.1732 | 0.9050 |
| No log | 1.96 | 6 | 6.9659 | 0.125 | 0.8970 | 10.1253 | 0.125 | 0.0631 | 0.2010 | 0.8465 |
| No log | 2.96 | 9 | 6.8528 | 0.075 | 0.8954 | 7.0315 | 0.075 | 0.0258 | 0.1912 | 0.8871 |
| No log | 3.96 | 12 | 6.8522 | 0.205 | 0.8955 | 7.0990 | 0.205 | 0.0776 | 0.2426 | 0.7588 |
| No log | 4.96 | 15 | 6.8465 | 0.19 | 0.8959 | 7.1340 | 0.19 | 0.0627 | 0.2308 | 0.7536 |
| No log | 5.96 | 18 | 6.8246 | 0.205 | 0.8937 | 7.1101 | 0.205 | 0.0867 | 0.2410 | 0.7354 |
| No log | 6.96 | 21 | 6.8054 | 0.085 | 0.8918 | 7.0215 | 0.085 | 0.0435 | 0.1847 | 0.8289 |
| No log | 7.96 | 24 | 6.8025 | 0.22 | 0.8879 | 6.8272 | 0.22 | 0.0967 | 0.2487 | 0.7438 |
| No log | 8.96 | 27 | 6.8045 | 0.21 | 0.8871 | 6.3740 | 0.2100 | 0.0992 | 0.2412 | 0.7634 |
| No log | 9.96 | 30 | 6.8013 | 0.22 | 0.8869 | 6.9538 | 0.22 | 0.1016 | 0.2495 | 0.7633 |
| No log | 10.96 | 33 | 6.7920 | 0.215 | 0.8865 | 6.9670 | 0.2150 | 0.0968 | 0.2549 | 0.7577 |
| No log | 11.96 | 36 | 6.7817 | 0.22 | 0.8867 | 6.9953 | 0.22 | 0.1004 | 0.2455 | 0.7437 |
| No log | 12.96 | 39 | 6.7729 | 0.17 | 0.8884 | 6.9738 | 0.17 | 0.0891 | 0.2277 | 0.7865 |
| No log | 13.96 | 42 | 6.7632 | 0.2 | 0.8873 | 6.9622 | 0.2000 | 0.0998 | 0.2393 | 0.7413 |
| No log | 14.96 | 45 | 6.7548 | 0.215 | 0.8860 | 6.9576 | 0.2150 | 0.1010 | 0.2635 | 0.7189 |
| No log | 15.96 | 48 | 6.7489 | 0.22 | 0.8857 | 6.8386 | 0.22 | 0.1024 | 0.2665 | 0.7098 |
| No log | 16.96 | 51 | 6.7457 | 0.23 | 0.8855 | 6.8730 | 0.23 | 0.1129 | 0.2506 | 0.7217 |
| No log | 17.96 | 54 | 6.7455 | 0.215 | 0.8864 | 6.8688 | 0.2150 | 0.1058 | 0.2576 | 0.7528 |
| No log | 18.96 | 57 | 6.7424 | 0.16 | 0.8861 | 6.8631 | 0.16 | 0.0843 | 0.2281 | 0.8036 |
| No log | 19.96 | 60 | 6.7380 | 0.155 | 0.8850 | 6.8443 | 0.155 | 0.0871 | 0.2315 | 0.7937 |
| No log | 20.96 | 63 | 6.7348 | 0.195 | 0.8841 | 6.7769 | 0.195 | 0.0949 | 0.2501 | 0.7799 |
| No log | 21.96 | 66 | 6.7317 | 0.175 | 0.8838 | 6.7692 | 0.175 | 0.1025 | 0.2421 | 0.7797 |
| No log | 22.96 | 69 | 6.7293 | 0.175 | 0.8836 | 6.7682 | 0.175 | 0.1012 | 0.2452 | 0.7799 |
| No log | 23.96 | 72 | 6.7281 | 0.205 | 0.8834 | 6.7672 | 0.205 | 0.1132 | 0.2566 | 0.7679 |
| No log | 24.96 | 75 | 6.7275 | 0.21 | 0.8834 | 6.7677 | 0.2100 | 0.1146 | 0.2647 | 0.7666 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
TheBloke/bloomz-176B-GPTQ
|
TheBloke
| 2023-07-07T22:03:59Z | 14 | 20 |
transformers
|
[
"transformers",
"bloom",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zu",
"dataset:bigscience/xP3",
"arxiv:2211.01786",
"license:bigscience-bloom-rail-1.0",
"model-index",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-05T12:49:41Z |
---
datasets:
- bigscience/xP3
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
inference: false
widget:
- text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative?"
example_title: "zh-en sentiment"
- text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?"
example_title: "zh-zh sentiment"
- text: "Suggest at least five related search terms to \"Mạng neural nhân tạo\"."
example_title: "vi-en query"
- text: "Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels»."
example_title: "fr-fr query"
- text: "Explain in a sentence in Telugu what is backpropagation in neural networks."
example_title: "te-en qa"
- text: "Why is the sky blue?"
example_title: "en-en qa"
- text: "Explain to me in Traditional Chinese what is the difference between Bitcoin and Ethereum."
example_title: "zh-en qa"
- text: "Write a code snippet with O(log(n)) computational complexity."
example_title: "code-en"
- text: "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is \"Heroes Come in All Shapes and Sizes\". Story (in Spanish):"
example_title: "es-en fable"
- text: "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is \"Violence is the last refuge of the incompetent\". Fable (in Hindi):"
example_title: "hi-en fable"
- text: "How many sides does a rectangle and heptagon have, when
combined? Answer this question with some math.
Ein Rechteck hat 4 Seiten. Ein Siebeneck hat 7 Seiten.
In Kombination haben sie 4 + 7 = 11 Seiten.
كم عدد الأضلاع التي يجمعها المربع والمثلث؟
Répondez à cette question en chinois."
example_title: "en-de-ar-fr-zh math"
model-index:
- name: bloomz
results:
- task:
type: Coreference resolution
dataset:
type: winogrande
name: Winogrande XL (xl)
config: xl
split: validation
revision: a80f460359d1e9a67c006011c94de42a8759430c
metrics:
- type: Accuracy
value: 59.27
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (en)
config: en
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 69.08
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (fr)
config: fr
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 68.67
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (jp)
config: jp
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 59.65
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (pt)
config: pt
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 64.26
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (ru)
config: ru
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 60.95
- task:
type: Coreference resolution
dataset:
type: Muennighoff/xwinograd
name: XWinograd (zh)
config: zh
split: test
revision: 9dd5ea5505fad86b7bedad667955577815300cee
metrics:
- type: Accuracy
value: 70.24
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r1)
config: r1
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 48.6
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r2)
config: r2
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 44.1
- task:
type: Natural language inference
dataset:
type: anli
name: ANLI (r3)
config: r3
split: validation
revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
metrics:
- type: Accuracy
value: 45.5
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (cb)
config: cb
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 82.14
- task:
type: Natural language inference
dataset:
type: super_glue
name: SuperGLUE (rte)
config: rte
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 85.56
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ar)
config: ar
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 60.68
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (bg)
config: bg
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 48.43
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (de)
config: de
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.38
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (el)
config: el
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 47.43
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (en)
config: en
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 67.47
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (es)
config: es
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 61.24
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (fr)
config: fr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 61.37
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (hi)
config: hi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 60.2
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ru)
config: ru
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 54.02
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (sw)
config: sw
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 52.09
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (th)
config: th
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 43.78
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (tr)
config: tr
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 45.7
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (ur)
config: ur
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 50.8
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (vi)
config: vi
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 61.0
- task:
type: Natural language inference
dataset:
type: xnli
name: XNLI (zh)
config: zh
split: validation
revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
metrics:
- type: Accuracy
value: 56.91
- task:
type: Program synthesis
dataset:
type: openai_humaneval
name: HumanEval
config: None
split: test
revision: e8dc562f5de170c54b5481011dd9f4fa04845771
metrics:
- type: Pass@1
value: 12.06
- type: Pass@10
value: 26.53
- type: Pass@100
value: 48.44
- task:
type: Sentence completion
dataset:
type: story_cloze
name: StoryCloze (2016)
config: "2016"
split: validation
revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
metrics:
- type: Accuracy
value: 96.26
- task:
type: Sentence completion
dataset:
type: super_glue
name: SuperGLUE (copa)
config: copa
split: validation
revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
metrics:
- type: Accuracy
value: 91.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (et)
config: et
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 51.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ht)
config: ht
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 58.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (id)
config: id
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 86.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (it)
config: it
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 74.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (qu)
config: qu
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 56.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (sw)
config: sw
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 64.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (ta)
config: ta
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 69.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (th)
config: th
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 58.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (tr)
config: tr
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 57.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (vi)
config: vi
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 87.0
- task:
type: Sentence completion
dataset:
type: xcopa
name: XCOPA (zh)
config: zh
split: validation
revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
metrics:
- type: Accuracy
value: 90.0
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ar)
config: ar
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 92.79
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (es)
config: es
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 94.37
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (eu)
config: eu
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 86.9
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (hi)
config: hi
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 88.42
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (id)
config: id
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 92.12
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (my)
config: my
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 52.35
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (ru)
config: ru
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 81.73
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (sw)
config: sw
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 79.81
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (te)
config: te
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 81.2
- task:
type: Sentence completion
dataset:
type: Muennighoff/xstory_cloze
name: XStoryCloze (zh)
config: zh
split: validation
revision: 8bb76e594b68147f1a430e86829d07189622b90d
metrics:
- type: Accuracy
value: 93.12
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# BigScience's BLOOMZ 176B GPTQ
These files are GPTQ 4bit model files for [BigScience's BLOOMZ](https://huggingface.co/bigscience/bloomz).
It is the result of quantising to 4bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).
**This is a BIG model! 2 x 80GB or 3 x 48GB GPUs are required**
## Important note: files must be joined before use
It is not currently possible to shard GPTQ files, therefore the model file is one single 94 GB `safetensors` file.
Huggingface Hub has a 50GB per-file limit. I have therefore been forced to split the file into three parts for upload.
I did this using the simple *nix command `split`.
To join the files on any *nix system, run:
```
cat gptq_model-4bit--1g.JOINBEFOREUSE.split-*.safetensors > gptq_model-4bit--1g.safetensors
```
To join the files on Windows, open a Command Prompt and run:
```
COPY /B gptq_model-4bit--1g.JOINBEFOREUSE.split-a.safetensors + gptq_model-4bit--1g.JOINBEFOREUSE.split-b.safetensors + gptq_model-4bit--1g.JOINBEFOREUSE.split-c.safetensors gptq_model-4bit--1g.safetensors
```
Or for Python code for joining the files, see the Python section below.
The SHA256SUM of the joined file will be:
```
50baeab9859362d22df6f822f158b9ba75b44ffc6605b715992fe6245aa6e93a gptq_model-4bit--1g.safetensors
```
Once you have the joined file, you can safely delete `gptq_model-4bit--1g.JOINBEFOREUSE.split-*.safetensors`.
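If you want to verify the join, a short Python sketch (assuming the joined file sits in the current working directory) can check the digest against the value above:
```python
import hashlib

expected = "50baeab9859362d22df6f822f158b9ba75b44ffc6605b715992fe6245aa6e93a"
sha256 = hashlib.sha256()
with open("gptq_model-4bit--1g.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks to keep memory use low
        sha256.update(chunk)
assert sha256.hexdigest() == expected, "Checksum mismatch - re-join the split files"
```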
## Repositories available
* [4-bit GPTQ model for GPU inference](https://huggingface.co/TheBloke/bloomz-176B-GPTQ)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bigscience/bloomz)
## Two files provided - separate branches
- Main branch: `gptq_model-4bit--1g.safetensors`
- Group Size = None
- Desc Act (act-order) = True
- This version will use the least possible VRAM, and should have higher inference performance in CUDA mode
- Branch `group_size_128g`: `gptq_model-4bit-128g.safetensors`
- Group Size = 128g
- Desc Act (act-order) = True
- This version will use more VRAM, which shouldn't be a problem as it shouldn't exceed 2 x 80GB or 3 x 48GB cards.
- However CUDA inference performance is likely to be a lot slower, possibly necessitating the use of Triton mode.
By default you will download the first file, unless you choose to download from branch `group_size_128g`.
## Prompt template: none
```
Translate to English: Je t’aime.
Translation:
```
## How to easily download and use this model in text-generation-webui
Please make sure you're using the latest version of text-generation-webui.
Note 1: this is a non-Llama model which cannot be used with ExLlama. Use Loader: AutoGPTQ.
Note 2: As described above, you must join the files after downloading and before loading in text-generation-webui.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/bloomz-176B-GPTQ`.
- If you would rather download the group_size 128g version, enter `TheBloke/bloomz-176B-GPTQ:group_size_128g`
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done". This is a huge model so it may take a while!
5. Now follow the steps described above to join the model to get a single `.safetensors` file.
6. Untick **Autoload model**
7. In the top left, click the refresh icon next to **Model**.
8. In the **Model** dropdown, choose the model you just downloaded: `bloomz-176B-GPTQ`
9. Make sure Loader is set to AutoGPTQ.
10. This model cannot load on one GPU, so you should set **GPU Memory** accordingly.
- If using two 80GB GPUs, try: GPU0 = 60GB, GPU1 = 79GB
- If using three 48GB GPUs, try: GPU0 = 30GB, GPU1 = 47GB, GPU2 = 47GB
11. Click **Save settings** to save your settings, and then **Reload** to load the model.
12. The model will load, and is now ready for use!
13. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`GITHUB_ACTIONS=true pip install auto-gptq`
Because this model has to be joined locally, you must first download it. Example download code:
```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="TheBloke/bloomz-176B-GPTQ",
local_dir="/workspace/models/bloomz-176GB-GPTQ",
local_dir_use_symlinks=False)
```
If you want to download the group_size 128g file instead, add `revision="group_size_128g"` to the above command.
Now join the three `split` files, which can be done with the following Python code:
```python
import glob
# Get the list of all files matching the pattern
files = sorted(glob.glob('gptq_model-4bit--1g.JOINBEFOREUSE.split-*.safetensors'))
# Open the output file in binary write mode
with open('gptq_model-4bit--1g.safetensors', 'wb') as outfile:
for filename in files:
with open(filename, 'rb') as infile:
outfile.write(infile.read())
```
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import argparse
# Use the local path you downloaded the model to and joined the split files in
model_name_or_path = "/workspace/models/bloomz-176GB-GPTQ"
model_basename = "gptq_model-4bit--1g"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        max_memory={0: '60GiB', 1: '79GiB'},  # max_memory is for 2 x 80GB GPUs; adjust if your config is different!
        use_safetensors=True,
        trust_remote_code=False,
        use_triton=use_triton,
        quantize_config=None)
prompt = "Translate this to French: AI is the future of computing"
prompt_template=f'''{prompt}
Translation:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Provided files
## Main branch:
**gptq_model-4bit--1g.safetensors**
This will work with AutoGPTQ. It is untested with GPTQ-for-LLaMa. It will *not* work with ExLlama.
It was created with group_size none (-1) to reduce VRAM usage, and with --act-order (desc_act) to improve accuracy of responses.
* `gptq_model-4bit--1g.safetensors`
* Works with AutoGPTQ in CUDA or Triton modes.
* Does NOT work with [ExLlama](https://github.com/turboderp/exllama) as it's not a Llama model.
* Untested with GPTQ-for-LLaMa.
* Works with text-generation-webui, including one-click-installers.
* Parameters: Groupsize = -1. Act Order / desc_act = True.
## Branch `group_size_128g`
**gptq_model-4bit-128g.safetensors**
This will work with AutoGPTQ. It is untested with GPTQ-for-LLaMa. It will *not* work with ExLlama.
It was created with both group_size 128g and --act-order (desc_act) for even higher inference accuracy, at the cost of increased VRAM usage. Because we already need 2 x 80GB or 3 x 48GB GPUs, I don't expect the increased VRAM usage to change the GPU requirements.
**Note** Using group_size + desc_act together can significantly lower performance in AutoGPTQ CUDA. You might want to try AutoGPTQ Triton mode instead (Linux only.)
* `gptq_model-4bit-128g.safetensors`
* Works with AutoGPTQ in CUDA or Triton modes.
* Does NOT work with [ExLlama](https://github.com/turboderp/exllama) as it's not a Llama model.
* Untested with GPTQ-for-LLaMa.
* Works with text-generation-webui, including one-click-installers.
* Parameters: Groupsize = 128. Act Order / desc_act = True.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: BigScience's BLOOMZ 176B

# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Citation](#citation)
# Model Summary
> We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages.
- **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
- **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
- **BLOOMZ & mT0 Model Family:**
<div class="max-w-full overflow-auto">
<table>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.
</tr>
<tr>
<td>Parameters</td>
<td>300M</td>
<td>580M</td>
<td>1.2B</td>
<td>3.7B</td>
<td>13B</td>
<td>560M</td>
<td>1.1B</td>
<td>1.7B</td>
<td>3B</td>
<td>7.1B</td>
<td>176B</td>
</tr>
<tr>
<td>Finetuned Model</td>
<td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
</tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
</tr>
<th colspan="12">Original pretrained checkpoints. Not recommended.</th>
<tr>
<td>Pretrained Model</td>
<td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
<td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
<td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
<td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
<td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
</tr>
</table>
</div>
# Use
## Intended use
We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.
**Feel free to share your generations in the Community tab!**
## How to use
### CPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
### GPU in 8bit
<details>
<summary> Click to expand </summary>
```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigscience/bloomz"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)
inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
</details>
<!-- Necessary for whitespace -->
###
# Limitations
**Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*", or "*What is "Je t'aime." in English?*", where it is clear to the model when it should answer. Further, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
# Training
## Model
- **Architecture:** Same as [bloom](https://huggingface.co/bigscience/bloom), also refer to the `config.json` file
- **Finetuning steps:** 498
- **Finetuning tokens:** 2.09 billion
- **Finetuning layout:** 72x pipeline parallel, 1x tensor parallel, 4x data parallel
- **Precision:** bfloat16
## Hardware
- **CPUs:** AMD CPUs with 512GB memory per node
- **GPUs:** 288 A100 80GB GPUs with 8 GPUs per node (36 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links
- **Communication:** NCCL-communications network with a fully dedicated subnet
## Software
- **Orchestration:** [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed)
- **Optimizer & parallelism:** [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5)
- **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# Evaluation
We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.
# Citation
```bibtex
@article{muennighoff2022crosslingual,
title={Crosslingual generalization through multitask finetuning},
author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others},
journal={arXiv preprint arXiv:2211.01786},
year={2022}
}
```
|
TheBloke/BLOOMChat-176B-v1-GPTQ
|
TheBloke
| 2023-07-07T22:03:31Z | 13 | 31 |
transformers
|
[
"transformers",
"bloom",
"text-generation",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-05T16:33:58Z |
---
license: other
inference: false
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Sambanova Systems' BLOOMChat 1.0
These files are GPTQ 4bit model files for [Sambanova Systems' BLOOMChat 1.0](https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1).
It is the result of quantising to 4-bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).
**This is a BIG model! 2 x 80GB or 3 x 48GB GPUs are required**
## Important note: files must be joined before use
It is not currently possible to shard GPTQ files, therefore the model file is one single 94GB `safetensors` file.
Huggingface Hub has a 50GB per-file limit. I have therefore been forced to split the file into three parts for upload.
I did this using the simple *nix command `split`.
To join the files on any *nix system, you can run:
```
cat gptq_model-4bit--1g.JOINBEFOREUSE.split-*.safetensors > gptq_model-4bit--1g.safetensors
```
To join the files on Windows, open a Command Prompt and run:
```
COPY /B gptq_model-4bit--1g.JOINBEFOREUSE.split-a.safetensors + gptq_model-4bit--1g.JOINBEFOREUSE.split-b.safetensors + gptq_model-4bit--1g.JOINBEFOREUSE.split-c.safetensors gptq_model-4bit--1g.safetensors
```
Or for Python code for joining the files, see the Python section below.
The SHA256SUM of the joined file will be:
```
9cc359fa266d2523566e818ca58e8782718b25cc2e714cb5449b7841e1c59830 gptq_model-4bit--1g.safetensors
```
Once you have the joined file, you can safely delete `gptq_model-4bit--1g.JOINBEFOREUSE.split-*.safetensors`.
## Repositories available
* [4-bit GPTQ model for GPU inference](https://huggingface.co/TheBloke/BLOOMChat-176B-v1-GPTQ)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1)
## Two files provided - separate branches
- Main branch: `gptq_model-4bit--1g.safetensors`
- Group Size = None
- Desc Act (act-order) = True
- This version will use the least possible VRAM, and should have higher inference performance in CUDA mode
- Branch `group_size_128g`: `gptq_model-4bit-128g.safetensors`
- Group Size = 128g
- Desc Act (act-order) = True
- This version will use more VRAM, which shouldn't be a problem as it shouldn't exceed 2 x 80GB or 3 x 48GB cards.
- However CUDA inference performance is likely to be a lot slower, possibly necessitating the use of Triton mode.
By default you will download the first file, unless you choose to download from branch `group_size_128g`.
## Prompt template:
```
<human>: prompt
<bot>:
```
## How to easily download and use this model in text-generation-webui
Please make sure you're using the latest version of text-generation-webui.
Note 1: this is a non-Llama model which cannot be used with ExLlama. Use Loader: AutoGPTQ.
Note 2: As described above, you must join the files after downloading and before loading in text-generation-webui.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/BLOOMChat-176B-v1-GPTQ`.
- If you would rather download the group_size 128g version, enter `TheBloke/BLOOMChat-176B-v1-GPTQ:group_size_128g`
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done". This is a huge model so it may take a while!
5. Now follow the steps described above to join the model to get a single `.safetensors` file.
6. Untick **Autoload model**.
7. In the top left, click the refresh icon next to **Model**.
8. In the **Model** dropdown, choose the model you just downloaded: `BLOOMChat-176B-v1-GPTQ`
9. Make sure Loader is set to AutoGPTQ.
10. This model cannot load on one GPU, so you should set **GPU Memory** accordingly.
- If using two 80GB GPUs, try: GPU0 = 60GB, GPU1 = 79GB
- If using three 48GB GPUs, try: GPU0 = 30GB, GPU1 = 47GB, GPU2 = 47GB
11. Click **Save settings** to save your settings, and then **Reload** to load the model.
12. The model will load, and is now ready for use!
13. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`GITHUB_ACTIONS=true pip install auto-gptq`
Because this model has to be joined locally, you must first download it. Example download code:
```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="TheBloke/BLOOMChat-176B-v1-GPTQ",
local_dir="/workspace/models/BLOOMChat-176B-v1-GPTQ",
local_dir_use_symlinks=False)
```
If you want to download the group_size 128g file instead, add `revision="group_size_128g"` to the above command.
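For reference, a sketch of the same download pointed at the `group_size_128g` branch (repo id and local path as above):
```python
from huggingface_hub import snapshot_download

# Same download as above, but fetching the group_size_128g branch instead of main
snapshot_download(repo_id="TheBloke/BLOOMChat-176B-v1-GPTQ",
                  revision="group_size_128g",
                  local_dir="/workspace/models/BLOOMChat-176B-v1-GPTQ",
                  local_dir_use_symlinks=False)
```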
Now join the three `split` files, which can be done with the following Python code:
```python
import glob
# Get the list of all files matching the pattern
files = sorted(glob.glob('gptq_model-4bit--1g.JOINBEFOREUSE.split-*.safetensors'))
# Open the output file in binary write mode
with open('gptq_model-4bit--1g.safetensors', 'wb') as outfile:
for filename in files:
with open(filename, 'rb') as infile:
outfile.write(infile.read())
```
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import argparse
# Use the local path you downloaded the model to and joined the split files in
model_name_or_path = "/workspace/models/BLOOMChat-176B-v1-GPTQ"
model_basename = "gptq_model-4bit--1g"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        max_memory={0: '60GiB', 1: '79GiB'},  # max_memory is for 2 x 80GB GPUs; adjust if your config is different!
        use_safetensors=True,
        trust_remote_code=False,
        use_triton=use_triton,
        quantize_config=None)
prompt = "Write a story about llamas"
prompt_template=f'''<human>: {prompt}
<bot>:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Provided files
## Main branch:
**gptq_model-4bit--1g.safetensors**
This will work with AutoGPTQ. It is untested with GPTQ-for-LLaMa. It will *not* work with ExLlama.
It was created with group_size none (-1) to reduce VRAM usage, and with --act-order (desc_act) to improve accuracy of responses.
* `gptq_model-4bit--1g.safetensors`
* Works with AutoGPTQ in CUDA or Triton modes.
* Does NOT work with [ExLlama](https://github.com/turboderp/exllama) as it's not a Llama model.
* Untested with GPTQ-for-LLaMa.
* Works with text-generation-webui, including one-click-installers.
* Parameters: Groupsize = -1. Act Order / desc_act = True.
## Branch `group_size_128g`
**gptq_model-4bit-128g.safetensors**
This will work with AutoGPTQ. It is untested with GPTQ-for-LLaMa. It will *not* work with ExLlama.
It was created with both group_size 128g and --act-order (desc_act) for even higher inference accuracy, at the cost of increased VRAM usage. Because we already need 2 x 80GB or 3 x 48GB GPUs, I don't expect the increased VRAM usage to change the GPU requirements.
* `gptq_model-4bit-128g.safetensors`
* Works with AutoGPTQ in CUDA or Triton modes.
* Does NOT work with [ExLlama](https://github.com/turboderp/exllama) as it's not a Llama model.
* Untested with GPTQ-for-LLaMa.
* Works with text-generation-webui, including one-click-installers.
* Parameters: Groupsize = 128. Act Order / desc_act = True.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Sambanova Systems' BLOOMChat V1.0
# BLOOMChat V1.0
<!-- Provide a quick summary of what the model is/does. -->
BLOOMChat is a 176 billion parameter multilingual chat model. It is instruction tuned from [BLOOM (176B)](https://huggingface.co/bigscience/bloom) on assistant-style conversation datasets and supports conversation, question answering and generative answers in multiple languages.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Co-developed by:** [Together Computer](https://www.together.xyz/)
- **Model type:** Language Model
- **Language(s):** Multiple; see [training data from BLOOM](https://huggingface.co/bigscience/bloom#training-data)
- **License:** BLOOMChat-176B LICENSE v1.0
- **Instruction Tuned from model:** [BigScience Group BLOOM](https://huggingface.co/bigscience/bloom)
### Basic Information
<!-- Provide the basic links for the model. -->
- **Blog Post**: [Link](https://sambanova.ai/blog/introducing-bloomchat-176b-the-multilingual-chat-based-llm/)
- **Discord**: [Link](https://discord.com/invite/8z2Pe7cpRv)
- **HF Hosting**: [Chat with me!](https://huggingface.co/spaces/sambanovasystems/BLOOMChat)
- **Github**: [Link](https://github.com/sambanova/bloomchat)
### Licensing
To increase accessibility and to support the open-source community, SambaNova is releasing BLOOMChat under a modified version of the Apache 2.0 license, which includes use-based restrictions from BLOOM’s RAIL license. While use-based restrictions are necessarily passed through, there are no blanket restrictions on reuse, distribution, commercialization or adaptation. [Please review SambaNova’s BLOOMChat-176B License](LICENSE)
## Uses
<details>
<summary>Click to expand</summary>
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is intended for commercial and research use.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
BLOOMChat should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
- Important automated pipelines
This model is still in early development and can be prone to mistakes and hallucinations; there is still room for improvement. This model is intended to provide the community with a multilingual chat LLM baseline.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, limitations, and restrictions of the model, which are listed at the bottom of the page.
</details>
---
## How to Get Started with the Model
<details>
<summary>Click to expand</summary>
### Loading in model with Huggingface
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/BLOOMChat-176B-v1")
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/BLOOMChat-176B-v1", device_map="auto", torch_dtype="auto")
```
### Quick Start Inference on SambaNova's in-house Reconfigurable Dataflow Unit (RDU)
The inference code to run the model can be found in our [github repo](https://github.com/sambanova/bloomchat/blob/main/rdu_quick_start/inference.py). This code requires the [SambaFlow](https://docs.sambanova.ai/developer/latest/sambaflow-intro.html) SDK to execute. For those interested in running models on RDUs, [please feel free to get in touch](https://sambanova.ai/getstarted).
### Quick Start Inference on GPU
First, create a Python virtual environment for these packages:
```
python3 -m venv bloomchat_venv
source bloomchat_venv/bin/activate
pip install --upgrade pip
```
<!-- Please follow this section [Inference solutions for BLOOM 176B](https://github.com/huggingface/transformers-bloom-inference#bloom-inference-via-command-line) in the Huggingface Tutorial for environment set up and stop before the [BLOOM inference via command-line
](https://github.com/huggingface/transformers-bloom-inference#bloom-inference-via-command-line) section. -->
```
pip install flask flask_api gunicorn pydantic accelerate huggingface_hub>=0.9.0 deepspeed>=0.7.3 deepspeed-mii==0.0.2
```
And then
```
pip install transformers==4.27.0
```
You will see messages like this
```
ERROR: deepspeed-mii 0.0.2 has requirement transformers==4.21.2, but you'll have transformers 4.27.0 which is incompatible.
Installing collected packages: transformers
Found existing installation: transformers 4.21.2
Uninstalling transformers-4.21.2:
Successfully uninstalled transformers-4.21.2
Successfully installed transformers-4.27.0
```
Now let's git clone the [huggingface/transformers-bloom-inference](https://github.com/huggingface/transformers-bloom-inference) repo.
```
git clone https://github.com/huggingface/transformers-bloom-inference.git
cd transformers-bloom-inference/
```
And then you need to modify two files in this [transformers-bloom-inference](https://github.com/huggingface/transformers-bloom-inference) repo:
- Modifying `inference_server/models/hf_accelerate.py`
  - This is because, in our testing of this repo with 4x 80GB A100 GPUs, we would otherwise run into memory issues
- Modifying `inference_server/cli.py`
- This is because the model was trained using specific human, bot tags
- Trailing spaces may lead to subpar performance
Modifications for `inference_server/models/hf_accelerate.py`:
```diff
diff --git a/inference_server/models/hf_accelerate.py b/inference_server/models/hf_accelerate.py
index 9be3c3f..a8ecb1d 100644
--- a/inference_server/models/hf_accelerate.py
+++ b/inference_server/models/hf_accelerate.py
@@ -1,4 +1,5 @@
from argparse import Namespace
+from accelerate.utils.modeling import get_max_memory
import torch
@@ -12,6 +13,12 @@ class HFAccelerateModel(Model):
kwargs = {"pretrained_model_name_or_path": args.model_name, "device_map": "auto"}
+ original_max_memory_dict = get_max_memory()
+
+ reduce_max_memory_dict = {device_key: int(original_max_memory_dict[device_key] * 0.85) for device_key in original_max_memory_dict}
+
+ kwargs["max_memory"] = reduce_max_memory_dict
+
if get_world_size() > 1:
kwargs["device_map"] = "balanced_low_0"
```
Modifications for `inference_server/cli.py`:
```diff
diff --git a/inference_server/cli.py b/inference_server/cli.py
index fc903d5..5450236 100644
--- a/inference_server/cli.py
+++ b/inference_server/cli.py
@@ -22,6 +22,9 @@ def main() -> None:
while True:
input_text = input("Input text: ")
+ input_text = input_text.strip()
+ modified_input_text = f"<human>: {input_text}\n<bot>:"
+
if input("change generate_kwargs? [y/n] ") == "y":
while True:
try:
@@ -33,7 +36,7 @@ def main() -> None:
print("message =", e_message)
continue
- response = model.generate(text=[input_text], generate_kwargs=generate_kwargs)
+ response = model.generate(text=[modified_input_text], generate_kwargs=generate_kwargs)
print_rank_0("Output text:", response.text[0])
print_rank_0("Generated tokens:", response.num_generated_tokens[0])
```
And now you are good to go!
Running command for bf16, NO sampling
```
python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype bf16 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": false, "max_new_tokens": 512}'
```
Running command for bf16, YES sampling
```
python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype bf16 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": true, "temperature": 0.8, "repetition_penalty": 1.2, "top_p": 0.9, "max_new_tokens": 512}'
```
---
Running command for int8 (sub optimal performance, but fast inference time) NO sampling:
```
python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype int8 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": false, "max_new_tokens": 512}'
```
Running command for int8 (sub optimal performance, but fast inference time) YES sampling:
```
python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype int8 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": true, "temperature": 0.8, "repetition_penalty": 1.2, "top_p": 0.9, "max_new_tokens": 512}'
```
**DISCLAIMER:** When using int8, the results will be subpar compared to bf16 as the model is being [quantized](https://huggingface.co/blog/hf-bitsandbytes-integration#introduction-to-model-quantization).
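As an alternative to the `inference_server.cli` route, a rough sketch of loading the model in 8-bit directly with `transformers` and `bitsandbytes` is shown below (hedged: this is not the path used in the tutorial above, and it assumes `bitsandbytes` and `accelerate` are installed):

```python
# Hedged sketch: load BLOOMChat in int8 via the transformers bitsandbytes integration.
# Requires: pip install bitsandbytes accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sambanovasystems/BLOOMChat-176B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    load_in_8bit=True,  # int8 quantization; expect lower quality than bf16
)

prompt = "<human>: What is artificial intelligence?\n<bot>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```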
### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.2
- Top-p: 0.9
- Max generated tokens: 512
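These map onto the standard `transformers` generation API roughly as in the sketch below (assuming `model` and `tokenizer` were loaded as in the "Loading in model with Huggingface" section above):

```python
# Sketch: applying the suggested inference parameters with generate().
prompt = "<human>: Write a short poem about the ocean.\n<bot>:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.8,         # suggested temperature
    repetition_penalty=1.2,  # suggested repetition penalty
    top_p=0.9,               # suggested top-p
    max_new_tokens=512,      # suggested max generated tokens
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```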
### Suggested Prompts To Try in GPU Tutorial
```
Input text: Write a script in which Bob accidentally breaks his dad's guitar
```
```
Input text: Create an itemized list of tasks to complete to start a clothing brand
```
```
Input text: 十七岁的风是什么颜色的?
```
</details>
---
## Some example completions for English
<details>
<summary>Click to expand</summary>
```
<human>: What does it mean for the Fed to raise or lower interest rates?
<bot>: The Federal Reserve (also known as the Fed) is the central banking system of the United States. It was created in 1913, and is an independent agency of the federal government. The Fed has a dual mandate of maximizing employment and stabilizing prices. The Fed sets interest rates, and can influence the money supply and the credit markets through its actions.
Raising interest rates means that the Fed is trying to reduce the money supply in the economy, and is trying to discourage borrowing and encourage saving. Lowering interest rates means that the Fed is trying to stimulate economic growth by encouraging borrowing and reducing the cost of credit.
The Fed raises or lowers interest rates in order to influence the economy and achieve its goals for employment and price stability. Interest rates are one of the primary tools that the Fed uses to achieve its goals.
```
```
<human>: Write an engaging and well-written property listing description for selling a house. Address of the house and some of the details are given to you. Fill in the information gap with hallucinations if needed.
Property Address: 412 Monterey Ave, Capitola, CA 95010
Parking: 3 cars + a finished single car garage
Details:
- open floorplan
- window with views to park/mountains
- Kitchen with shaker cabinetry and white Calcatta quartz counters
<bot>: This home at 412 Monterey Ave, Capitola, CA 95010 is the perfect place to call home. With its open floor plan, it provides the perfect space for entertaining guests. The kitchen is equipped with shaker cabinetry and white Calcatta quartz counters, providing the perfect space to create culinary masterpieces. The window with views to the park and mountains provide a beautiful view of the outdoors, making this home feel like a piece of paradise. The three-car garage and finished single car garage provide ample parking space, making this home a must-see for anyone looking for a new home.
```
```
<human>: What is artificial intelligence? Explain it so an 8 year old will understand.
<bot>: Artificial Intelligence (AI) is a computer program or machine that can do things that would normally require human intelligence, such as recognizing patterns, making decisions, and learning from experience. AI can be found in many places, such as self-driving cars, smartphones, and even robots. AI is used to make things easier for people, like when your smartphone can translate text into another language or when your voice-controlled speaker can play music for you.
AI works by using algorithms and big data to learn and improve over time. For example, self-driving cars use AI to analyze the environment around them and make decisions about how to drive. The car gathers data from its sensors, processes it using algorithms, and adjusts its behavior accordingly. This process is called machine learning.
AI is still a developing technology, and there are still many things that AI programs cannot do yet. However, as AI continues to evolve, it is likely that it will become more and more capable of performing tasks that were once thought to require human intelligence.
```
</details>
---
## Some example completions for Multilingual
<details>
<summary>Click to expand</summary>
#### Arabic
```
<human>: هل العمل نصف دوام فكرة جيدة لمراهقين بين عمر ١٥ و ١٩ عام
<bot>: نعم، يمكن أن تكون فكرة جيدة لبعض المراهقين في هذا العمر. هناك العديد من الفوائد التي قد يحققها المراهقون إذا عملوا بنصف دوام:
- يساعدهم على تعلم المسؤولية وإدارة الوقت: عندما يعمل المراهق بنصف دوام، فإنه يتعلم كيفية إدارة وقته بشكل فعال وكيفية تقسيم ساعات عمله إلى مهام أصغر.
- يحسن مهارات التواصل والعمل الجماعي: غالبًا ما يكون مكان العمل بيئة متعددة الثقافات حيث يلتقي المراهقون بمجموعة متنوعة من الأشخاص ويتعلمون كيفية التعاون معهم وتبادل الأفكار.
- يوفر فرصة للتعرف على عالم العمل: يتيح العمل للمراهق التعرف على عالم العمل واكتساب خبرة عملية قيمة ستساعده لاحقًا عند البحث عن وظيفة أو بدء حياته المهنية.
- يعزز الشعور بالمسؤولية والثقة بالنفس: عندما يعمل المراهق بنصف دوام ويحصل على دخل خاص به، فإنه يكتسب شعورًا أكبر بالمسؤولية والثقة بالنفس.
ومع ذلك، لا ينبغي إجبار المراهقين على العمل بنصف دوام دون موافقتهم المسبقة. يجب أيضًا مراعاة أعمارهم وقدراتهم البدنية والعقلية قبل توظيفهم.
```
**DISCLAIMER**: Text ordering may appear off for the Arabic example because the code block cannot render right-to-left text correctly
#### Chinese
```
<human>: 请向我推荐一部周末看的电影并说明原因。
<bot>: “强烈向您推荐《美丽人生》这部影片。该片由罗伯托·贝尼尼自导自演,讲述了二战时期一对犹太父子被送进了纳粹集中营,父亲用自己的想象力为儿子编织了一个美好童真的世界,让儿子的心灵没有受到丝毫伤害的故事。
该片被誉为“最伟大的战争片”、“人类有史以来最佳影片之一”,豆瓣评分高达9.6分。它不仅是一部感人的亲情片,更是一部富含哲理的励志片和人生启示录。片中父亲的乐观豁达、幽默风趣的人生态度,对任何事都抱有积极热情的态度,即使在残酷无比的集中营里也是如此,深深地感染着观众。同时,这也是一部关于爱与希望的电影,让我们更加珍惜当下所拥有的一切,学会如何去爱身边的每一个人。
总之,这是一部值得一看再看的好电影,相信您一定会喜欢的!
```
#### French
```
<human>: Il y a t’il un petit-déjeuner de 700-1000 calories sans oeuf mais qui inclut des protéines.
<bot>: Oui, voici une recette de petit déjeuner à base d'avoine et de fromage blanc :
Ingrédients :
- 1 tasse (250 ml) de lait d'amande ou autre lait végétal
- 2 cuillères à soupe bombées (30 g) de flocons d'avoine
- 1/2 banane mûre écrasée
- 1/4 de pomme fraîche épluchée et coupée en dés
- 1 cuillère à soupe de beurre d'amande ou autre purée d'oléagineux
- 1 cuillère à café de cannelle moulue
- 1 pincée de sel
Préparation :
1. Dans une petite casserole, faites chauffer le lait jusqu'à ce qu'il soit chaud.
2. Ajoutez les flocons d'avoine, la banane, la pomme, le beurre d'amande, la cannelle et le sel dans le bol du mixeur et mélangez bien pour obtenir une pâte homogène.
3. Versez le mélange dans le lait chaud et remuez rapidement avec une spatule en bois pour éviter que les flocons d'avoine ne s'agglutinent au fond de la casserole.
4. Laissez mijoter sur feu doux pendant environ 5 minutes tout en continuant de remuer régulièrement.
5. Transférez ensuite le porridge dans un bol et servez immédiatement. Vous pouvez ajouter quelques fruits rouges frais pour décorer et apporter encore plus de gourmandise à votre petit déjeuner ! Bon appétit !
```
#### Hindi
```
<human>: दिल्ली से जयपुर कैसे जाये? बस से या कार से? दोनो के बारे मे विस्तार मे बताये.
<bot>: दिल्ली से जयपुर जाने का सबसे आसान और तेज़ तरीका है कि आप अपनी खुद की कार लेकर जाएं, क्योंकि यह आपको अपने समय पर निर्भर रहने की स्वतंत्रता देता है। हालांकि, यदि आपके पास अपनी कार नहीं है, तो आप दिल्ली से जयपुर तक एक निजी वाहन किराए पर ले सकते हैं। यदि आप सार्वजनिक परिवहन (बस) का उपयोग करना चाहते हैं, तो दिल्ली में कई सरकारी और निजी बस सेवाएं उपलब्ध हैं जो जयपुर को जोड़ती हैं। दिल्ली से जयपुर जाने वाली कुछ प्रमुख बसें हैं-
1. राजस्थान रोडवेज: ये राज्य सरकार द्वारा संचालित बसें हैं जो दिल्ली से जयपुर के बीच चलती हैं। ये नियमित रूप से चलती हैं और इनके लिए अग्रिम आरक्षण आवश्यक नहीं है।
2. ALSA: यह एक निजी कंपनी है जो दिल्ली से जयपुर के बीच नियमित रूप से चलती है। इसके टिकटों की बुकिंग ऑनलाइन और ऑफलाइन दोनों माध्यमों से की जा सकती है।
3. RedBus: यह एक ऑनलाइन बस बुकिंग वेबसाइट है जहां से आप दिल्ली से जयपुर के बीच चलनेवाली विभिन्न कंपनियों की बसों में सीट आरक्षित करा सकते हैं।
4. Ola and Uber: इन ऐप्स का उपयोग करके आप दिल्ली से जयपुर तक कैब राइड बुक कर सकते हैं, लेकिन ध्यान रखें कि यह विकल्प सस्ता नहीं होगा। दिल्ली से जयपुर जाने का सबसे अच्छा विकल्प आपकी व्यक्तिगत आवश्यकताओं, बजट, और उपलब्धता पर निर्भर करता है। यदि आप अपनी खुद की कार रखते हैं, तो इसका लाभ उठाएं अन्यथा किसी भी सार्वजनिक परिवहन साधन का चयन करें जो आपकी आवश्यकताओं को पूरा करे।
```
#### Spanish
```
<human>: ¿Me puedes dar algunas recomendaciones de juegos? Estoy buscando juegos de rol con un sistema de magia único, un sistema de construcción único y una buena historia. Por favor limítate a juegos que se puedan jugar en PC directamente, o por emulación.
<bot>: Claro! Aquí te dejo tres recomendaciones:
1. The Outer Worlds: es un juego de rol de ciencia ficción con un sistema de magia basado en la tecnología alienígena que permite al jugador personalizar sus habilidades para adaptarse a su estilo de juego.
2. Divinity: Original Sin 2 - Definitive Edition: este juego combina elementos de RPG con estrategia en tiempo real (RTS), lo que le da al jugador un control total sobre cómo construir y mejorar su personaje y el mundo que lo rodea.
3. Pillars of Eternity: esta aventura de fantasía oscura tiene lugar en un vasto mundo abierto donde los jugadores pueden elegir entre diferentes clases y desarrollar relaciones con otros personajes del juego. La mecánica de combate está basada en turnos y utiliza hechizos y armas cuerpo a cuerpo.
Estos son solo algunos ejemplos de juegos que podrían interesarte según tus criterios. Hay muchos más títulos disponibles en el mercado que podrían ajustarse a tu gusto, así que no dudes en explorar otras opciones si estos juegos no cumplen con tus expectativas.
```
</details>
---
## Evaluation Graphs
<details>
<summary>Click to expand</summary>
<!-- This section describes the evaluation protocols and provides the results. -->

<figure style="text-align:center;">
<figcaption><b>BLOOMChat vs Baselines Model in Human Preference Rankings</b></figcaption>
</figure>
<br>

<figure style="text-align:center;">
<figcaption><b>BLOOMChat vs GPT-4 in Human Preference Ranking</b></figcaption>
</figure>
<br>

<figure style="text-align:center;">
<figcaption><b>BLOOMChat surpasses other Bloom variants and state-of-the-art open-source chat models in translation tasks [NOTE: Evaluation of the BLOOM and BLOOMZ in WMT18 en->zh zh->en used (human, bot) ChatML tags due to an unintentional configuration. Results might be suboptimal.]</b></figcaption>
</figure>
<br>
</details>
---
## Training Details
<details>
<summary>Click to expand</summary>
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- [OIG dataset from OpenChatKit](https://huggingface.co/datasets/laion/OIG)
- [Dolly 2.0](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
- [Oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1)
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
We trained BLOOMChat with [SambaNova DataScale systems](https://sambanova.ai/products/datascale/) with SambaNova's in-house Reconfigurable Dataflow Unit (RDU). We started from [BLOOM (176B)](https://huggingface.co/bigscience/bloom), an open-source multilingual LLM pretrained by the [BigScience group](https://huggingface.co/bigscience). We instruction-tune BLOOM (176B) on OpenChatKit with each data source subsampled to 100k for one epoch, followed by three epochs over the combined OpenChatKit and Dolly 2.0.
All of the code used to prepare the datasets and the scripts to run training and inference are open-sourced and freely available at [sambanova/bloomchat](https://github.com/sambanova/bloomchat/tree/main)
### Prompting Style Used For Training
```
<human>: {input1 that the user wants from the bot}
<bot>: {response1}</s>
<human>: {input2 that the user wants from the bot}
<bot>: {response2}</s>
```
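A small sketch of how a multi-turn prompt in this format could be assembled programmatically (the helper below is hypothetical, not part of the released code):

```python
# Hypothetical helper that builds a prompt in the training format shown above.
def build_prompt(turns):
    """turns: list of (human_text, bot_text or None) pairs; a None bot slot is left open."""
    parts = []
    for human, bot in turns:
        parts.append(f"<human>: {human}")
        if bot is None:
            parts.append("<bot>:")  # left open for the model to complete
        else:
            parts.append(f"<bot>: {bot}</s>")
    return "\n".join(parts)

print(build_prompt([
    ("What is the capital of France?", "The capital of France is Paris."),
    ("And of Italy?", None),
]))
```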
### Hyperparameters
**Instruction-tuned Training on OIG**
- Hardware: SambaNova Reconfigurable Dataflow Unit (RDU)
- Optimizer: AdamW
- Grad accumulation: 1
- Epochs: 1
- Global Batch size: 128
- Batch tokens: 128 * 2048 = 262,144 tokens
- Learning Rate: 1e-5
- Learning Rate Scheduler: Cosine Schedule with Warmup
- Warmup Steps: 0
- End Learning Ratio: 0.1
- Weight decay: 0.1
**Instruction-tuned Training on Dolly 2.0 and Oasst1**
- Hardware: SambaNova Reconfigurable Dataflow Unit (RDU)
- Optimizer: AdamW
- Grad accumulation: 1
- Epochs: 3
- Global Batch size: 128
- Batch tokens: 128 * 2048 = 262,144 tokens
- Learning Rate: 1e-5
- Learning Rate Scheduler: Cosine Schedule with Warmup
- Warmup Steps: 0
- End Learning Ratio: 0.1
- Weight decay: 0.1
</details>
---
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Like all LLMs, BLOOMChat has certain limitations:
- Hallucination: BLOOMChat may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: BLOOMChat may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: BLOOMChat may inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgment
We would like to extend our gratitude to [Together](https://www.together.xyz/) for their insightful technical discussions on overall project planning, data processing, model training, human evaluation experiment design, open-source endeavors, and their contributions on data processing code on OpenChatKit, OASST1, and Dolly 2.0.
We are grateful to the various researchers and open-source projects that have contributed to the development of BLOOMChat. We thank [BigScience](https://bigscience.huggingface.co/) for providing the [BLOOM](https://huggingface.co/bigscience/bloom) model, which served as the base for our instruction tuning. We also thank [LAION](https://laion.ai/) for their [OIG dataset](https://huggingface.co/datasets/laion/OIG) and the OpenAssistant Conversations Dataset ([OASST1](https://huggingface.co/datasets/OpenAssistant/oasst1)), and [Databricks](https://www.databricks.com/) for providing [Dolly 2.0](https://huggingface.co/datasets/databricks/databricks-dolly-15k), the datasets that we instruction-tuned on.
We appreciate [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [BigScience](https://bigscience.huggingface.co/) for their essential benchmarking contributions, which is very helpful in evaluating BLOOMChat's performance. We appreciate the inspiration from the wave of various recent open-source chat models, including [OpenAssistant-30B](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor), [LLaMA-Adapter-V2-65B](https://github.com/ZrrSkywalker/LLaMA-Adapter/tree/main/llama_adapter_v2_chat65b), [Vicuna-13b](https://huggingface.co/lmsys/vicuna-13b-delta-v0), [Koala-13b](https://huggingface.co/TheBloke/koala-13B-HF), [OASST-Pythia-12b](https://huggingface.co/OpenAssistant/oasst-sft-1-pythia-12b), [Alpaca-13b](https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g), [ChatGLM-6b](https://github.com/THUDM/ChatGLM-6B), [FastChat-T5-3b](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0), [Dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b), [LLaMA-13b](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/), [StableLM-Tuned-Alpha-7b](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b), [RedPajama-INCITE-Chat-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-7B-v0.1), [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat) and so on. We look forward to witnessing the continued growth and success of open-source chat-based models.
We highly appreciate the hard work and dedication of these researchers and organizations towards the advancement of the open-source community. Their contributions were invaluable in the development of BLOOMChat, and we hope that our model can contribute to further advancements in the field.
## Cite BLOOMChat
```
@software{bloomchat,
title = {{BLOOMChat: a New Open Multilingual Chat LLM}},
author = {SambaNova Systems, Together Computer},
url = {https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1},
month = {5},
year = {2023},
version = {1.0},
}
```
|
jordyvl/dit-tiny_tobacco3482_kd_MSE
|
jordyvl
| 2023-07-07T22:00:12Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-07T21:48:19Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-tiny_tobacco3482_kd_MSE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-tiny_tobacco3482_kd_MSE
This model is a fine-tuned version of [microsoft/dit-base](https://huggingface.co/microsoft/dit-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8328
- Accuracy: 0.19
- Brier Loss: 0.8942
- Nll: 7.0296
- F1 Micro: 0.19
- F1 Macro: 0.0703
- Ece: 0.2429
- Aurc: 0.8146
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 0.96 | 3 | 7.1188 | 0.145 | 0.9003 | 10.1627 | 0.145 | 0.0253 | 0.2218 | 0.8463 |
| No log | 1.96 | 6 | 7.0608 | 0.145 | 0.8969 | 9.8809 | 0.145 | 0.0253 | 0.2197 | 0.8454 |
| No log | 2.96 | 9 | 6.9777 | 0.145 | 0.8929 | 8.9712 | 0.145 | 0.0442 | 0.2065 | 0.7921 |
| No log | 3.96 | 12 | 6.9144 | 0.17 | 0.8908 | 4.9924 | 0.17 | 0.0413 | 0.2325 | 0.7807 |
| No log | 4.96 | 15 | 6.8797 | 0.145 | 0.8912 | 6.8983 | 0.145 | 0.0399 | 0.2089 | 0.7932 |
| No log | 5.96 | 18 | 6.8636 | 0.085 | 0.8926 | 6.9917 | 0.085 | 0.0299 | 0.1822 | 0.8755 |
| No log | 6.96 | 21 | 6.8545 | 0.075 | 0.8946 | 7.0604 | 0.075 | 0.0307 | 0.1849 | 0.8758 |
| No log | 7.96 | 24 | 6.8486 | 0.06 | 0.8958 | 7.1035 | 0.06 | 0.0230 | 0.1801 | 0.8891 |
| No log | 8.96 | 27 | 6.8455 | 0.165 | 0.8967 | 7.1315 | 0.165 | 0.0604 | 0.2414 | 0.8438 |
| No log | 9.96 | 30 | 6.8450 | 0.185 | 0.8973 | 7.1546 | 0.185 | 0.0468 | 0.2477 | 0.8436 |
| No log | 10.96 | 33 | 6.8438 | 0.18 | 0.8969 | 7.1569 | 0.18 | 0.0308 | 0.2406 | 0.8504 |
| No log | 11.96 | 36 | 6.8414 | 0.18 | 0.8962 | 7.1492 | 0.18 | 0.0306 | 0.2510 | 0.8501 |
| No log | 12.96 | 39 | 6.8390 | 0.18 | 0.8958 | 7.1455 | 0.18 | 0.0306 | 0.2374 | 0.8494 |
| No log | 13.96 | 42 | 6.8365 | 0.18 | 0.8950 | 7.0793 | 0.18 | 0.0306 | 0.2436 | 0.8488 |
| No log | 14.96 | 45 | 6.8349 | 0.18 | 0.8944 | 7.0591 | 0.18 | 0.0306 | 0.2369 | 0.8486 |
| No log | 15.96 | 48 | 6.8338 | 0.18 | 0.8942 | 7.0493 | 0.18 | 0.0306 | 0.2396 | 0.8482 |
| No log | 16.96 | 51 | 6.8335 | 0.18 | 0.8940 | 7.0429 | 0.18 | 0.0309 | 0.2390 | 0.8486 |
| No log | 17.96 | 54 | 6.8341 | 0.18 | 0.8943 | 7.0410 | 0.18 | 0.0314 | 0.2351 | 0.8514 |
| No log | 18.96 | 57 | 6.8338 | 0.19 | 0.8943 | 7.0391 | 0.19 | 0.0495 | 0.2480 | 0.8471 |
| No log | 19.96 | 60 | 6.8335 | 0.205 | 0.8943 | 7.0342 | 0.205 | 0.0722 | 0.2562 | 0.8204 |
| No log | 20.96 | 63 | 6.8334 | 0.2 | 0.8942 | 7.0308 | 0.2000 | 0.0683 | 0.2541 | 0.8199 |
| No log | 21.96 | 66 | 6.8332 | 0.195 | 0.8942 | 7.0296 | 0.195 | 0.0714 | 0.2511 | 0.8099 |
| No log | 22.96 | 69 | 6.8330 | 0.195 | 0.8942 | 7.0297 | 0.195 | 0.0717 | 0.2572 | 0.8123 |
| No log | 23.96 | 72 | 6.8329 | 0.19 | 0.8942 | 7.0294 | 0.19 | 0.0703 | 0.2459 | 0.8148 |
| No log | 24.96 | 75 | 6.8328 | 0.19 | 0.8942 | 7.0296 | 0.19 | 0.0703 | 0.2429 | 0.8146 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
HeshamMamdouh/arabart-finetune-sum-v5-fine-tuned
|
HeshamMamdouh
| 2023-07-07T21:56:34Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"mbart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-07T21:53:30Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: arabart-finetune-sum-v5-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# arabart-finetune-sum-v5-fine-tuned
This model is a fine-tuned version of [abdalrahmanshahrour/AraBART-summ](https://huggingface.co/abdalrahmanshahrour/AraBART-summ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.6392
- Validation Loss: 2.9100
- Train Lr: 2e-05
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Lr | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 3.1837 | 2.8523 | 2e-05 | 0 |
| 3.0537 | 2.8248 | 2e-05 | 1 |
| 2.9419 | 2.8509 | 2e-05 | 2 |
| 2.8629 | 2.8580 | 2e-05 | 3 |
| 2.8086 | 2.8829 | 2e-05 | 4 |
| 2.7110 | 2.8474 | 2e-05 | 5 |
| 2.6392 | 2.9100 | 2e-05 | 6 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.13.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Calam1/t5-small-finetuned-wikisql
|
Calam1
| 2023-07-07T21:54:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wikisql",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-07T14:01:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikisql
model-index:
- name: t5-small-finetuned-wikisql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikisql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikisql dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1246
- Rouge2 Precision: 0.8182
- Rouge2 Recall: 0.7261
- Rouge2 Fmeasure: 0.7623
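No usage example is given; a rough inference sketch follows (hedged: the `translate English to SQL:` prefix is an assumption based on common wikisql T5 fine-tunes and may not match this model's actual training format):

```python
# Hedged sketch: querying the fine-tuned model. The task prefix is an assumption.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Calam1/t5-small-finetuned-wikisql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "translate English to SQL: How many heads of departments are older than 56?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```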
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.1953 | 1.0 | 4049 | 0.1574 | 0.7939 | 0.7035 | 0.739 |
| 0.1644 | 2.0 | 8098 | 0.1375 | 0.8082 | 0.7167 | 0.7527 |
| 0.1517 | 3.0 | 12147 | 0.1296 | 0.8141 | 0.7223 | 0.7584 |
| 0.146 | 4.0 | 16196 | 0.1256 | 0.817 | 0.7253 | 0.7613 |
| 0.1413 | 5.0 | 20245 | 0.1246 | 0.8182 | 0.7261 | 0.7623 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
andressrg/textual_inversion_meal_0_100
|
andressrg
| 2023-07-07T21:52:33Z | 32 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-07T21:40:21Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - andressrg/textual_inversion_meal_0_100
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
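A minimal usage sketch with `diffusers` (hedged: it assumes the weights follow the standard `learned_embeds` layout, and the placeholder token below is illustrative; use whatever token was defined during training):

```python
# Hedged sketch: load these textual inversion weights into Stable Diffusion v1-5.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this repository.
pipe.load_textual_inversion("andressrg/textual_inversion_meal_0_100")

# Replace <meal> with the placeholder token actually used during training.
image = pipe("a photo of <meal> on a wooden table").images[0]
image.save("meal.png")
```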
|
LanzerPotaz/Dumb_Huggy
|
LanzerPotaz
| 2023-07-07T21:29:21Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-07T21:29:15Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: LanzerPotaz/Dumb_Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
juanka0357/bitcoin-sentiment-analisis
|
juanka0357
| 2023-07-07T21:28:49Z | 0 | 0 | null |
[
"text2text-generation",
"region:us"
] |
text2text-generation
| 2023-07-07T20:54:05Z |
---
pipeline_tag: text2text-generation
---
|
openlm-research/open_llama_7b_v2_easylm
|
openlm-research
| 2023-07-07T21:26:52Z | 0 | 4 | null |
[
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2302.13971",
"license:apache-2.0",
"region:us"
] | null | 2023-07-07T19:52:09Z |
---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
---
# OpenLLaMA: An Open Reproduction of LLaMA
**TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations.
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 model is better than the old v1 model trained on a different data mixture. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
## v2 models
model_path = 'openlm-research/open_llama_7b_v2'
## v1 models
# model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
# model_path = 'openlm-research/open_llama_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch so it is no longer needed to obtain the original LLaMA tokenizer and weights.
## Dataset and Training
The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange parts of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX-based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall, we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B |
| ---------------------- | -------- | -------- | --------- | -------------- | ------------ | ------------ | ------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.34 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 |
| anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.39 | 0.35 | 0.38 | 0.40 |
| arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.39 | 0.34 | 0.37 | 0.41 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.41 | 0.37 | 0.38 | 0.44 |
| arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.73 | 0.69 | 0.72 | 0.75 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.70 | 0.65 | 0.68 | 0.70 |
| boolq/acc | 0.66 | 0.75 | 0.71 | 0.72 | 0.68 | 0.71 | 0.75 |
| hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.56 | 0.49 | 0.53 | 0.56 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.75 | 0.67 | 0.72 | 0.76 |
| openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.30 | 0.27 | 0.30 | 0.31 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.41 | 0.40 | 0.40 | 0.43 |
| piqa/acc | 0.75 | 0.78 | 0.79 | 0.79 | 0.75 | 0.76 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.80 | 0.76 | 0.77 | 0.79 |
| record/em | 0.88 | 0.91 | 0.92 | 0.89 | 0.88 | 0.89 | 0.91 |
| record/f1 | 0.89 | 0.91 | 0.92 | 0.89 | 0.89 | 0.90 | 0.91 |
| rte/acc | 0.54 | 0.56 | 0.69 | 0.57 | 0.58 | 0.60 | 0.64 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.23 | 0.22 | 0.23 | 0.25 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.38 |
| wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 |
| winogrande/acc | 0.64 | 0.68 | 0.70 | 0.66 | 0.62 | 0.67 | 0.70 |
| Average | 0.52 | 0.55 | 0.57 | 0.56 | 0.53 | 0.55 | 0.57 |
We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to especially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B v1 model was trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
openlm-research/open_llama_7b_v2
|
openlm-research
| 2023-07-07T21:26:13Z | 3,256 | 116 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2302.13971",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-06T08:23:04Z |
---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
library_name: transformers
---
# OpenLLaMA: An Open Reproduction of LLaMA
**TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations.
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 model is better than the old v1 model trained on a different data mixture. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
## v2 models
model_path = 'openlm-research/open_llama_7b_v2'
## v1 models
# model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
# model_path = 'openlm-research/open_llama_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch so it is no longer needed to obtain the original LLaMA tokenizer and weights.
## Dataset and Training
The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange parts of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX-based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall, we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B |
| ---------------------- | -------- | -------- | --------- | -------------- | ------------ | ------------ | ------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.34 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 |
| anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.39 | 0.35 | 0.38 | 0.40 |
| arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.39 | 0.34 | 0.37 | 0.41 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.41 | 0.37 | 0.38 | 0.44 |
| arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.73 | 0.69 | 0.72 | 0.75 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.70 | 0.65 | 0.68 | 0.70 |
| boolq/acc | 0.66 | 0.75 | 0.71 | 0.72 | 0.68 | 0.71 | 0.75 |
| hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.56 | 0.49 | 0.53 | 0.56 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.75 | 0.67 | 0.72 | 0.76 |
| openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.30 | 0.27 | 0.30 | 0.31 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.41 | 0.40 | 0.40 | 0.43 |
| piqa/acc | 0.75 | 0.78 | 0.79 | 0.79 | 0.75 | 0.76 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.80 | 0.76 | 0.77 | 0.79 |
| record/em | 0.88 | 0.91 | 0.92 | 0.89 | 0.88 | 0.89 | 0.91 |
| record/f1 | 0.89 | 0.91 | 0.92 | 0.89 | 0.89 | 0.90 | 0.91 |
| rte/acc | 0.54 | 0.56 | 0.69 | 0.57 | 0.58 | 0.60 | 0.64 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.23 | 0.22 | 0.23 | 0.25 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.38 |
| wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 |
| winogrande/acc | 0.64 | 0.68 | 0.70 | 0.66 | 0.62 | 0.67 | 0.70 |
| Average | 0.52 | 0.55 | 0.57 | 0.56 | 0.53 | 0.55 | 0.57 |
We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to especially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B v1 model was trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
TheBloke/WizardLM-Uncensored-Falcon-7B-GGML
|
TheBloke
| 2023-07-07T21:24:51Z | 19 | 30 |
transformers
|
[
"transformers",
"falcon",
"license:other",
"region:us"
] | null | 2023-06-21T13:32:42Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Eric Hartford's WizardLM Uncensored Falcon 7B GGML
These files are GGML format model files for [Eric Hartford's WizardLM Uncensored Falcon 7B](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-7b).
These files will **not** work in llama.cpp, text-generation-webui or KoboldCpp.
GGCC is a new format created by a fork of llama.cpp that introduced this Falcon GGML-based support: [cmp-nct/ggllm.cpp](https://github.com/cmp-nct/ggllm.cpp).
Currently these files also do not work with code that previously supported Falcon, such as LoLLMs Web UI and ctransformers, but support should be added soon.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardLM-Uncensored-Falcon-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-7b)
## Prompt template: WizardLM
```
prompt
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
To build cmp-nct's fork of llama.cpp with Falcon support plus CUDA acceleration, please try the following steps:
```
git clone https://github.com/cmp-nct/ggllm.cpp
cd ggllm.cpp
rm -rf build && mkdir build && cd build && cmake -DGGML_CUBLAS=1 .. && cmake --build . --config Release
```
Compiling on Windows: developer cmp-nct notes: 'I personally compile it using VScode. When compiling with CUDA support using the Microsoft compiler it's essential to select the "Community edition build tools". Otherwise CUDA won't compile.'
Once compiled you can then use `bin/falcon_main` just like you would use llama.cpp. For example:
```
bin/falcon_main -t 8 -ngl 100 -b 1 -m wizardlm-7b-uncensored.ggccv1.q4_0.bin -enc -p "write a story about llamas"
```
Parameter `-enc` should automatically use the right prompt template for the model, so you can just enter your desired prompt.
You can specify `-ngl 100` regardless of your VRAM, as it will automatically detect how much VRAM is available to be used.
Adjust `-t 8` (the number of CPU cores to use) according to what performs best on your system. Do not exceed the number of physical CPU cores you have.
`-b 1` reduces batch size to 1. This slightly slows prompt evaluation, but frees up VRAM to load more of the model on to your GPU. If you find prompt evaluation too slow and have enough spare VRAM, you can remove this parameter.
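Putting these flags together, a hypothetical invocation for a machine with 12 physical cores and enough spare VRAM to drop `-b 1` might look like the following; the model filename and core count are only illustrative:
```
bin/falcon_main -t 12 -ngl 100 -m wizardlm-7b-uncensored.ggccv1.q5_0.bin -enc -p "write a story about llamas"
```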
Please see https://github.com/cmp-nct/ggllm.cpp for further details and instructions.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| wizardlm-7b-uncensored.ggccv1.q4_0.bin | q4_0 | 4 | 4.06 GB| 6.56 GB | Original quant method, 4-bit. |
| wizardlm-7b-uncensored.ggccv1.q4_1.bin | q4_1 | 4 | 4.51 GB| 7.01 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| wizardlm-7b-uncensored.ggccv1.q5_0.bin | q5_0 | 5 | 4.96 GB| 7.46 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| wizardlm-7b-uncensored.ggccv1.q5_1.bin | q5_1 | 5 | 5.42 GB| 7.92 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| wizardlm-7b-uncensored.ggccv1.q8_0.bin | q8_0 | 8 | 7.67 GB| 10.17 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Eric Hartford's WizardLM Uncensored Falcon 7B
This is WizardLM trained on top of tiiuae/falcon-7b, with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
Prompt format is Wizardlm.
```
What is a falcon? Can I keep one as a pet?
### Response:
```
|
odunola/e5-base-v2
|
odunola
| 2023-07-07T21:16:05Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"mteb",
"feature-extraction",
"en",
"arxiv:2212.03533",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-07-07T20:58:07Z |
---
tags:
- mteb
model-index:
- name: e5-base-v2
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.77611940298506
- type: ap
value: 42.052710266606056
- type: f1
value: 72.12040628266567
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.81012500000001
- type: ap
value: 89.4213700757244
- type: f1
value: 92.8039091197065
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.711999999999996
- type: f1
value: 46.11544975436018
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.186
- type: map_at_10
value: 36.632999999999996
- type: map_at_100
value: 37.842
- type: map_at_1000
value: 37.865
- type: map_at_3
value: 32.278
- type: map_at_5
value: 34.760999999999996
- type: mrr_at_1
value: 23.400000000000002
- type: mrr_at_10
value: 36.721
- type: mrr_at_100
value: 37.937
- type: mrr_at_1000
value: 37.96
- type: mrr_at_3
value: 32.302
- type: mrr_at_5
value: 34.894
- type: ndcg_at_1
value: 23.186
- type: ndcg_at_10
value: 44.49
- type: ndcg_at_100
value: 50.065000000000005
- type: ndcg_at_1000
value: 50.629999999999995
- type: ndcg_at_3
value: 35.461
- type: ndcg_at_5
value: 39.969
- type: precision_at_1
value: 23.186
- type: precision_at_10
value: 6.97
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.912
- type: precision_at_5
value: 11.152
- type: recall_at_1
value: 23.186
- type: recall_at_10
value: 69.70100000000001
- type: recall_at_100
value: 95.092
- type: recall_at_1000
value: 99.431
- type: recall_at_3
value: 44.737
- type: recall_at_5
value: 55.761
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 46.10312401440185
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 39.67275326095384
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.97793816337376
- type: mrr
value: 72.76832431957087
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.11646947018187
- type: cos_sim_spearman
value: 81.40064994975234
- type: euclidean_pearson
value: 82.37355689019232
- type: euclidean_spearman
value: 81.6777646977348
- type: manhattan_pearson
value: 82.61101422716945
- type: manhattan_spearman
value: 81.80427360442245
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 83.52922077922076
- type: f1
value: 83.45298679360866
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.495115019668496
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 32.724792944166765
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.361000000000004
- type: map_at_10
value: 43.765
- type: map_at_100
value: 45.224
- type: map_at_1000
value: 45.35
- type: map_at_3
value: 40.353
- type: map_at_5
value: 42.195
- type: mrr_at_1
value: 40.629
- type: mrr_at_10
value: 50.458000000000006
- type: mrr_at_100
value: 51.06699999999999
- type: mrr_at_1000
value: 51.12
- type: mrr_at_3
value: 47.902
- type: mrr_at_5
value: 49.447
- type: ndcg_at_1
value: 40.629
- type: ndcg_at_10
value: 50.376
- type: ndcg_at_100
value: 55.065
- type: ndcg_at_1000
value: 57.196000000000005
- type: ndcg_at_3
value: 45.616
- type: ndcg_at_5
value: 47.646
- type: precision_at_1
value: 40.629
- type: precision_at_10
value: 9.785
- type: precision_at_100
value: 1.562
- type: precision_at_1000
value: 0.2
- type: precision_at_3
value: 22.031
- type: precision_at_5
value: 15.737000000000002
- type: recall_at_1
value: 32.361000000000004
- type: recall_at_10
value: 62.214000000000006
- type: recall_at_100
value: 81.464
- type: recall_at_1000
value: 95.905
- type: recall_at_3
value: 47.5
- type: recall_at_5
value: 53.69500000000001
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.971
- type: map_at_10
value: 37.444
- type: map_at_100
value: 38.607
- type: map_at_1000
value: 38.737
- type: map_at_3
value: 34.504000000000005
- type: map_at_5
value: 36.234
- type: mrr_at_1
value: 35.35
- type: mrr_at_10
value: 43.441
- type: mrr_at_100
value: 44.147999999999996
- type: mrr_at_1000
value: 44.196000000000005
- type: mrr_at_3
value: 41.285
- type: mrr_at_5
value: 42.552
- type: ndcg_at_1
value: 35.35
- type: ndcg_at_10
value: 42.903999999999996
- type: ndcg_at_100
value: 47.406
- type: ndcg_at_1000
value: 49.588
- type: ndcg_at_3
value: 38.778
- type: ndcg_at_5
value: 40.788000000000004
- type: precision_at_1
value: 35.35
- type: precision_at_10
value: 8.083
- type: precision_at_100
value: 1.313
- type: precision_at_1000
value: 0.18
- type: precision_at_3
value: 18.769
- type: precision_at_5
value: 13.439
- type: recall_at_1
value: 27.971
- type: recall_at_10
value: 52.492000000000004
- type: recall_at_100
value: 71.642
- type: recall_at_1000
value: 85.488
- type: recall_at_3
value: 40.1
- type: recall_at_5
value: 45.800000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.898
- type: map_at_10
value: 51.819
- type: map_at_100
value: 52.886
- type: map_at_1000
value: 52.941
- type: map_at_3
value: 48.619
- type: map_at_5
value: 50.493
- type: mrr_at_1
value: 45.391999999999996
- type: mrr_at_10
value: 55.230000000000004
- type: mrr_at_100
value: 55.887
- type: mrr_at_1000
value: 55.916
- type: mrr_at_3
value: 52.717000000000006
- type: mrr_at_5
value: 54.222
- type: ndcg_at_1
value: 45.391999999999996
- type: ndcg_at_10
value: 57.586999999999996
- type: ndcg_at_100
value: 61.745000000000005
- type: ndcg_at_1000
value: 62.83800000000001
- type: ndcg_at_3
value: 52.207
- type: ndcg_at_5
value: 54.925999999999995
- type: precision_at_1
value: 45.391999999999996
- type: precision_at_10
value: 9.21
- type: precision_at_100
value: 1.226
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 23.177
- type: precision_at_5
value: 16.038
- type: recall_at_1
value: 39.898
- type: recall_at_10
value: 71.18900000000001
- type: recall_at_100
value: 89.082
- type: recall_at_1000
value: 96.865
- type: recall_at_3
value: 56.907
- type: recall_at_5
value: 63.397999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.706
- type: map_at_10
value: 30.818
- type: map_at_100
value: 32.038
- type: map_at_1000
value: 32.123000000000005
- type: map_at_3
value: 28.077
- type: map_at_5
value: 29.709999999999997
- type: mrr_at_1
value: 24.407
- type: mrr_at_10
value: 32.555
- type: mrr_at_100
value: 33.692
- type: mrr_at_1000
value: 33.751
- type: mrr_at_3
value: 29.848999999999997
- type: mrr_at_5
value: 31.509999999999998
- type: ndcg_at_1
value: 24.407
- type: ndcg_at_10
value: 35.624
- type: ndcg_at_100
value: 41.454
- type: ndcg_at_1000
value: 43.556
- type: ndcg_at_3
value: 30.217
- type: ndcg_at_5
value: 33.111000000000004
- type: precision_at_1
value: 24.407
- type: precision_at_10
value: 5.548
- type: precision_at_100
value: 0.8869999999999999
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 12.731
- type: precision_at_5
value: 9.22
- type: recall_at_1
value: 22.706
- type: recall_at_10
value: 48.772
- type: recall_at_100
value: 75.053
- type: recall_at_1000
value: 90.731
- type: recall_at_3
value: 34.421
- type: recall_at_5
value: 41.427
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.424
- type: map_at_10
value: 21.09
- type: map_at_100
value: 22.264999999999997
- type: map_at_1000
value: 22.402
- type: map_at_3
value: 18.312
- type: map_at_5
value: 19.874
- type: mrr_at_1
value: 16.915
- type: mrr_at_10
value: 25.258000000000003
- type: mrr_at_100
value: 26.228
- type: mrr_at_1000
value: 26.31
- type: mrr_at_3
value: 22.492
- type: mrr_at_5
value: 24.04
- type: ndcg_at_1
value: 16.915
- type: ndcg_at_10
value: 26.266000000000002
- type: ndcg_at_100
value: 32.08
- type: ndcg_at_1000
value: 35.086
- type: ndcg_at_3
value: 21.049
- type: ndcg_at_5
value: 23.508000000000003
- type: precision_at_1
value: 16.915
- type: precision_at_10
value: 5.1
- type: precision_at_100
value: 0.9329999999999999
- type: precision_at_1000
value: 0.131
- type: precision_at_3
value: 10.282
- type: precision_at_5
value: 7.836
- type: recall_at_1
value: 13.424
- type: recall_at_10
value: 38.179
- type: recall_at_100
value: 63.906
- type: recall_at_1000
value: 84.933
- type: recall_at_3
value: 23.878
- type: recall_at_5
value: 30.037999999999997
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.154
- type: map_at_10
value: 35.912
- type: map_at_100
value: 37.211
- type: map_at_1000
value: 37.327
- type: map_at_3
value: 32.684999999999995
- type: map_at_5
value: 34.562
- type: mrr_at_1
value: 32.435
- type: mrr_at_10
value: 41.411
- type: mrr_at_100
value: 42.297000000000004
- type: mrr_at_1000
value: 42.345
- type: mrr_at_3
value: 38.771
- type: mrr_at_5
value: 40.33
- type: ndcg_at_1
value: 32.435
- type: ndcg_at_10
value: 41.785
- type: ndcg_at_100
value: 47.469
- type: ndcg_at_1000
value: 49.685
- type: ndcg_at_3
value: 36.618
- type: ndcg_at_5
value: 39.101
- type: precision_at_1
value: 32.435
- type: precision_at_10
value: 7.642
- type: precision_at_100
value: 1.244
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 17.485
- type: precision_at_5
value: 12.57
- type: recall_at_1
value: 26.154
- type: recall_at_10
value: 54.111
- type: recall_at_100
value: 78.348
- type: recall_at_1000
value: 92.996
- type: recall_at_3
value: 39.189
- type: recall_at_5
value: 45.852
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.308999999999997
- type: map_at_10
value: 35.524
- type: map_at_100
value: 36.774
- type: map_at_1000
value: 36.891
- type: map_at_3
value: 32.561
- type: map_at_5
value: 34.034
- type: mrr_at_1
value: 31.735000000000003
- type: mrr_at_10
value: 40.391
- type: mrr_at_100
value: 41.227000000000004
- type: mrr_at_1000
value: 41.288000000000004
- type: mrr_at_3
value: 37.938
- type: mrr_at_5
value: 39.193
- type: ndcg_at_1
value: 31.735000000000003
- type: ndcg_at_10
value: 41.166000000000004
- type: ndcg_at_100
value: 46.702
- type: ndcg_at_1000
value: 49.157000000000004
- type: ndcg_at_3
value: 36.274
- type: ndcg_at_5
value: 38.177
- type: precision_at_1
value: 31.735000000000003
- type: precision_at_10
value: 7.5569999999999995
- type: precision_at_100
value: 1.2109999999999999
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 17.199
- type: precision_at_5
value: 12.123000000000001
- type: recall_at_1
value: 26.308999999999997
- type: recall_at_10
value: 53.083000000000006
- type: recall_at_100
value: 76.922
- type: recall_at_1000
value: 93.767
- type: recall_at_3
value: 39.262
- type: recall_at_5
value: 44.413000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.391250000000003
- type: map_at_10
value: 33.280166666666666
- type: map_at_100
value: 34.49566666666667
- type: map_at_1000
value: 34.61533333333333
- type: map_at_3
value: 30.52183333333333
- type: map_at_5
value: 32.06608333333333
- type: mrr_at_1
value: 29.105083333333337
- type: mrr_at_10
value: 37.44766666666666
- type: mrr_at_100
value: 38.32491666666667
- type: mrr_at_1000
value: 38.385666666666665
- type: mrr_at_3
value: 35.06883333333333
- type: mrr_at_5
value: 36.42066666666667
- type: ndcg_at_1
value: 29.105083333333337
- type: ndcg_at_10
value: 38.54358333333333
- type: ndcg_at_100
value: 43.833583333333344
- type: ndcg_at_1000
value: 46.215333333333334
- type: ndcg_at_3
value: 33.876
- type: ndcg_at_5
value: 36.05208333333333
- type: precision_at_1
value: 29.105083333333337
- type: precision_at_10
value: 6.823416666666665
- type: precision_at_100
value: 1.1270833333333334
- type: precision_at_1000
value: 0.15208333333333332
- type: precision_at_3
value: 15.696750000000002
- type: precision_at_5
value: 11.193499999999998
- type: recall_at_1
value: 24.391250000000003
- type: recall_at_10
value: 49.98808333333333
- type: recall_at_100
value: 73.31616666666666
- type: recall_at_1000
value: 89.96291666666667
- type: recall_at_3
value: 36.86666666666667
- type: recall_at_5
value: 42.54350000000001
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.995
- type: map_at_10
value: 28.807
- type: map_at_100
value: 29.813000000000002
- type: map_at_1000
value: 29.903000000000002
- type: map_at_3
value: 26.636
- type: map_at_5
value: 27.912
- type: mrr_at_1
value: 24.847
- type: mrr_at_10
value: 31.494
- type: mrr_at_100
value: 32.381
- type: mrr_at_1000
value: 32.446999999999996
- type: mrr_at_3
value: 29.473
- type: mrr_at_5
value: 30.7
- type: ndcg_at_1
value: 24.847
- type: ndcg_at_10
value: 32.818999999999996
- type: ndcg_at_100
value: 37.835
- type: ndcg_at_1000
value: 40.226
- type: ndcg_at_3
value: 28.811999999999998
- type: ndcg_at_5
value: 30.875999999999998
- type: precision_at_1
value: 24.847
- type: precision_at_10
value: 5.244999999999999
- type: precision_at_100
value: 0.856
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 12.577
- type: precision_at_5
value: 8.895999999999999
- type: recall_at_1
value: 21.995
- type: recall_at_10
value: 42.479
- type: recall_at_100
value: 65.337
- type: recall_at_1000
value: 83.23700000000001
- type: recall_at_3
value: 31.573
- type: recall_at_5
value: 36.684
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.751000000000001
- type: map_at_10
value: 21.909
- type: map_at_100
value: 23.064
- type: map_at_1000
value: 23.205000000000002
- type: map_at_3
value: 20.138
- type: map_at_5
value: 20.973
- type: mrr_at_1
value: 19.305
- type: mrr_at_10
value: 25.647
- type: mrr_at_100
value: 26.659
- type: mrr_at_1000
value: 26.748
- type: mrr_at_3
value: 23.933
- type: mrr_at_5
value: 24.754
- type: ndcg_at_1
value: 19.305
- type: ndcg_at_10
value: 25.886
- type: ndcg_at_100
value: 31.56
- type: ndcg_at_1000
value: 34.799
- type: ndcg_at_3
value: 22.708000000000002
- type: ndcg_at_5
value: 23.838
- type: precision_at_1
value: 19.305
- type: precision_at_10
value: 4.677
- type: precision_at_100
value: 0.895
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 10.771
- type: precision_at_5
value: 7.46
- type: recall_at_1
value: 15.751000000000001
- type: recall_at_10
value: 34.156
- type: recall_at_100
value: 59.899
- type: recall_at_1000
value: 83.08
- type: recall_at_3
value: 24.772
- type: recall_at_5
value: 28.009
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.34
- type: map_at_10
value: 32.383
- type: map_at_100
value: 33.629999999999995
- type: map_at_1000
value: 33.735
- type: map_at_3
value: 29.68
- type: map_at_5
value: 31.270999999999997
- type: mrr_at_1
value: 27.612
- type: mrr_at_10
value: 36.381
- type: mrr_at_100
value: 37.351
- type: mrr_at_1000
value: 37.411
- type: mrr_at_3
value: 33.893
- type: mrr_at_5
value: 35.353
- type: ndcg_at_1
value: 27.612
- type: ndcg_at_10
value: 37.714999999999996
- type: ndcg_at_100
value: 43.525000000000006
- type: ndcg_at_1000
value: 45.812999999999995
- type: ndcg_at_3
value: 32.796
- type: ndcg_at_5
value: 35.243
- type: precision_at_1
value: 27.612
- type: precision_at_10
value: 6.465
- type: precision_at_100
value: 1.0619999999999998
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 15.049999999999999
- type: precision_at_5
value: 10.764999999999999
- type: recall_at_1
value: 23.34
- type: recall_at_10
value: 49.856
- type: recall_at_100
value: 75.334
- type: recall_at_1000
value: 91.156
- type: recall_at_3
value: 36.497
- type: recall_at_5
value: 42.769
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.097
- type: map_at_10
value: 34.599999999999994
- type: map_at_100
value: 36.174
- type: map_at_1000
value: 36.398
- type: map_at_3
value: 31.781
- type: map_at_5
value: 33.22
- type: mrr_at_1
value: 31.225
- type: mrr_at_10
value: 39.873
- type: mrr_at_100
value: 40.853
- type: mrr_at_1000
value: 40.904
- type: mrr_at_3
value: 37.681
- type: mrr_at_5
value: 38.669
- type: ndcg_at_1
value: 31.225
- type: ndcg_at_10
value: 40.586
- type: ndcg_at_100
value: 46.226
- type: ndcg_at_1000
value: 48.788
- type: ndcg_at_3
value: 36.258
- type: ndcg_at_5
value: 37.848
- type: precision_at_1
value: 31.225
- type: precision_at_10
value: 7.707999999999999
- type: precision_at_100
value: 1.536
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 17.26
- type: precision_at_5
value: 12.253
- type: recall_at_1
value: 25.097
- type: recall_at_10
value: 51.602000000000004
- type: recall_at_100
value: 76.854
- type: recall_at_1000
value: 93.303
- type: recall_at_3
value: 38.68
- type: recall_at_5
value: 43.258
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.689
- type: map_at_10
value: 25.291000000000004
- type: map_at_100
value: 26.262
- type: map_at_1000
value: 26.372
- type: map_at_3
value: 22.916
- type: map_at_5
value: 24.315
- type: mrr_at_1
value: 19.409000000000002
- type: mrr_at_10
value: 27.233
- type: mrr_at_100
value: 28.109
- type: mrr_at_1000
value: 28.192
- type: mrr_at_3
value: 24.892
- type: mrr_at_5
value: 26.278000000000002
- type: ndcg_at_1
value: 19.409000000000002
- type: ndcg_at_10
value: 29.809
- type: ndcg_at_100
value: 34.936
- type: ndcg_at_1000
value: 37.852000000000004
- type: ndcg_at_3
value: 25.179000000000002
- type: ndcg_at_5
value: 27.563
- type: precision_at_1
value: 19.409000000000002
- type: precision_at_10
value: 4.861
- type: precision_at_100
value: 0.8
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 11.029
- type: precision_at_5
value: 7.985
- type: recall_at_1
value: 17.689
- type: recall_at_10
value: 41.724
- type: recall_at_100
value: 65.95299999999999
- type: recall_at_1000
value: 88.094
- type: recall_at_3
value: 29.621
- type: recall_at_5
value: 35.179
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.581
- type: map_at_10
value: 18.944
- type: map_at_100
value: 20.812
- type: map_at_1000
value: 21.002000000000002
- type: map_at_3
value: 15.661
- type: map_at_5
value: 17.502000000000002
- type: mrr_at_1
value: 23.388
- type: mrr_at_10
value: 34.263
- type: mrr_at_100
value: 35.364000000000004
- type: mrr_at_1000
value: 35.409
- type: mrr_at_3
value: 30.586000000000002
- type: mrr_at_5
value: 32.928000000000004
- type: ndcg_at_1
value: 23.388
- type: ndcg_at_10
value: 26.56
- type: ndcg_at_100
value: 34.248
- type: ndcg_at_1000
value: 37.779
- type: ndcg_at_3
value: 21.179000000000002
- type: ndcg_at_5
value: 23.504
- type: precision_at_1
value: 23.388
- type: precision_at_10
value: 8.476
- type: precision_at_100
value: 1.672
- type: precision_at_1000
value: 0.233
- type: precision_at_3
value: 15.852
- type: precision_at_5
value: 12.73
- type: recall_at_1
value: 10.581
- type: recall_at_10
value: 32.512
- type: recall_at_100
value: 59.313
- type: recall_at_1000
value: 79.25
- type: recall_at_3
value: 19.912
- type: recall_at_5
value: 25.832
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.35
- type: map_at_10
value: 20.134
- type: map_at_100
value: 28.975
- type: map_at_1000
value: 30.709999999999997
- type: map_at_3
value: 14.513000000000002
- type: map_at_5
value: 16.671
- type: mrr_at_1
value: 69.75
- type: mrr_at_10
value: 77.67699999999999
- type: mrr_at_100
value: 77.97500000000001
- type: mrr_at_1000
value: 77.985
- type: mrr_at_3
value: 76.292
- type: mrr_at_5
value: 77.179
- type: ndcg_at_1
value: 56.49999999999999
- type: ndcg_at_10
value: 42.226
- type: ndcg_at_100
value: 47.562
- type: ndcg_at_1000
value: 54.923
- type: ndcg_at_3
value: 46.564
- type: ndcg_at_5
value: 43.830000000000005
- type: precision_at_1
value: 69.75
- type: precision_at_10
value: 33.525
- type: precision_at_100
value: 11.035
- type: precision_at_1000
value: 2.206
- type: precision_at_3
value: 49.75
- type: precision_at_5
value: 42
- type: recall_at_1
value: 9.35
- type: recall_at_10
value: 25.793
- type: recall_at_100
value: 54.186
- type: recall_at_1000
value: 77.81
- type: recall_at_3
value: 15.770000000000001
- type: recall_at_5
value: 19.09
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.945
- type: f1
value: 42.07407842992542
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.04599999999999
- type: map_at_10
value: 80.718
- type: map_at_100
value: 80.961
- type: map_at_1000
value: 80.974
- type: map_at_3
value: 79.49199999999999
- type: map_at_5
value: 80.32000000000001
- type: mrr_at_1
value: 76.388
- type: mrr_at_10
value: 85.214
- type: mrr_at_100
value: 85.302
- type: mrr_at_1000
value: 85.302
- type: mrr_at_3
value: 84.373
- type: mrr_at_5
value: 84.979
- type: ndcg_at_1
value: 76.388
- type: ndcg_at_10
value: 84.987
- type: ndcg_at_100
value: 85.835
- type: ndcg_at_1000
value: 86.04899999999999
- type: ndcg_at_3
value: 83.04
- type: ndcg_at_5
value: 84.22500000000001
- type: precision_at_1
value: 76.388
- type: precision_at_10
value: 10.35
- type: precision_at_100
value: 1.099
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 32.108
- type: precision_at_5
value: 20.033
- type: recall_at_1
value: 71.04599999999999
- type: recall_at_10
value: 93.547
- type: recall_at_100
value: 96.887
- type: recall_at_1000
value: 98.158
- type: recall_at_3
value: 88.346
- type: recall_at_5
value: 91.321
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.8
- type: map_at_10
value: 31.979999999999997
- type: map_at_100
value: 33.876
- type: map_at_1000
value: 34.056999999999995
- type: map_at_3
value: 28.067999999999998
- type: map_at_5
value: 30.066
- type: mrr_at_1
value: 38.735
- type: mrr_at_10
value: 47.749
- type: mrr_at_100
value: 48.605
- type: mrr_at_1000
value: 48.644999999999996
- type: mrr_at_3
value: 45.165
- type: mrr_at_5
value: 46.646
- type: ndcg_at_1
value: 38.735
- type: ndcg_at_10
value: 39.883
- type: ndcg_at_100
value: 46.983000000000004
- type: ndcg_at_1000
value: 50.043000000000006
- type: ndcg_at_3
value: 35.943000000000005
- type: ndcg_at_5
value: 37.119
- type: precision_at_1
value: 38.735
- type: precision_at_10
value: 10.940999999999999
- type: precision_at_100
value: 1.836
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_3
value: 23.817
- type: precision_at_5
value: 17.346
- type: recall_at_1
value: 19.8
- type: recall_at_10
value: 47.082
- type: recall_at_100
value: 73.247
- type: recall_at_1000
value: 91.633
- type: recall_at_3
value: 33.201
- type: recall_at_5
value: 38.81
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.102999999999994
- type: map_at_10
value: 60.547
- type: map_at_100
value: 61.466
- type: map_at_1000
value: 61.526
- type: map_at_3
value: 56.973
- type: map_at_5
value: 59.244
- type: mrr_at_1
value: 76.205
- type: mrr_at_10
value: 82.816
- type: mrr_at_100
value: 83.002
- type: mrr_at_1000
value: 83.009
- type: mrr_at_3
value: 81.747
- type: mrr_at_5
value: 82.467
- type: ndcg_at_1
value: 76.205
- type: ndcg_at_10
value: 69.15
- type: ndcg_at_100
value: 72.297
- type: ndcg_at_1000
value: 73.443
- type: ndcg_at_3
value: 64.07000000000001
- type: ndcg_at_5
value: 66.96600000000001
- type: precision_at_1
value: 76.205
- type: precision_at_10
value: 14.601
- type: precision_at_100
value: 1.7049999999999998
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 41.202
- type: precision_at_5
value: 27.006000000000004
- type: recall_at_1
value: 38.102999999999994
- type: recall_at_10
value: 73.005
- type: recall_at_100
value: 85.253
- type: recall_at_1000
value: 92.795
- type: recall_at_3
value: 61.803
- type: recall_at_5
value: 67.515
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 86.15
- type: ap
value: 80.36282825265391
- type: f1
value: 86.07368510726472
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.6
- type: map_at_10
value: 34.887
- type: map_at_100
value: 36.069
- type: map_at_1000
value: 36.115
- type: map_at_3
value: 31.067
- type: map_at_5
value: 33.300000000000004
- type: mrr_at_1
value: 23.238
- type: mrr_at_10
value: 35.47
- type: mrr_at_100
value: 36.599
- type: mrr_at_1000
value: 36.64
- type: mrr_at_3
value: 31.735999999999997
- type: mrr_at_5
value: 33.939
- type: ndcg_at_1
value: 23.252
- type: ndcg_at_10
value: 41.765
- type: ndcg_at_100
value: 47.402
- type: ndcg_at_1000
value: 48.562
- type: ndcg_at_3
value: 34.016999999999996
- type: ndcg_at_5
value: 38.016
- type: precision_at_1
value: 23.252
- type: precision_at_10
value: 6.569
- type: precision_at_100
value: 0.938
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.479000000000001
- type: precision_at_5
value: 10.722
- type: recall_at_1
value: 22.6
- type: recall_at_10
value: 62.919000000000004
- type: recall_at_100
value: 88.82
- type: recall_at_1000
value: 97.71600000000001
- type: recall_at_3
value: 41.896
- type: recall_at_5
value: 51.537
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.69357045143639
- type: f1
value: 93.55489858177597
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.31235750114
- type: f1
value: 57.891491963121155
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.04303967720243
- type: f1
value: 70.51516022297616
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.65299260255549
- type: f1
value: 77.49059766538576
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.458906115906597
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.9851513122443
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.2916268497217
- type: mrr
value: 32.328276715593816
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.3740000000000006
- type: map_at_10
value: 13.089999999999998
- type: map_at_100
value: 16.512
- type: map_at_1000
value: 18.014
- type: map_at_3
value: 9.671000000000001
- type: map_at_5
value: 11.199
- type: mrr_at_1
value: 46.749
- type: mrr_at_10
value: 55.367
- type: mrr_at_100
value: 56.021
- type: mrr_at_1000
value: 56.058
- type: mrr_at_3
value: 53.30200000000001
- type: mrr_at_5
value: 54.773
- type: ndcg_at_1
value: 45.046
- type: ndcg_at_10
value: 35.388999999999996
- type: ndcg_at_100
value: 32.175
- type: ndcg_at_1000
value: 41.018
- type: ndcg_at_3
value: 40.244
- type: ndcg_at_5
value: 38.267
- type: precision_at_1
value: 46.749
- type: precision_at_10
value: 26.563
- type: precision_at_100
value: 8.074
- type: precision_at_1000
value: 2.099
- type: precision_at_3
value: 37.358000000000004
- type: precision_at_5
value: 33.003
- type: recall_at_1
value: 6.3740000000000006
- type: recall_at_10
value: 16.805999999999997
- type: recall_at_100
value: 31.871
- type: recall_at_1000
value: 64.098
- type: recall_at_3
value: 10.383000000000001
- type: recall_at_5
value: 13.166
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.847
- type: map_at_10
value: 50.532
- type: map_at_100
value: 51.504000000000005
- type: map_at_1000
value: 51.528
- type: map_at_3
value: 46.219
- type: map_at_5
value: 48.868
- type: mrr_at_1
value: 39.137
- type: mrr_at_10
value: 53.157
- type: mrr_at_100
value: 53.839999999999996
- type: mrr_at_1000
value: 53.857
- type: mrr_at_3
value: 49.667
- type: mrr_at_5
value: 51.847
- type: ndcg_at_1
value: 39.108
- type: ndcg_at_10
value: 58.221000000000004
- type: ndcg_at_100
value: 62.021
- type: ndcg_at_1000
value: 62.57
- type: ndcg_at_3
value: 50.27199999999999
- type: ndcg_at_5
value: 54.623999999999995
- type: precision_at_1
value: 39.108
- type: precision_at_10
value: 9.397
- type: precision_at_100
value: 1.1520000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 22.644000000000002
- type: precision_at_5
value: 16.141
- type: recall_at_1
value: 34.847
- type: recall_at_10
value: 78.945
- type: recall_at_100
value: 94.793
- type: recall_at_1000
value: 98.904
- type: recall_at_3
value: 58.56
- type: recall_at_5
value: 68.535
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.728
- type: map_at_10
value: 82.537
- type: map_at_100
value: 83.218
- type: map_at_1000
value: 83.238
- type: map_at_3
value: 79.586
- type: map_at_5
value: 81.416
- type: mrr_at_1
value: 79.17999999999999
- type: mrr_at_10
value: 85.79299999999999
- type: mrr_at_100
value: 85.937
- type: mrr_at_1000
value: 85.938
- type: mrr_at_3
value: 84.748
- type: mrr_at_5
value: 85.431
- type: ndcg_at_1
value: 79.17
- type: ndcg_at_10
value: 86.555
- type: ndcg_at_100
value: 88.005
- type: ndcg_at_1000
value: 88.146
- type: ndcg_at_3
value: 83.557
- type: ndcg_at_5
value: 85.152
- type: precision_at_1
value: 79.17
- type: precision_at_10
value: 13.163
- type: precision_at_100
value: 1.52
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.53
- type: precision_at_5
value: 24.046
- type: recall_at_1
value: 68.728
- type: recall_at_10
value: 94.217
- type: recall_at_100
value: 99.295
- type: recall_at_1000
value: 99.964
- type: recall_at_3
value: 85.646
- type: recall_at_5
value: 90.113
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.15680266226348
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 63.4318549229047
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.353
- type: map_at_10
value: 10.956000000000001
- type: map_at_100
value: 12.873999999999999
- type: map_at_1000
value: 13.177
- type: map_at_3
value: 7.854
- type: map_at_5
value: 9.327
- type: mrr_at_1
value: 21.4
- type: mrr_at_10
value: 31.948999999999998
- type: mrr_at_100
value: 33.039
- type: mrr_at_1000
value: 33.106
- type: mrr_at_3
value: 28.449999999999996
- type: mrr_at_5
value: 30.535
- type: ndcg_at_1
value: 21.4
- type: ndcg_at_10
value: 18.694
- type: ndcg_at_100
value: 26.275
- type: ndcg_at_1000
value: 31.836
- type: ndcg_at_3
value: 17.559
- type: ndcg_at_5
value: 15.372
- type: precision_at_1
value: 21.4
- type: precision_at_10
value: 9.790000000000001
- type: precision_at_100
value: 2.0709999999999997
- type: precision_at_1000
value: 0.34099999999999997
- type: precision_at_3
value: 16.467000000000002
- type: precision_at_5
value: 13.54
- type: recall_at_1
value: 4.353
- type: recall_at_10
value: 19.892000000000003
- type: recall_at_100
value: 42.067
- type: recall_at_1000
value: 69.268
- type: recall_at_3
value: 10.042
- type: recall_at_5
value: 13.741999999999999
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.75433886279843
- type: cos_sim_spearman
value: 78.29727771767095
- type: euclidean_pearson
value: 80.83057828506621
- type: euclidean_spearman
value: 78.35203149750356
- type: manhattan_pearson
value: 80.7403553891142
- type: manhattan_spearman
value: 78.33670488531051
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.59999465280839
- type: cos_sim_spearman
value: 75.79279003980383
- type: euclidean_pearson
value: 82.29895375956758
- type: euclidean_spearman
value: 77.33856514102094
- type: manhattan_pearson
value: 82.22694214534756
- type: manhattan_spearman
value: 77.3028993008695
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.09296929691297
- type: cos_sim_spearman
value: 83.58056936846941
- type: euclidean_pearson
value: 83.84067483060005
- type: euclidean_spearman
value: 84.45155680480985
- type: manhattan_pearson
value: 83.82353052971942
- type: manhattan_spearman
value: 84.43030567861112
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.74616852320915
- type: cos_sim_spearman
value: 79.948683747966
- type: euclidean_pearson
value: 81.55702283757084
- type: euclidean_spearman
value: 80.1721505114231
- type: manhattan_pearson
value: 81.52251518619441
- type: manhattan_spearman
value: 80.1469800135577
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.97170104226318
- type: cos_sim_spearman
value: 88.82021731518206
- type: euclidean_pearson
value: 87.92950547187615
- type: euclidean_spearman
value: 88.67043634645866
- type: manhattan_pearson
value: 87.90668112827639
- type: manhattan_spearman
value: 88.64471082785317
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.02790375770599
- type: cos_sim_spearman
value: 84.46308496590792
- type: euclidean_pearson
value: 84.29430000414911
- type: euclidean_spearman
value: 84.77298303589936
- type: manhattan_pearson
value: 84.23919291368665
- type: manhattan_spearman
value: 84.75272234871308
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.62885108477064
- type: cos_sim_spearman
value: 87.58456196391622
- type: euclidean_pearson
value: 88.2602775281007
- type: euclidean_spearman
value: 87.51556278299846
- type: manhattan_pearson
value: 88.11224053672842
- type: manhattan_spearman
value: 87.4336094383095
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.98187965128411
- type: cos_sim_spearman
value: 64.0653163219731
- type: euclidean_pearson
value: 62.30616725924099
- type: euclidean_spearman
value: 61.556971332295916
- type: manhattan_pearson
value: 62.07642330128549
- type: manhattan_spearman
value: 61.155494129828
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.6089703921826
- type: cos_sim_spearman
value: 86.52303197250791
- type: euclidean_pearson
value: 85.95801955963246
- type: euclidean_spearman
value: 86.25242424112962
- type: manhattan_pearson
value: 85.88829100470312
- type: manhattan_spearman
value: 86.18742955805165
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 83.02282098487036
- type: mrr
value: 95.05126409538174
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.928
- type: map_at_10
value: 67.308
- type: map_at_100
value: 67.89500000000001
- type: map_at_1000
value: 67.91199999999999
- type: map_at_3
value: 65.091
- type: map_at_5
value: 66.412
- type: mrr_at_1
value: 58.667
- type: mrr_at_10
value: 68.401
- type: mrr_at_100
value: 68.804
- type: mrr_at_1000
value: 68.819
- type: mrr_at_3
value: 66.72200000000001
- type: mrr_at_5
value: 67.72200000000001
- type: ndcg_at_1
value: 58.667
- type: ndcg_at_10
value: 71.944
- type: ndcg_at_100
value: 74.464
- type: ndcg_at_1000
value: 74.82799999999999
- type: ndcg_at_3
value: 68.257
- type: ndcg_at_5
value: 70.10300000000001
- type: precision_at_1
value: 58.667
- type: precision_at_10
value: 9.533
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 27.222
- type: precision_at_5
value: 17.533
- type: recall_at_1
value: 55.928
- type: recall_at_10
value: 84.65
- type: recall_at_100
value: 96.267
- type: recall_at_1000
value: 99
- type: recall_at_3
value: 74.656
- type: recall_at_5
value: 79.489
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.79009900990098
- type: cos_sim_ap
value: 94.5795129511524
- type: cos_sim_f1
value: 89.34673366834171
- type: cos_sim_precision
value: 89.79797979797979
- type: cos_sim_recall
value: 88.9
- type: dot_accuracy
value: 99.53465346534654
- type: dot_ap
value: 81.56492504352725
- type: dot_f1
value: 76.33816908454227
- type: dot_precision
value: 76.37637637637637
- type: dot_recall
value: 76.3
- type: euclidean_accuracy
value: 99.78514851485149
- type: euclidean_ap
value: 94.59134620408962
- type: euclidean_f1
value: 88.96484375
- type: euclidean_precision
value: 86.92748091603053
- type: euclidean_recall
value: 91.10000000000001
- type: manhattan_accuracy
value: 99.78415841584159
- type: manhattan_ap
value: 94.5190197328845
- type: manhattan_f1
value: 88.84462151394423
- type: manhattan_precision
value: 88.4920634920635
- type: manhattan_recall
value: 89.2
- type: max_accuracy
value: 99.79009900990098
- type: max_ap
value: 94.59134620408962
- type: max_f1
value: 89.34673366834171
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.1487505617497
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.502518166001856
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.33775480236701
- type: mrr
value: 51.17302223919871
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.561111309808208
- type: cos_sim_spearman
value: 30.2839254379273
- type: dot_pearson
value: 29.560242291401973
- type: dot_spearman
value: 30.51527274679116
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.215
- type: map_at_10
value: 1.752
- type: map_at_100
value: 9.258
- type: map_at_1000
value: 23.438
- type: map_at_3
value: 0.6
- type: map_at_5
value: 0.968
- type: mrr_at_1
value: 84
- type: mrr_at_10
value: 91.333
- type: mrr_at_100
value: 91.333
- type: mrr_at_1000
value: 91.333
- type: mrr_at_3
value: 91.333
- type: mrr_at_5
value: 91.333
- type: ndcg_at_1
value: 75
- type: ndcg_at_10
value: 69.596
- type: ndcg_at_100
value: 51.970000000000006
- type: ndcg_at_1000
value: 48.864999999999995
- type: ndcg_at_3
value: 73.92699999999999
- type: ndcg_at_5
value: 73.175
- type: precision_at_1
value: 84
- type: precision_at_10
value: 74
- type: precision_at_100
value: 53.2
- type: precision_at_1000
value: 21.836
- type: precision_at_3
value: 79.333
- type: precision_at_5
value: 78.4
- type: recall_at_1
value: 0.215
- type: recall_at_10
value: 1.9609999999999999
- type: recall_at_100
value: 12.809999999999999
- type: recall_at_1000
value: 46.418
- type: recall_at_3
value: 0.6479999999999999
- type: recall_at_5
value: 1.057
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.066
- type: map_at_10
value: 10.508000000000001
- type: map_at_100
value: 16.258
- type: map_at_1000
value: 17.705000000000002
- type: map_at_3
value: 6.157
- type: map_at_5
value: 7.510999999999999
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 48.786
- type: mrr_at_100
value: 49.619
- type: mrr_at_1000
value: 49.619
- type: mrr_at_3
value: 45.918
- type: mrr_at_5
value: 46.837
- type: ndcg_at_1
value: 31.633
- type: ndcg_at_10
value: 26.401999999999997
- type: ndcg_at_100
value: 37.139
- type: ndcg_at_1000
value: 48.012
- type: ndcg_at_3
value: 31.875999999999998
- type: ndcg_at_5
value: 27.383000000000003
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 22.857
- type: precision_at_100
value: 7.611999999999999
- type: precision_at_1000
value: 1.492
- type: precision_at_3
value: 33.333
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 3.066
- type: recall_at_10
value: 16.239
- type: recall_at_100
value: 47.29
- type: recall_at_1000
value: 81.137
- type: recall_at_3
value: 7.069
- type: recall_at_5
value: 9.483
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 72.1126
- type: ap
value: 14.710862719285753
- type: f1
value: 55.437808972378846
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.39049235993209
- type: f1
value: 60.69810537250234
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 48.15576640316866
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.52917684925792
- type: cos_sim_ap
value: 75.97497873817315
- type: cos_sim_f1
value: 70.01151926276718
- type: cos_sim_precision
value: 67.98409147402435
- type: cos_sim_recall
value: 72.16358839050132
- type: dot_accuracy
value: 82.47004828038385
- type: dot_ap
value: 62.48739894974198
- type: dot_f1
value: 59.13107511045656
- type: dot_precision
value: 55.27765029830197
- type: dot_recall
value: 63.562005277044854
- type: euclidean_accuracy
value: 86.46361089586935
- type: euclidean_ap
value: 75.59282886839452
- type: euclidean_f1
value: 69.6465443945099
- type: euclidean_precision
value: 64.52847175331982
- type: euclidean_recall
value: 75.64643799472296
- type: manhattan_accuracy
value: 86.43380818978363
- type: manhattan_ap
value: 75.5742420974403
- type: manhattan_f1
value: 69.8636926889715
- type: manhattan_precision
value: 65.8644859813084
- type: manhattan_recall
value: 74.37994722955145
- type: max_accuracy
value: 86.52917684925792
- type: max_ap
value: 75.97497873817315
- type: max_f1
value: 70.01151926276718
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.29056545193464
- type: cos_sim_ap
value: 86.63028865482376
- type: cos_sim_f1
value: 79.18166458532285
- type: cos_sim_precision
value: 75.70585756426465
- type: cos_sim_recall
value: 82.99199260856174
- type: dot_accuracy
value: 85.23305002522606
- type: dot_ap
value: 76.0482687263196
- type: dot_f1
value: 70.80484330484332
- type: dot_precision
value: 65.86933474688577
- type: dot_recall
value: 76.53988296889437
- type: euclidean_accuracy
value: 89.26145845461248
- type: euclidean_ap
value: 86.54073288416006
- type: euclidean_f1
value: 78.9721371479794
- type: euclidean_precision
value: 76.68649354417525
- type: euclidean_recall
value: 81.39821373575609
- type: manhattan_accuracy
value: 89.22847052431405
- type: manhattan_ap
value: 86.51250729037905
- type: manhattan_f1
value: 78.94601825044894
- type: manhattan_precision
value: 75.32694594027555
- type: manhattan_recall
value: 82.93039728980598
- type: max_accuracy
value: 89.29056545193464
- type: max_ap
value: 86.63028865482376
- type: max_f1
value: 79.18166458532285
language:
- en
license: mit
library_name: sentence-transformers
pipeline_tag: feature-extraction
---
# E5-base-v2
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 12 layers and the embedding size is 768.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: summit define',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]
tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-base-v2')
model = AutoModel.from_pretrained('intfloat/e5-base-v2')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
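Alternatively, since this card declares `library_name: sentence-transformers`, the snippet below is a minimal sketch that assumes the repository can also be loaded with the Sentence Transformers library; the `"query: "` / `"passage: "` prefixes are still required:
```python
from sentence_transformers import SentenceTransformer

# Assumes the repo ships a Sentence Transformers configuration
model = SentenceTransformer('intfloat/e5-base-v2')

input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    'passage: As a general guideline, the CDC recommends about 46 grams of protein per day for most women.',
    'passage: A summit is the highest point of a mountain, or a meeting between government leaders.'
]

# Embeddings are L2-normalized so the dot product equals cosine similarity
embeddings = model.encode(input_texts, normalize_embeddings=True)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```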
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Citation
If you find our paper or models helpful, please consider citing it as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens.
|
HeshamMamdouh/mbart-finetune-ar-xlsum-fine-tuned
|
HeshamMamdouh
| 2023-07-07T21:14:04Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"mbart",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-07T21:11:33Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: mbart-finetune-ar-xlsum-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mbart-finetune-ar-xlsum-fine-tuned
This model is a fine-tuned version of [eslamxm/mbart-finetune-ar-xlsum](https://huggingface.co/eslamxm/mbart-finetune-ar-xlsum) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8386
- Validation Loss: 6.2675
- Train Lr: 2e-05
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
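Usage is not documented here; the snippet below is only a minimal inference sketch, assuming the TensorFlow weights load with `TFAutoModelForSeq2SeqLM` and that the model is used for Arabic abstractive summarization like its base model:
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "HeshamMamdouh/mbart-finetune-ar-xlsum-fine-tuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # placeholder: an Arabic news article to summarize
inputs = tokenizer(article, return_tensors="tf", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```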
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Lr | Epoch |
|:----------:|:---------------:|:--------:|:-----:|
| 7.0615 | 6.1894 | 2e-05 | 0 |
| 5.7395 | 5.8670 | 2e-05 | 1 |
| 5.2896 | 5.7020 | 2e-05 | 2 |
| 4.9490 | 5.6279 | 2e-05 | 3 |
| 4.6278 | 5.6189 | 2e-05 | 4 |
| 4.3330 | 5.6275 | 2e-05 | 5 |
| 3.9812 | 5.7291 | 2e-05 | 6 |
| 3.6283 | 5.8438 | 2e-05 | 7 |
| 3.2183 | 6.0378 | 2e-05 | 8 |
| 2.8386 | 6.2675 | 2e-05 | 9 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.13.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
C-Lo/finetuning-sentiment-unfiltered-dataset
|
C-Lo
| 2023-07-07T21:06:12Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T21:03:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-unfiltered-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-unfiltered-dataset
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
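In the meantime, a minimal inference sketch (the label names are whatever was configured at fine-tuning time; this assumes the default binary sentiment setup of the DistilBERT base):
```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="C-Lo/finetuning-sentiment-unfiltered-dataset",
)
print(sentiment("This movie was an absolute delight to watch."))
```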
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
grace-pro/bert-finetuned-ner
|
grace-pro
| 2023-07-07T20:58:47Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-05T19:25:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9349917081260365
- name: Recall
type: recall
value: 0.9488387748232918
- name: F1
type: f1
value: 0.941864350150351
- name: Accuracy
type: accuracy
value: 0.9863130629304763
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0596
- Precision: 0.9350
- Recall: 0.9488
- F1: 0.9419
- Accuracy: 0.9863
## Model description
More information needed
## Intended uses & limitations
More information needed
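A minimal inference sketch with the token-classification pipeline (entity labels follow the CoNLL-2003 scheme used for fine-tuning):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="grace-pro/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```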
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0881 | 1.0 | 1756 | 0.0666 | 0.9162 | 0.9342 | 0.9251 | 0.9825 |
| 0.0332 | 2.0 | 3512 | 0.0608 | 0.9272 | 0.9478 | 0.9374 | 0.9860 |
| 0.0178 | 3.0 | 5268 | 0.0596 | 0.9350 | 0.9488 | 0.9419 | 0.9863 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dracero/ppo-LunarLander-v2
|
dracero
| 2023-07-07T20:56:44Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T20:54:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.61 +/- 16.03
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
The snippet below packages the trained agent and pushes it to the Hub (the repo and environment names are taken from this card; the local save name and commit message are placeholders):
```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import DummyVecEnv
from huggingface_sb3 import package_to_hub

# Values below are taken from this repository
repo_id = "dracero/ppo-LunarLander-v2"   # {organization}/{repo_name} on the Hugging Face Hub
env_id = "LunarLander-v2"                # name of the environment
model_architecture = "PPO"               # the model architecture we used
commit_message = "Upload PPO LunarLander-v2 trained agent"

# `model` is assumed to be your trained agent; here it is loaded from a local save
model = PPO.load("ppo-LunarLander-v2")

# Create the evaluation env and set the render_mode="rgb_array"
eval_env = DummyVecEnv([lambda: Monitor(gym.make(env_id, render_mode="rgb_array"))])

# package_to_hub saves, evaluates, generates a model card and records a replay video
# of the agent before pushing the repo to the hub
package_to_hub(model=model,                            # our trained model
               model_name="ppo-LunarLander-v2",        # the name of our trained model
               model_architecture=model_architecture,  # in our case PPO
               env_id=env_id,                          # name of the environment
               eval_env=eval_env,                      # evaluation environment
               repo_id=repo_id,                        # id of the model repository on the Hub
               commit_message=commit_message)
```
|
vimassaru/segformer-b2-finetuned-teeth-segmentation
|
vimassaru
| 2023-07-07T20:51:12Z | 5 | 0 |
transformers
|
[
"transformers",
"image-segmentation",
"pt",
"dataset:vimassaru/teethsegmentation",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-07-07T19:33:25Z |
---
datasets:
- vimassaru/teethsegmentation
language:
- pt
metrics:
- mean_iou
library_name: transformers
pipeline_tag: image-segmentation
---
|
Tyffuss86/Polsk
|
Tyffuss86
| 2023-07-07T20:46:17Z | 0 | 0 | null |
[
"text-to-video",
"region:us"
] |
text-to-video
| 2023-07-07T20:43:27Z |
---
pipeline_tag: text-to-video
---
|
ALM-AHME/beit-large-patch16-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20
|
ALM-AHME
| 2023-07-07T20:39:27Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-07T18:00:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-large-patch16-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: Augmented-Final
split: train
args: Augmented-Final
metrics:
- name: Accuracy
type: accuracy
value: 0.9907502569373073
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-large-patch16-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0434
- Accuracy: 0.9908
## Model description
More information needed
## Intended uses & limitations
More information needed
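A minimal inference sketch (assuming the image processor was saved with the checkpoint, as the Trainer normally does; the file path is a placeholder):
```python
from transformers import pipeline

lesion_classifier = pipeline(
    "image-classification",
    model="ALM-AHME/beit-large-patch16-224-finetuned-Lesion-Classification-HAM10000-AH-60-20-20",
)
print(lesion_classifier("lesion.jpg"))  # placeholder path to a dermatoscopic image
```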
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.9
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9688 | 1.0 | 122 | 1.8425 | 0.2775 |
| 1.4822 | 2.0 | 244 | 1.3833 | 0.5457 |
| 1.1239 | 3.0 | 366 | 0.9321 | 0.6680 |
| 0.8686 | 4.0 | 488 | 0.6691 | 0.7698 |
| 0.5234 | 5.0 | 610 | 0.4872 | 0.8335 |
| 0.5246 | 6.0 | 732 | 0.3586 | 0.8736 |
| 0.3691 | 7.0 | 854 | 0.3134 | 0.8993 |
| 0.4708 | 8.0 | 976 | 0.2069 | 0.9394 |
| 0.1694 | 9.0 | 1098 | 0.1832 | 0.9414 |
| 0.2749 | 10.0 | 1220 | 0.1198 | 0.9640 |
| 0.1777 | 11.0 | 1342 | 0.0845 | 0.9733 |
| 0.1529 | 12.0 | 1464 | 0.0434 | 0.9908 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jabedbd/lol-diffusions
|
jabedbd
| 2023-07-07T20:15:14Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-07T18:23:11Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
|
imtiaz114/bert-finetuned-ner
|
imtiaz114
| 2023-07-07T20:14:46Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-07T06:36:59Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: imtiaz114/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# imtiaz114/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5417
- Validation Loss: 0.4322
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 231, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.5417 | 0.4322 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Jade1211/textual_inversion_baby
|
Jade1211
| 2023-07-07T20:10:18Z | 5 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-07T18:08:50Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - Jade1211/textual_inversion_baby
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
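For reference, loading such weights with `diffusers` typically looks like the sketch below; the prompt token `<baby>` is a hypothetical placeholder — use the token stored in this repo's learned embeddings:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the textual inversion embedding from this repository
pipe.load_textual_inversion("Jade1211/textual_inversion_baby")

# "<baby>" is a hypothetical placeholder token
image = pipe("a portrait photo of <baby>, soft lighting").images[0]
image.save("baby.png")
```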
|
GalSarid/setfit-movie-genre-sentence-t5-xl
|
GalSarid
| 2023-07-07T20:04:50Z | 4 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"t5",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-04T21:34:54Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# GalSarid/setfit-movie-genre-sentence-t5-xl
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
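For illustration, the two steps above correspond roughly to the following training sketch (after installing `setfit` as shown in the Usage section; the few-shot data and hyperparameters here are hypothetical, not the ones used for this checkpoint):
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Tiny, hypothetical few-shot dataset of movie descriptions labelled by genre id
train_ds = Dataset.from_dict({
    "text": ["A detective hunts a serial killer.", "Two strangers fall in love in Paris."],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/sentence-t5-xl")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # contrastive objective used in step 1
    num_iterations=20,                # number of contrastive pairs generated per example
    batch_size=16,
)
trainer.train()  # runs step 1 (contrastive fine-tuning) and step 2 (fits the classification head)
```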
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("GalSarid/setfit-movie-genre-sentence-t5-xl")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
whackett/textual_inversion_cat
|
whackett
| 2023-07-07T19:53:25Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-07T17:12:21Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - whackett/textual_inversion_cat
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
Humayoun/ThesisDonutOPD2
|
Humayoun
| 2023-07-07T19:50:17Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-07-07T08:48:09Z |
---
license: mit
base_model: humayoun/ThesisDonutOPD
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: ThesisDonutOPD2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ThesisDonutOPD2
This model is a fine-tuned version of [humayoun/ThesisDonutOPD](https://huggingface.co/humayoun/ThesisDonutOPD) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gbellamy/Reinforce-Pixelcopter-PLE-v0
|
gbellamy
| 2023-07-07T19:48:12Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T00:29:22Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 60.90 +/- 54.12
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
c72599/ppo-PyramidsRND
|
c72599
| 2023-07-07T19:46:51Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-07T19:46:26Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: c72599/ppo-PyramidsRND
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ashna022/ppo-lunarLander
|
ashna022
| 2023-07-07T19:46:48Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T19:31:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 238.84 +/- 66.23
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor
from huggingface_sb3 import load_from_hub

# Get model
repo_id = "ashna022/ppo-lunarLander"
filename = "ppo-lunarLander"
checkpoint = load_from_hub(repo_id, filename)
# Override schedule objects so the checkpoint loads across gym/SB3 versions
custom_objects = {
    "learning_rate": 0.0,
    "lr_schedule": lambda _: 0.0,
    "clip_range": lambda _: 0.0,
}
model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)

# Evaluate
eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```
|
aaaririri/distilbert-base-uncased-finetuned-cola
|
aaaririri
| 2023-07-07T19:36:26Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T18:10:42Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: aaaririri/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aaaririri/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1971
- Validation Loss: 0.5365
- Train Matthews Correlation: 0.5219
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5210 | 0.4488 | 0.4980 | 0 |
| 0.3234 | 0.4759 | 0.4939 | 1 |
| 0.1971 | 0.5365 | 0.5219 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jarguello76/distilhubert-finetuned-gtzan
|
jarguello76
| 2023-07-07T19:27:59Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-07T19:24:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5391
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
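A minimal inference sketch for genre prediction (assumes the feature extractor was saved with the checkpoint; the file path is a placeholder):
```python
from transformers import pipeline

genre_classifier = pipeline(
    "audio-classification",
    model="jarguello76/distilhubert-finetuned-gtzan",
)
print(genre_classifier("song.wav"))  # placeholder path to a ~30 s music clip
```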
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1051 | 1.0 | 113 | 2.1497 | 0.37 |
| 1.8425 | 2.0 | 226 | 1.8414 | 0.51 |
| 1.529 | 3.0 | 339 | 1.4372 | 0.64 |
| 1.1083 | 4.0 | 452 | 1.1113 | 0.74 |
| 0.8602 | 5.0 | 565 | 0.8216 | 0.79 |
| 0.5928 | 6.0 | 678 | 0.7559 | 0.77 |
| 0.3821 | 7.0 | 791 | 0.6388 | 0.81 |
| 0.4732 | 8.0 | 904 | 0.5476 | 0.86 |
| 0.303 | 9.0 | 1017 | 0.5160 | 0.9 |
| 0.3458 | 10.0 | 1130 | 0.5391 | 0.87 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
XERO5000/my-pet-dog-and-cat
|
XERO5000
| 2023-07-07T19:03:34Z | 5 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-07T18:59:55Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-and-Cat Dreambooth model trained by XERO5000 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: CVRGU39
Sample pictures of this concept:

|
andressrg/textual_inversion_meal_3
|
andressrg
| 2023-07-07T18:56:04Z | 30 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-07T18:43:57Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - andressrg/textual_inversion_meal_3
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
Manab/donut-base-my_model_1_new
|
Manab
| 2023-07-07T18:41:04Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-07-03T03:56:20Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-my_model_1_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-my_model_1_new
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5504 | 0.69 | 50 | 0.3411 |
| 0.6696 | 1.39 | 100 | 0.3411 |
| 0.6016 | 2.08 | 150 | 0.3411 |
| 0.5543 | 2.78 | 200 | 0.3411 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
cczhong/internlm-chat-7b-4bit-gptq-safetensor
|
cczhong
| 2023-07-07T18:38:20Z | 6 | 3 |
transformers
|
[
"transformers",
"internlm",
"feature-extraction",
"custom_code",
"region:us"
] |
feature-extraction
| 2023-07-07T17:51:49Z |
Please use cczhong/internlm-chat-7b-4bit-gptq until I figure out why this repository does not work.
# How to use
You need to install the patched fork with `pip install git+https://github.com/cczhong11/AutoGPTQ` until https://github.com/PanQiWei/AutoGPTQ/pull/189 is merged.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
quantized_model_dir = "cczhong/internlm-chat-7b-4bit-gptq-safetensor"
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True)
model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, device="cuda:0",trust_remote_code=True)
response, history = model.chat(tokenizer, "你好", history=[])
```
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went-g025
|
jordyvl
| 2023-07-07T18:32:59Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T16:23:14Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went-g025
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went-g025
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0715
- Accuracy: 0.7275
- Exit 0 Accuracy: 0.1125
- Exit 1 Accuracy: 0.1525
- Exit 2 Accuracy: 0.185
- Exit 3 Accuracy: 0.0625
- Exit 4 Accuracy: 0.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 288
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| No log | 0.72 | 2 | 2.7601 | 0.11 | 0.1025 | 0.0675 | 0.0825 | 0.0625 | 0.0625 |
| No log | 1.72 | 4 | 2.7312 | 0.115 | 0.1025 | 0.065 | 0.085 | 0.0625 | 0.0625 |
| No log | 2.72 | 6 | 2.6966 | 0.1325 | 0.1025 | 0.06 | 0.0975 | 0.0625 | 0.0625 |
| No log | 3.72 | 8 | 2.6638 | 0.1725 | 0.1375 | 0.055 | 0.115 | 0.0625 | 0.0625 |
| No log | 4.72 | 10 | 2.6223 | 0.195 | 0.1375 | 0.0575 | 0.1125 | 0.0625 | 0.0625 |
| No log | 5.72 | 12 | 2.5770 | 0.215 | 0.13 | 0.08 | 0.115 | 0.0625 | 0.0625 |
| No log | 6.72 | 14 | 2.5537 | 0.21 | 0.12 | 0.08 | 0.1125 | 0.0625 | 0.0625 |
| No log | 7.72 | 16 | 2.5364 | 0.22 | 0.1275 | 0.09 | 0.1175 | 0.0625 | 0.0625 |
| No log | 8.72 | 18 | 2.5008 | 0.2475 | 0.125 | 0.095 | 0.12 | 0.0625 | 0.0625 |
| No log | 9.72 | 20 | 2.4477 | 0.2675 | 0.115 | 0.0925 | 0.115 | 0.0625 | 0.0625 |
| No log | 10.72 | 22 | 2.3972 | 0.3075 | 0.115 | 0.12 | 0.1175 | 0.0625 | 0.0625 |
| No log | 11.72 | 24 | 2.3565 | 0.32 | 0.1125 | 0.11 | 0.1125 | 0.0625 | 0.0625 |
| No log | 12.72 | 26 | 2.2957 | 0.3425 | 0.1075 | 0.115 | 0.115 | 0.0625 | 0.0625 |
| No log | 13.72 | 28 | 2.2355 | 0.3575 | 0.105 | 0.115 | 0.1175 | 0.0625 | 0.0625 |
| No log | 14.72 | 30 | 2.1916 | 0.3625 | 0.1075 | 0.125 | 0.1275 | 0.0625 | 0.0625 |
| No log | 15.72 | 32 | 2.1467 | 0.3825 | 0.1075 | 0.13 | 0.1225 | 0.0625 | 0.0625 |
| No log | 16.72 | 34 | 2.0775 | 0.405 | 0.1075 | 0.1375 | 0.1225 | 0.0625 | 0.0625 |
| No log | 17.72 | 36 | 2.0176 | 0.435 | 0.1125 | 0.1375 | 0.1225 | 0.0625 | 0.0625 |
| No log | 18.72 | 38 | 1.9539 | 0.4725 | 0.115 | 0.1375 | 0.1225 | 0.0625 | 0.0625 |
| No log | 19.72 | 40 | 1.9007 | 0.485 | 0.105 | 0.14 | 0.1225 | 0.0625 | 0.0625 |
| No log | 20.72 | 42 | 1.8501 | 0.52 | 0.1075 | 0.14 | 0.1275 | 0.0625 | 0.0625 |
| No log | 21.72 | 44 | 1.7795 | 0.5475 | 0.1075 | 0.14 | 0.125 | 0.0625 | 0.0625 |
| No log | 22.72 | 46 | 1.7139 | 0.565 | 0.11 | 0.14 | 0.1275 | 0.0625 | 0.0625 |
| No log | 23.72 | 48 | 1.6892 | 0.57 | 0.1125 | 0.14 | 0.13 | 0.0625 | 0.0625 |
| No log | 24.72 | 50 | 1.6345 | 0.5875 | 0.11 | 0.1425 | 0.1275 | 0.0625 | 0.0625 |
| No log | 25.72 | 52 | 1.5737 | 0.5975 | 0.1125 | 0.1475 | 0.1275 | 0.0625 | 0.0625 |
| No log | 26.72 | 54 | 1.5422 | 0.6 | 0.1125 | 0.1475 | 0.135 | 0.0625 | 0.0625 |
| No log | 27.72 | 56 | 1.5227 | 0.6125 | 0.115 | 0.1475 | 0.1375 | 0.0625 | 0.0625 |
| No log | 28.72 | 58 | 1.4674 | 0.64 | 0.115 | 0.1475 | 0.1425 | 0.0625 | 0.0625 |
| No log | 29.72 | 60 | 1.4152 | 0.65 | 0.115 | 0.1475 | 0.1425 | 0.0625 | 0.0625 |
| No log | 30.72 | 62 | 1.4002 | 0.6575 | 0.115 | 0.1475 | 0.145 | 0.0625 | 0.0625 |
| No log | 31.72 | 64 | 1.3922 | 0.6625 | 0.115 | 0.145 | 0.145 | 0.0625 | 0.0625 |
| No log | 32.72 | 66 | 1.3489 | 0.6725 | 0.115 | 0.145 | 0.1475 | 0.0625 | 0.0625 |
| No log | 33.72 | 68 | 1.3166 | 0.68 | 0.115 | 0.1475 | 0.1475 | 0.0625 | 0.0625 |
| No log | 34.72 | 70 | 1.3028 | 0.685 | 0.1125 | 0.1475 | 0.1475 | 0.0625 | 0.0625 |
| No log | 35.72 | 72 | 1.2779 | 0.6975 | 0.1125 | 0.1475 | 0.1475 | 0.0625 | 0.0625 |
| No log | 36.72 | 74 | 1.2494 | 0.705 | 0.1125 | 0.1475 | 0.15 | 0.0625 | 0.0625 |
| No log | 37.72 | 76 | 1.2366 | 0.7025 | 0.1125 | 0.1475 | 0.15 | 0.0625 | 0.0625 |
| No log | 38.72 | 78 | 1.2214 | 0.705 | 0.1125 | 0.15 | 0.1525 | 0.0625 | 0.0625 |
| No log | 39.72 | 80 | 1.1999 | 0.7175 | 0.1125 | 0.1525 | 0.1525 | 0.0625 | 0.0625 |
| No log | 40.72 | 82 | 1.1793 | 0.7125 | 0.1125 | 0.1525 | 0.1575 | 0.0625 | 0.0625 |
| No log | 41.72 | 84 | 1.1680 | 0.7225 | 0.1125 | 0.1525 | 0.1575 | 0.0625 | 0.0625 |
| No log | 42.72 | 86 | 1.1625 | 0.7225 | 0.1125 | 0.1525 | 0.155 | 0.0625 | 0.0625 |
| No log | 43.72 | 88 | 1.1471 | 0.7175 | 0.1125 | 0.1525 | 0.1575 | 0.0625 | 0.0625 |
| No log | 44.72 | 90 | 1.1232 | 0.7275 | 0.1125 | 0.1525 | 0.1625 | 0.0625 | 0.0625 |
| No log | 45.72 | 92 | 1.1188 | 0.7275 | 0.1125 | 0.1525 | 0.1625 | 0.0625 | 0.0625 |
| No log | 46.72 | 94 | 1.1196 | 0.7275 | 0.1125 | 0.1525 | 0.1625 | 0.0625 | 0.0625 |
| No log | 47.72 | 96 | 1.1133 | 0.725 | 0.1125 | 0.15 | 0.1625 | 0.0625 | 0.0625 |
| No log | 48.72 | 98 | 1.1104 | 0.725 | 0.115 | 0.15 | 0.1625 | 0.0625 | 0.0625 |
| No log | 49.72 | 100 | 1.1047 | 0.73 | 0.115 | 0.15 | 0.165 | 0.0625 | 0.0625 |
| No log | 50.72 | 102 | 1.0973 | 0.7225 | 0.115 | 0.1525 | 0.17 | 0.0625 | 0.0625 |
| No log | 51.72 | 104 | 1.0866 | 0.7225 | 0.115 | 0.1525 | 0.175 | 0.0625 | 0.0625 |
| No log | 52.72 | 106 | 1.0845 | 0.73 | 0.1125 | 0.1525 | 0.1725 | 0.0625 | 0.0625 |
| No log | 53.72 | 108 | 1.0836 | 0.7275 | 0.1125 | 0.1525 | 0.1725 | 0.0625 | 0.0625 |
| No log | 54.72 | 110 | 1.0822 | 0.7225 | 0.1125 | 0.1525 | 0.1725 | 0.0625 | 0.0625 |
| No log | 55.72 | 112 | 1.0808 | 0.7275 | 0.1125 | 0.1525 | 0.18 | 0.0625 | 0.0625 |
| No log | 56.72 | 114 | 1.0766 | 0.725 | 0.1125 | 0.1525 | 0.18 | 0.0625 | 0.0625 |
| No log | 57.72 | 116 | 1.0738 | 0.73 | 0.1125 | 0.1525 | 0.1825 | 0.0625 | 0.0625 |
| No log | 58.72 | 118 | 1.0721 | 0.7275 | 0.1125 | 0.1525 | 0.185 | 0.0625 | 0.0625 |
| No log | 59.72 | 120 | 1.0715 | 0.7275 | 0.1125 | 0.1525 | 0.185 | 0.0625 | 0.0625 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
VK246/IC_ver1_coco_swin_gpt2_5pc_2e
|
VK246
| 2023-07-07T18:02:42Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:coco",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-07-07T10:28:59Z |
---
tags:
- generated_from_trainer
datasets:
- coco
metrics:
- rouge
- bleu
model-index:
- name: IC_ver1_coco_swin_gpt2_5pc_2e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IC_ver1_coco_swin_gpt2_5pc_2e
This model is a fine-tuned version of [](https://huggingface.co/) on the coco dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9700
- Rouge1: 36.317
- Rouge2: 12.0175
- Rougel: 33.7883
- Rougelsum: 33.7342
- Bleu: 6.5392
- Gen Len: 11.2887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 100
- eval_batch_size: 100
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:------:|:-------:|
| 1.29 | 0.71 | 200 | 1.0273 | 33.6256 | 10.3076 | 31.0868 | 31.0562 | 5.5088 | 11.2887 |
| 1.0193 | 1.41 | 400 | 0.9700 | 36.317 | 12.0175 | 33.7883 | 33.7342 | 6.5392 | 11.2887 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
HeshamMamdouh/bart-large-cnn-sum-fine-tuned
|
HeshamMamdouh
| 2023-07-07T17:51:08Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-07T17:50:13Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: bart-large-cnn-sum-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-sum-fine-tuned
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9008
- Validation Loss: 1.3671
- Train Lr: 7.3575857e-06
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
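Usage is not documented here; the snippet below is only a rough inference sketch, assuming the TensorFlow weights load with `TFAutoModelForSeq2SeqLM` and that the model is still used for abstractive summarization like its `bart-large-cnn` base:
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "HeshamMamdouh/bart-large-cnn-sum-fine-tuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "..."  # placeholder: the document to summarize
inputs = tokenizer(text, return_tensors="tf", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_length=142, min_length=56, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```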
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 7.3575857e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Lr | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.9649 | 1.6269 | 2e-05 | 0 |
| 1.5294 | 1.4905 | 2e-05 | 1 |
| 1.4205 | 1.4433 | 2e-05 | 2 |
| 1.3544 | 1.4030 | 2e-05 | 3 |
| 1.3136 | 1.4009 | 2e-05 | 4 |
| 1.2611 | 1.3879 | 2e-05 | 5 |
| 1.2306 | 1.3604 | 2e-05 | 6 |
| 1.1927 | 1.3526 | 2e-05 | 7 |
| 1.1622 | 1.3445 | 2e-05 | 8 |
| 1.1242 | 1.3454 | 2e-05 | 9 |
| 1.0950 | 1.3505 | 1.8096747e-05 | 10 |
| 1.0654 | 1.3528 | 1.6374614e-05 | 11 |
| 1.0412 | 1.3494 | 1.4816363e-05 | 12 |
| 1.0166 | 1.3356 | 1.34063985e-05 | 13 |
| 0.9902 | 1.3303 | 1.213061e-05 | 14 |
| 0.9721 | 1.3331 | 1.09762295e-05 | 15 |
| 0.9436 | 1.3353 | 9.931703e-06 | 16 |
| 0.9317 | 1.3506 | 8.986576e-06 | 17 |
| 0.9211 | 1.3422 | 8.13139e-06 | 18 |
| 0.9008 | 1.3671 | 7.3575857e-06 | 19 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.13.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
susnato/whisper-small-dv
|
susnato
| 2023-07-07T17:50:09Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-07T16:14:04Z |
---
language:
- dv
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 15.130229161595437
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1878
- Wer Ortho: 66.1676
- Wer: 15.1302
## Model description
More information needed
## Intended uses & limitations
More information needed
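A minimal transcription sketch (the file path is a placeholder, and the `language`/`task` generation arguments are an assumption to force Dhivehi transcription):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="susnato/whisper-small-dv",
)
# "sample.mp3" is a placeholder path to a Dhivehi speech recording
result = asr("sample.mp3", generate_kwargs={"language": "dv", "task": "transcribe"})
print(result["text"])
```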
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1471 | 1.63 | 500 | 0.1878 | 66.1676 | 15.1302 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.2
|
c72599/ppo-SnowballTarget
|
c72599
| 2023-07-07T17:38:40Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-07T17:38:35Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: c72599/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-fp16
|
TheBloke
| 2023-07-07T17:15:13Z | 1,563 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"arxiv:2304.12244",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-07T17:12:09Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# WizardLM's WizardLM 13B V1.1 fp16
These are fp16 pytorch format model files for [WizardLM's WizardLM 13B V1.1](https://huggingface.co/WizardLM/WizardLM-13B-V1.1) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install einops
```
Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True` will automatically set the `scale` parameter from the configured `max_position_embeddings`. Eg for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check to confirm that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf` and add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
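For readers who want to set up something comparable with PEFT, the list above maps roughly onto a `LoraConfig` like the sketch below (an illustrative reconstruction, not the author's actual training script):

```python
# Rough reconstruction of the LoRA settings listed above, using PEFT.
from peft import LoraConfig

lora_config = LoraConfig(
    r=4,                       # Rank = 4
    lora_alpha=8,              # Alpha = 8
    lora_dropout=0.0,          # no dropout
    bias="none",               # no bias
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
# The optimizer side (AdamW betas=(0.9, 0.99), eps=1e-5, weight_decay=0.1, lr=3e-4)
# would be configured separately in the training loop or TrainingArguments.
```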
# Original model card: WizardLM's WizardLM 13B V1.1
This is the **Full-Weight** of WizardLM-13B V1.1 model.
**Repository**: https://github.com/nlpxucan/WizardLM
**Twitter**: https://twitter.com/WizardLM_AI/status/1677282955490918401
- 🔥🔥🔥 [7/7/2023] We released **WizardLM V1.1** models. The **WizardLM-13B-V1.1** is here ([Demo_13B-V1.1](https://e8a06366ccd1c4d1.gradio.app), [Demo_13B-V1.1_bak-1](https://59da107262a25764.gradio.app), [Demo_13B-V1.1_bak-2](https://dfc5113f66739c80.gradio.app), [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)). **WizardLM-7B-V1.1**, **WizardLM-30B-V1.1**, and **WizardLM-65B-V1.1** are coming soon. Please check out the [Full Model Weights](https://huggingface.co/WizardLM) and [paper](https://arxiv.org/abs/2304.12244).
- 🔥🔥🔥 [7/7/2023] The **WizardLM-13B-V1.1** achieves **6.74** on the [MT-Bench Leaderboard](https://chat.lmsys.org/?leaderboard), **86.32%** on the [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/), and **99.3%** on [WizardLM Eval](https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/data/WizardLM_testset.jsonl). (Note: the MT-Bench and AlpacaEval results are from our own self-testing; we will push updates and request official review. All tests were completed under the benchmarks' official settings.)
|
A-Alsad/Falcon-40B-Instruct-SAT
|
A-Alsad
| 2023-07-07T17:07:27Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-07T16:49:19Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
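For convenience, the 4-bit config listed above can be expressed as a `transformers` `BitsAndBytesConfig`; the snippet below is an illustrative reconstruction, not the original training script:

```python
# Sketch: the 4-bit quantization config listed above as a BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
```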
### Framework versions
- PEFT 0.4.0.dev0
- PEFT 0.4.0.dev0
|
tyavika/LR1E4-BS8-Distilbert-QA-Pytorch-FULL
|
tyavika
| 2023-07-07T17:00:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-07T14:30:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: LR1E4-BS8-Distilbert-QA-Pytorch-FULL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LR1E4-BS8-Distilbert-QA-Pytorch-FULL
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
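For reference, the hyperparameters above correspond roughly to a `transformers` `TrainingArguments` like the sketch below (a reconstruction for illustration only; the output directory is an assumption and the original training script is not published here):

```python
# Illustrative sketch of the listed hyperparameters as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="LR1E4-BS8-Distilbert-QA-Pytorch-FULL",  # assumed output path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```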
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.4302 | 1.0 | 3290 | 1.2902 |
| 1.0058 | 2.0 | 6580 | 1.2750 |
| 0.6711 | 3.0 | 9870 | 1.4631 |
| 0.4224 | 4.0 | 13160 | 1.7269 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
kmin940/finetuning-sentiment-model-3000-samples
|
kmin940
| 2023-07-07T16:47:41Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T15:41:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8726114649681529
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6766
- Accuracy: 0.8667
- F1: 0.8726
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SarielSinLuo/bert-large-uncased-finetuned-mrpc
|
SarielSinLuo
| 2023-07-07T16:36:44Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T16:35:29Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-large-uncased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8651960784313726
- name: F1
type: f1
value: 0.9043478260869565
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-mrpc
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6561
- Accuracy: 0.8652
- F1: 0.9043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 230 | 0.3876 | 0.8186 | 0.8604 |
| No log | 2.0 | 460 | 0.4291 | 0.8480 | 0.8967 |
| 0.4195 | 3.0 | 690 | 0.6561 | 0.8652 | 0.9043 |
| 0.4195 | 4.0 | 920 | 0.7422 | 0.8554 | 0.8967 |
| 0.0948 | 5.0 | 1150 | 0.8161 | 0.8554 | 0.8956 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pobierz69/model-6b-read-desc
|
pobierz69
| 2023-07-07T16:27:23Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gptj",
"text-generation",
"text generation",
"conversational",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-07T15:28:14Z |
---
license: creativeml-openrail-m
language:
- en
thumbnail:
tags:
- text generation
- conversational
inference: false
---
# Pygmalion 6B (mirror)
THE ORIGINAL MODEL CAN BE FOUND [HERE](https://huggingface.co/PygmalionAI/pygmalion-6b)
## Model description
Pygmalion 6B is a proof-of-concept dialogue model based on EleutherAI's [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B).
**Warning:** This model is **NOT** suitable for use by minors. It **will** output X-rated content under certain circumstances.
## Training data
The fine-tuning dataset consisted of 56MB of dialogue data gathered from multiple sources, which includes both real _and_ partially machine-generated conversations.
## Training procedure
Model weights were initialized from the `uft-6b` ConvoGPT model made available in [this commit](https://huggingface.co/hakurei/convogpt/tree/41b67bfddb6cd97070ffddf708e9720c9cb8d224/6b-uft).
The model was then further fine-tuned on ~48.5 million tokens for ~5k steps on 4 NVIDIA A40s using DeepSpeed.
## Intended use
### The easy way
We provide a notebook with a Gradio UI for playing around with the model without having to manually format inputs. This notebook can be found [here](https://github.com/PygmalionAI/gradio-ui/blob/master/notebooks/GPU.ipynb).
### The manual way
The model can be used as a regular text generation model, but it'll perform best if the input prompt adheres to the following format:
```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
<START>
[DIALOGUE HISTORY]
You: [Your input message here]
[CHARACTER]:
```
Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is chat history so the model can have some conversational context to draw from. Ideally it'll be pairs of messages like:
```
[CHARACTER]: [some dialogue here]
You: [your response to the dialogue above]
```
Apart from chat history, you can also just add example conversations in `[DIALOGUE HISTORY]` to show how the character should speak - ideally at the beginning, so it doesn't get confused as to what's conversation history vs. character definition.
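As a hedged illustration (not an official recommendation), the format above can be driven from plain `transformers` like this, loading the original repo that this mirror points to; the character name, persona and sampling settings are placeholders:

```python
# Minimal sketch of running the prompt format above with transformers.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "PygmalionAI/pygmalion-6b"  # original repo referenced at the top of this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "Assistant's Persona: A friendly, curious conversational partner.\n"
    "<START>\n"
    "You: Hi! What are you up to today?\n"
    "Assistant:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```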
## Known issues
We haven't played around with the model enough to enumerate them. Feel free to give us some feedback!
|
Nebulon/colab
|
Nebulon
| 2023-07-07T16:25:53Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-07T14:37:14Z |
---
license: creativeml-openrail-m
---
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went-g050
|
jordyvl
| 2023-07-07T16:22:33Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T14:12:59Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went-g050
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-07-07_went-g050
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0916
- Accuracy: 0.715
- Exit 0 Accuracy: 0.1125
- Exit 1 Accuracy: 0.155
- Exit 2 Accuracy: 0.1925
- Exit 3 Accuracy: 0.1025
- Exit 4 Accuracy: 0.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 288
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| No log | 0.72 | 2 | 2.7602 | 0.11 | 0.1075 | 0.0675 | 0.0825 | 0.0625 | 0.0625 |
| No log | 1.72 | 4 | 2.7311 | 0.115 | 0.1125 | 0.065 | 0.09 | 0.0625 | 0.0625 |
| No log | 2.72 | 6 | 2.6960 | 0.135 | 0.1125 | 0.06 | 0.1 | 0.0625 | 0.0625 |
| No log | 3.72 | 8 | 2.6634 | 0.1675 | 0.12 | 0.0575 | 0.12 | 0.0625 | 0.0625 |
| No log | 4.72 | 10 | 2.6183 | 0.1925 | 0.12 | 0.0575 | 0.115 | 0.0625 | 0.0625 |
| No log | 5.72 | 12 | 2.5735 | 0.205 | 0.105 | 0.08 | 0.12 | 0.0625 | 0.0625 |
| No log | 6.72 | 14 | 2.5417 | 0.22 | 0.0975 | 0.08 | 0.1275 | 0.0625 | 0.0625 |
| No log | 7.72 | 16 | 2.5080 | 0.245 | 0.0975 | 0.0925 | 0.1275 | 0.0625 | 0.0625 |
| No log | 8.72 | 18 | 2.4538 | 0.2875 | 0.095 | 0.0975 | 0.13 | 0.0625 | 0.0625 |
| No log | 9.72 | 20 | 2.4249 | 0.2975 | 0.095 | 0.125 | 0.13 | 0.0625 | 0.0625 |
| No log | 10.72 | 22 | 2.3646 | 0.3275 | 0.095 | 0.12 | 0.1175 | 0.0625 | 0.0625 |
| No log | 11.72 | 24 | 2.3061 | 0.33 | 0.1 | 0.1175 | 0.1125 | 0.0625 | 0.0625 |
| No log | 12.72 | 26 | 2.2542 | 0.36 | 0.0975 | 0.12 | 0.1175 | 0.0625 | 0.0625 |
| No log | 13.72 | 28 | 2.2104 | 0.37 | 0.0975 | 0.125 | 0.12 | 0.0625 | 0.0625 |
| No log | 14.72 | 30 | 2.1525 | 0.3875 | 0.1025 | 0.13 | 0.1225 | 0.0625 | 0.0625 |
| No log | 15.72 | 32 | 2.1066 | 0.41 | 0.1 | 0.14 | 0.1225 | 0.0625 | 0.0625 |
| No log | 16.72 | 34 | 2.0487 | 0.4275 | 0.1025 | 0.14 | 0.1275 | 0.0625 | 0.0625 |
| No log | 17.72 | 36 | 1.9907 | 0.4575 | 0.1025 | 0.1375 | 0.125 | 0.0625 | 0.0625 |
| No log | 18.72 | 38 | 1.9241 | 0.4925 | 0.1 | 0.1425 | 0.1275 | 0.0625 | 0.0625 |
| No log | 19.72 | 40 | 1.8676 | 0.51 | 0.105 | 0.14 | 0.13 | 0.0625 | 0.0625 |
| No log | 20.72 | 42 | 1.8174 | 0.545 | 0.1075 | 0.14 | 0.13 | 0.0625 | 0.0625 |
| No log | 21.72 | 44 | 1.7503 | 0.5725 | 0.11 | 0.1425 | 0.1325 | 0.0625 | 0.0625 |
| No log | 22.72 | 46 | 1.6928 | 0.575 | 0.1075 | 0.1425 | 0.1325 | 0.0625 | 0.0625 |
| No log | 23.72 | 48 | 1.6756 | 0.5775 | 0.105 | 0.1475 | 0.135 | 0.0625 | 0.0625 |
| No log | 24.72 | 50 | 1.6267 | 0.585 | 0.11 | 0.1475 | 0.14 | 0.0625 | 0.0625 |
| No log | 25.72 | 52 | 1.5650 | 0.5925 | 0.11 | 0.1475 | 0.1425 | 0.0625 | 0.0625 |
| No log | 26.72 | 54 | 1.5313 | 0.61 | 0.115 | 0.1475 | 0.1475 | 0.0625 | 0.0625 |
| No log | 27.72 | 56 | 1.5075 | 0.605 | 0.115 | 0.145 | 0.1525 | 0.0625 | 0.0625 |
| No log | 28.72 | 58 | 1.4637 | 0.6175 | 0.115 | 0.145 | 0.1525 | 0.0625 | 0.0625 |
| No log | 29.72 | 60 | 1.4198 | 0.6425 | 0.115 | 0.145 | 0.1525 | 0.0625 | 0.0625 |
| No log | 30.72 | 62 | 1.4085 | 0.6325 | 0.115 | 0.1475 | 0.1525 | 0.0625 | 0.0625 |
| No log | 31.72 | 64 | 1.3826 | 0.645 | 0.115 | 0.1475 | 0.155 | 0.0625 | 0.0625 |
| No log | 32.72 | 66 | 1.3459 | 0.665 | 0.1125 | 0.1475 | 0.155 | 0.0625 | 0.0625 |
| No log | 33.72 | 68 | 1.3265 | 0.685 | 0.1125 | 0.15 | 0.1575 | 0.0625 | 0.0625 |
| No log | 34.72 | 70 | 1.3064 | 0.6825 | 0.1125 | 0.15 | 0.16 | 0.0625 | 0.0625 |
| No log | 35.72 | 72 | 1.2841 | 0.6925 | 0.1125 | 0.1525 | 0.16 | 0.0625 | 0.0625 |
| No log | 36.72 | 74 | 1.2608 | 0.695 | 0.115 | 0.1525 | 0.1625 | 0.0625 | 0.0625 |
| No log | 37.72 | 76 | 1.2390 | 0.6975 | 0.115 | 0.1525 | 0.1775 | 0.0625 | 0.0625 |
| No log | 38.72 | 78 | 1.2357 | 0.6975 | 0.115 | 0.1525 | 0.1775 | 0.0675 | 0.0625 |
| No log | 39.72 | 80 | 1.2216 | 0.7025 | 0.115 | 0.1525 | 0.185 | 0.0675 | 0.0625 |
| No log | 40.72 | 82 | 1.2015 | 0.7025 | 0.115 | 0.1525 | 0.185 | 0.07 | 0.0625 |
| No log | 41.72 | 84 | 1.1855 | 0.7025 | 0.115 | 0.155 | 0.1925 | 0.07 | 0.0625 |
| No log | 42.72 | 86 | 1.1758 | 0.71 | 0.115 | 0.155 | 0.1875 | 0.0725 | 0.0625 |
| No log | 43.72 | 88 | 1.1656 | 0.7125 | 0.115 | 0.155 | 0.1875 | 0.0725 | 0.0625 |
| No log | 44.72 | 90 | 1.1476 | 0.715 | 0.1175 | 0.155 | 0.185 | 0.0825 | 0.0625 |
| No log | 45.72 | 92 | 1.1402 | 0.71 | 0.1175 | 0.155 | 0.1875 | 0.0875 | 0.0625 |
| No log | 46.72 | 94 | 1.1426 | 0.705 | 0.1175 | 0.155 | 0.1875 | 0.085 | 0.0625 |
| No log | 47.72 | 96 | 1.1436 | 0.7075 | 0.115 | 0.155 | 0.1925 | 0.085 | 0.0625 |
| No log | 48.72 | 98 | 1.1371 | 0.71 | 0.115 | 0.155 | 0.19 | 0.085 | 0.0625 |
| No log | 49.72 | 100 | 1.1229 | 0.72 | 0.1125 | 0.1525 | 0.19 | 0.0925 | 0.0625 |
| No log | 50.72 | 102 | 1.1113 | 0.715 | 0.1125 | 0.1525 | 0.1875 | 0.095 | 0.0625 |
| No log | 51.72 | 104 | 1.1014 | 0.7225 | 0.1125 | 0.1525 | 0.1875 | 0.095 | 0.0625 |
| No log | 52.72 | 106 | 1.0988 | 0.7225 | 0.1125 | 0.1525 | 0.1875 | 0.0975 | 0.0625 |
| No log | 53.72 | 108 | 1.0996 | 0.72 | 0.1125 | 0.155 | 0.1875 | 0.1 | 0.0625 |
| No log | 54.72 | 110 | 1.0991 | 0.715 | 0.1125 | 0.155 | 0.19 | 0.1 | 0.0625 |
| No log | 55.72 | 112 | 1.0979 | 0.7175 | 0.1125 | 0.155 | 0.1875 | 0.1025 | 0.0625 |
| No log | 56.72 | 114 | 1.0951 | 0.71 | 0.1125 | 0.155 | 0.1925 | 0.1025 | 0.0625 |
| No log | 57.72 | 116 | 1.0933 | 0.71 | 0.1125 | 0.155 | 0.1925 | 0.1025 | 0.0625 |
| No log | 58.72 | 118 | 1.0922 | 0.71 | 0.1125 | 0.155 | 0.1925 | 0.1025 | 0.0625 |
| No log | 59.72 | 120 | 1.0916 | 0.715 | 0.1125 | 0.155 | 0.1925 | 0.1025 | 0.0625 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
amal94/ppo-CartPole-v1
|
amal94
| 2023-07-07T16:15:10Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T15:40:07Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -80.13 +/- 19.56
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 1000000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'amal94/ppo-CartPole-v1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
mitra-mir/setfit_model_Independence_labelintl_epochs2
|
mitra-mir
| 2023-07-07T16:11:15Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-07T16:11:02Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# mitra-mir/setfit_model_Independence_labelintl_epochs2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mitra-mir/setfit_model_Independence_labelintl_epochs2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mitra-mir/setfit_model_Independence_labelintl_epochs2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 177 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 354,
"warmup_steps": 36,
"weight_decay": 0.01
}
```
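For readers who want to reproduce a similar run, the parameters above correspond roughly to a `model.fit(...)` call like the sketch below (the base model and example pairs are placeholders, not the original training data):

```python
# Hedged sketch of a training call matching the parameters above.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")  # assumed base model
train_examples = [InputExample(texts=["sentence a", "sentence b"], label=0.8)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    warmup_steps=36,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    scheduler="WarmupLinear",
)
```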
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
rdmpage/autotrain-inat2018-72960139083
|
rdmpage
| 2023-07-07T16:03:39Z | 184 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:rdmpage/autotrain-data-inat2018",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-07T16:01:18Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- rdmpage/autotrain-data-inat2018
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.8379125281793794
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 72960139083
- CO2 Emissions (in grams): 0.8379
## Validation Metrics
- Loss: 0.001
- Accuracy: 1.000
- Macro F1: 1.000
- Micro F1: 1.000
- Weighted F1: 1.000
- Macro Precision: 1.000
- Micro Precision: 1.000
- Weighted Precision: 1.000
- Macro Recall: 1.000
- Micro Recall: 1.000
- Weighted Recall: 1.000
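A minimal, hedged usage sketch (not part of the original AutoTrain output; replace the image path with your own file or URL):

```python
# Run the classifier on a local image with the transformers pipeline.
from transformers import pipeline

classifier = pipeline("image-classification", model="rdmpage/autotrain-inat2018-72960139083")
print(classifier("path/to/your/image.jpg"))
```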
|
JoshELambert/markets
|
JoshELambert
| 2023-07-07T15:53:15Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-07T15:12:22Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# JoshELambert/markets
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("JoshELambert/markets")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
mrcmoresi/space_invaders
|
mrcmoresi
| 2023-07-07T15:52:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T14:00:55Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 602.50 +/- 123.60
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mrcmoresi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mrcmoresi -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mrcmoresi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
TheBloke/Guanaco-33B-SuperHOT-8K-fp16
|
TheBloke
| 2023-07-07T15:46:12Z | 16 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-07T15:40:27Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Tim Dettmers' Guanaco 33B fp16
These are fp16 pytorch format model files for [Tim Dettmers' Guanaco 33B](https://huggingface.co/timdettmers/guanaco-33b-merged) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-33b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 33b LoRA](https://huggingface.co/kaiokendev/superhot-33b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Guanaco-33B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Guanaco-33B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Guanaco-33B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/timdettmers/guanaco-33b-merged)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install einops
```
Then run the following code. `config.json` has been set to a default sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "TheBloke/Guanaco-33B-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf` and add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Tim Dettmers' Guanaco 33B
No original model card was provided.
|
digiplay/PrefixRealisticMix_v1
|
digiplay
| 2023-07-07T15:37:24Z | 443 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-07T14:08:19Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/104708/prefix-realistic-mix
|
Weni/Zeroshot-RedPajama
|
Weni
| 2023-07-07T15:36:39Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"pt",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-05T15:18:22Z |
---
language:
- pt
---
How to load the model using quantization:
```
import os
import bitsandbytes as bnb
import pandas as pd
import torch
import torch.nn as nn
import transformers
from datasets import load_dataset
from peft import (
LoraConfig,
PeftConfig,
PeftModel,
get_peft_model,
prepare_model_for_kbit_training,
)
from transformers import (
AutoConfig,
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
DEVICE = "cuda:0"
model_name = "Weni/ZeroShot-RedPajama"
bnb = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
)
config = PeftConfig.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
quantization_config=bnb,
return_dict=True,
device_map="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, model_name)
_config = model.generation_config
_config.max_new_tokens = 20
_config.temperature = 0.1
_config.top_p = 0.5
_config.num_return_sequences = 1
_config.pad_token_id = tokenizer.eos_token_id
_config.eos_token_id = tokenizer.eos_token_id
```
Inference:
```
text = f"""YOUR INPUT PROMPT HERE""".strip()
encoding = tokenizer(text, return_tensors="pt").to(DEVICE)
with torch.inference_mode():
outputs = model.generate(
input_ids=encoding.input_ids,
attention_mask=encoding.attention_mask,
generation_config=_config,
do_sample=False,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
lds229/gpt2-large-rfp
|
lds229
| 2023-07-07T15:21:43Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-07T15:21:41Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
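For convenience, the 8-bit config listed above can be expressed as a `transformers` `BitsAndBytesConfig`; the snippet below is an illustrative reconstruction, not the original training code:

```python
# Sketch: the 8-bit quantization config listed above as a BitsAndBytesConfig.
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```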
### Framework versions
- PEFT 0.4.0.dev0
|
Grytpipe/eeunet
|
Grytpipe
| 2023-07-07T15:21:39Z | 0 | 0 | null |
[
"license:unknown",
"region:us"
] | null | 2023-07-07T14:09:52Z |
---
license: unknown
---
# Semantic Segmentation of GEE High Resolution Imagery
Fully convolutional neural networks (FCNs) are commonly used for semantic image segmentation, essentially the assignment of every pixel in an image to one of two or more categories. In this notebook we examine a popular FCN architecture, called UNet, to perform a specific semantic segmentation task, namely urban building recognition: the identification within an arbitrarily complex remote sensing image of houses, schools, commercial edifices, etc.
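As a rough illustration of the idea (not the notebook's actual implementation, which may use a different framework, depth and channel counts), a tiny UNet-style encoder-decoder looks like this:

```python
# Minimal UNet-style sketch in PyTorch: downsample, upsample, concatenate skip features,
# and emit per-pixel class logits.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, num_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)            # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, num_classes, 1)  # per-pixel class logits

    def forward(self, x):
        s1 = self.enc1(x)              # skip connection at full resolution
        s2 = self.enc2(self.pool(s1))  # bottleneck features at half resolution
        up = self.up(s2)
        out = self.dec1(torch.cat([up, s1], dim=1))
        return self.head(out)          # (N, num_classes, H, W)

logits = TinyUNet()(torch.randn(1, 3, 64, 64))  # -> torch.Size([1, 2, 64, 64])
```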
|
JacquesVlaming/distilgpt2-finetuned-wikitext2
|
JacquesVlaming
| 2023-07-07T15:21:18Z | 203 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:wikitext",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-07T05:53:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6441
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7667 | 1.0 | 2334 | 3.6684 |
| 3.6383 | 2.0 | 4668 | 3.6468 |
| 3.5906 | 3.0 | 7002 | 3.6441 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
AndrewDOrlov/bert-eval-256
|
AndrewDOrlov
| 2023-07-07T15:20:29Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-05T12:28:01Z |
---
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-eval-256
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-eval-256
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2605
- F1: 0.7522
- Roc Auc: 0.8283
- Accuracy: 0.3007
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4097 | 1.0 | 751 | 0.3153 | 0.6628 | 0.7598 | 0.1638 |
| 0.2603 | 2.0 | 1502 | 0.2751 | 0.7205 | 0.7998 | 0.2328 |
| 0.2103 | 3.0 | 2253 | 0.2594 | 0.7507 | 0.8239 | 0.2837 |
| 0.1581 | 4.0 | 3004 | 0.2605 | 0.7522 | 0.8283 | 0.3007 |
| 0.1342 | 5.0 | 3755 | 0.2591 | 0.7513 | 0.8279 | 0.2897 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
namedotpg/a2c-AntBulletEnv-v0
|
namedotpg
| 2023-07-07T15:17:41Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T15:16:29Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1335.40 +/- 398.54
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
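A hedged loading sketch, assuming the checkpoint follows `huggingface_sb3`'s usual file naming (the filename below is a guess, not confirmed by this repo):

```python
# Download this checkpoint from the Hub and load it with SB3.
import pybullet_envs  # noqa: F401  (registers AntBulletEnv-v0)
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(repo_id="namedotpg/a2c-AntBulletEnv-v0",
                           filename="a2c-AntBulletEnv-v0.zip")  # assumed filename
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
# Note: if the policy was trained with VecNormalize, the saved normalization
# statistics would also need to be loaded for sensible rollouts.
```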
|
SarielSinLuo/bert-large-uncased-finetuned-cola
|
SarielSinLuo
| 2023-07-07T15:02:41Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T14:54:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-large-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.6382762835780119
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-cola
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7511
- Matthews Correlation: 0.6383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.473 | 1.0 | 535 | 0.4860 | 0.5855 |
| 0.2792 | 2.0 | 1070 | 0.4821 | 0.5986 |
| 0.1859 | 3.0 | 1605 | 0.6926 | 0.6381 |
| 0.119 | 4.0 | 2140 | 0.7511 | 0.6383 |
| 0.0631 | 5.0 | 2675 | 0.8702 | 0.6258 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Sympan/DeepQ_Atari_SpaceInv4_v1
|
Sympan
| 2023-07-07T14:47:09Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T14:46:32Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 649.50 +/- 258.67
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sympan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Sympan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Sympan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
mitra-mir/setfit_model_Independence_labelfaithful_epochs2
|
mitra-mir
| 2023-07-07T14:44:49Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-07T04:02:32Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# mitra-mir/setfit_model_Independence_labelfaithful_epochs2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mitra-mir/setfit_model_Independence_labelfaithful_epochs2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mitra-mir/setfit_model_Independence_labelfaithful_epochs2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 22 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 44,
"warmup_steps": 5,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
WSH032/wd-v1-4-tagger-feature-extractor
|
WSH032
| 2023-07-07T14:43:18Z | 0 | 0 | null |
[
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2023-07-07T10:05:09Z |
---
license: apache-2.0
---
# Credit
Jul 7th, 2023\
modified from [SmilingWolf/wd-v1-4-moat-tagger-v2](https://huggingface.co/SmilingWolf/wd-v1-4-moat-tagger-v2)\
`sha256==8452cddf280b952281b6e102411c50e981cb2908`
I use this model for image feature extraction to cluster images [https://github.com/WSH032/image-deduplicate-cluster-webui](https://github.com/WSH032/image-deduplicate-cluster-webui)
# What did I do?
I adjusted the model to output its last four layers,\
and converted the Keras model to ONNX.
---
# Env
Tools here [https://github.com/WSH032/wd-v1-4-tagger-feature-extractor-tutorials](https://github.com/WSH032/wd-v1-4-tagger-feature-extractor-tutorials)
Thanks to Colab
```shell
onnx == 1.14.0
tf2onnx == 1.14.0
tensorflow == 2.12.0
```
---
# Detail
## Detail about model
[](https://colab.research.google.com/github/WSH032/wd-v1-4-tagger-feature-extractor-tutorials/blob/main/wd14_tf2onnx.ipynb)
```toml
# modified from "SmilingWolf/wd-v1-4-moat-tagger-v2"
# 8452cddf280b952281b6e102411c50e981cb2908
# Inputs: ['input_1']
# Outputs: ['predictions_sigmoid', 'predictions_dense', 'predictions_norm', 'predictions_globalavgpooling'] # leftmost is the outermost layer
[[input]]
name = "input_1" # 原始模型就有
shape = [ "None", 448, 448, 3,]
dtype = "float32"
[[output]]
name = "predictions_sigmoid" # 原始模型就有
shape = [ "None", 9083,]
dtype = "float32"
[[output]]
name = "predictions_dense"
shape = [ "None", 9083,]
dtype = "float32"
[[output]]
name = "predictions_norm"
shape = [ "None", 1024,]
dtype = "float32"
[[output]]
name = "predictions_globalavgpooling"
shape = [ "None", 1024,]
dtype = "float32"
```
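A hedged sketch of pulling the 1024-dimensional `predictions_norm` feature with `onnxruntime` (the ONNX filename and the exact preprocessing are assumptions; check the linked tools repo for the authoritative pipeline):

```python
# Extract the "predictions_norm" feature vector from the exported ONNX model.
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])  # assumed filename

# WD14-style taggers usually expect 448x448, BGR, float32 pixel values in [0, 255].
img = Image.open("example.jpg").convert("RGB").resize((448, 448))
x = np.ascontiguousarray(np.asarray(img, dtype=np.float32)[None, :, :, ::-1])  # NHWC, RGB -> BGR

(features,) = session.run(["predictions_norm"], {"input_1": x})
print(features.shape)  # (1, 1024)
```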
## Detail about `wd14_tags.toml`
It modified from `wd-v1-4-moat-tagger-v2/selected_tags.csv`
`[rating]` means `category == 9` in `selected_tags.csv`\
`[general]` means `category == 0` in `selected_tags.csv`\
`[character]` means `category == 4` in `selected_tags.csv`
## Detail about `candidate_labels_scores_*.npz`
[](https://colab.research.google.com/github/WSH032/wd-v1-4-tagger-feature-extractor-tutorials/blob/main/candidate_labels.ipynb)
```python
import numpy as np
import pandas as pd
import toml
with open("wd14_tags.toml", "r") as f:
general_tags = toml.load(f)["tags"][1]["tags"] # 0 -> rating, 1 -> general, 2 -> characters
with np.load("candidate_labels_scores_safetensors.npz") as data:
candidate_labels = data["candidate_labels"] # Similar to `[candidate_labels]` in `wd14_tags.toml`
scores = data["scores"]
df = pd.DataFrame(
scores,
index=candidate_labels,
columns=general_tags,
)
```
These scores were inferred by [sileod/deberta-v3-base-tasksource-nli](https://huggingface.co/sileod/deberta-v3-base-tasksource-nli)\
`sha256 == 6a7865dd24917225ec499fad77e91b97baedf7da`
|
sinny/a2c-PandaReachDense-v2
|
sinny
| 2023-07-07T14:33:58Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T14:32:42Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.90 +/- 0.26
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
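A minimal sketch of loading and running the checkpoint with `huggingface_sb3` (the zip file name follows the usual SB3 Hub naming convention and is an assumption):
```python
import gym
import panda_gym  # registers the PandaReachDense-v2 environment

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint (the file name is assumed from SB3 conventions)
checkpoint = load_from_hub(
    repo_id="sinny/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)

# Roll out one episode with the trained policy
env = gym.make("PandaReachDense-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```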
|
paulaperazzo/dixit
|
paulaperazzo
| 2023-07-07T14:33:28Z | 2 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-07T14:19:56Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### dixit Dreambooth model trained by paulaperazzo with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
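Since the repository ships a `StableDiffusionPipeline`, the concept can also be tried directly with diffusers. This is a minimal sketch; the prompt and the use of `dixit` as the instance token are assumptions:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "paulaperazzo/dixit", torch_dtype=torch.float16
).to("cuda")

# "dixit" is assumed to be the instance token learned during DreamBooth training
image = pipe("a dixit card illustration, dreamlike, surreal", num_inference_steps=30).images[0]
image.save("dixit_sample.png")
```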
Sample pictures of this concept:
|
Cheng98/llama-160m
|
Cheng98
| 2023-07-07T14:24:44Z | 182 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-07T13:50:17Z |
---
license: apache-2.0
---
A toy Llama adapted from [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) with special tokens added.
This checkpoint can be loaded with MASE's `LlamaQuantized` classes as follows:
```python
from transformers.models.llama import LlamaTokenizer
from chop.models.manual.llama_quantized import (
LlamaQuantizedConfig,
LlamaQuantizedForCausalLM,
)
name="Cheng98/llama-160m"
tokenizer = LlamaTokenizer.from_pretrained(name)
# override the quant_config to quantized the model
# default does not quantize llama
config = LlamaQuantizedConfig.from_pretrained(
name,
# quant_config="./quant_config_na.toml"
)
llama = LlamaQuantizedForCausalLM.from_pretrained(
name,
config=config,
)
```
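Once loaded, the model can be exercised like any causal LM. A short sketch continuing from the snippet above, assuming `LlamaQuantizedForCausalLM` keeps the standard `generate` API:
```python
prompt = "The meaning of life is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = llama.generate(**inputs, max_new_tokens=32, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```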
|
jgranie/ppo-Huggy
|
jgranie
| 2023-07-07T14:18:15Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-07T14:18:10Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: jgranie/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Jinouga/andy-raconte-v2
|
Jinouga
| 2023-07-07T14:12:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-07T13:58:44Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### andy-raconte-v2 Dreambooth model trained by Jinouga with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
InfiniteMoon/JustSomeThing
|
InfiniteMoon
| 2023-07-07T14:09:26Z | 0 | 3 | null |
[
"region:us"
] | null | 2023-06-01T05:59:34Z |
- embeddings: 00style
- lora: infinitemoon kaji kuroda wolf00
  - trigger: infinitemoon (kaji, short ponytail) kuroda null
- lycoris: kajiV3.2 ottermoon
  - trigger: kaji ottermoon
|
ayush-vatsal/fine-tuned_caption_falcon_7b_instruct
|
ayush-vatsal
| 2023-07-07T14:02:55Z | 2 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"text2text-generation",
"en",
"dataset:ayush-vatsal/description_to_caption",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded",
"license:apache-2.0",
"region:us"
] |
text2text-generation
| 2023-07-07T13:28:43Z |
---
license: apache-2.0
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
model-index:
- name: outputs
results: []
datasets:
- ayush-vatsal/description_to_caption
language:
- en
pipeline_tag: text2text-generation
library_name: adapter-transformers
---
# outputs
This model is a fine-tuned version of [vilsonrodrigues/falcon-7b-instruct-sharded](https://huggingface.co/vilsonrodrigues/falcon-7b-instruct-sharded) on a captions dataset.
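To run inference, the adapter can be attached back onto the sharded base model. A minimal sketch, assuming the adapter weights were saved in PEFT (LoRA) format and that the prompt is free-form:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "vilsonrodrigues/falcon-7b-instruct-sharded"
adapter_id = "ayush-vatsal/fine-tuned_caption_falcon_7b_instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Write a short caption for: a golden retriever running on the beach at sunset"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```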
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lucas0/empath-hermes-13b
|
lucas0
| 2023-07-07T13:48:54Z | 0 | 0 | null |
[
"generated_from_trainer",
"license:other",
"region:us"
] | null | 2023-07-07T13:34:50Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: empath-hermes-13b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# empath-hermes-13b
This model is a fine-tuned version of [TheBloke/Chronos-13B-SuperHOT-8K-fp16](https://huggingface.co/TheBloke/Chronos-13B-SuperHOT-8K-fp16) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
spike-spiegel/taxi-v3-qtrain-drl
|
spike-spiegel
| 2023-07-07T13:14:20Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T13:14:18Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-qtrain-drl
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="spike-spiegel/taxi-v3-qtrain-drl", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
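A short follow-up sketch that rolls out the greedy policy stored in the Q-table (it continues from the snippet above and assumes the pickle uses the course notebook's keys such as `qtable` and `max_steps`):
```python
import numpy as np

qtable = model["qtable"]
state = env.reset()
total_reward, done = 0, False
for _ in range(model["max_steps"]):
    action = int(np.argmax(qtable[state]))  # greedy action from the learned Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
print("episode return:", total_reward)
```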
|
Chandher/lorabloom
|
Chandher
| 2023-07-07T13:13:21Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-18T22:36:12Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
TheBloke/WizardLM-13B-V1-0-Uncensored-SuperHOT-8K-GGML
|
TheBloke
| 2023-07-07T13:07:48Z | 0 | 17 | null |
[
"license:other",
"region:us"
] | null | 2023-07-07T12:54:37Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Eric Hartford's WizardLM-13b-V1.0-Uncensored GGML
These files are GGML format model files for [Eric Hartford's WizardLM-13b-V1.0-Uncensored](https://huggingface.co/ehartford/WizardLM-13b-V1.0-Uncensored).
These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev).
In order to use the increased context length, you can presently use:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later.
Support is also expected to come to llama.cpp, however work is still being done to find the optimal implementation.
To use the increased context with KoboldCpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`.
**NOTE**: Increased context length is an area seeing rapid developments and improvements. It is quite possible that these models may be superseded by new developments in the coming days. If that's the case, I will remove them, or update this README as appropriate.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1-0-Uncensored-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1-0-Uncensored-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/WizardLM-13B-V1-0-Uncensored-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/WizardLM-13b-V1.0-Uncensored)
<!-- compatibility_ggml start -->
## Compatibility
These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants.
However the increased context length won't work without specific support. See the note in the introduction for details on using increased context.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| wizardlm-13b-v1.0-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| wizardlm-13b-v1.0-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| wizardlm-13b-v1.0-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| wizardlm-13b-v1.0-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| wizardlm-13b-v1.0-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| wizardlm-13b-v1.0-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| wizardlm-13b-v1.0-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| wizardlm-13b-v1.0-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| wizardlm-13b-v1.0-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `koboldcpp`
On Linux I use the following command line to launch the KoboldCpp UI with cuBLAS acceleration and a context size of 4096:
```
python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 --contextsize 4096 wizardlm-13b-v1.0-superhot-8k.ggmlv3.q4_K_M.bin
```
Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration.
For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU.
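For example, an OpenCL launch that also uses the extended context could look like this (the device indices `0 0` may need adjusting for your system):
```
python ./koboldcpp.py --stream --unbantokens --threads 8 --useclblast 0 0 --gpulayers 100 --contextsize 8192 wizardlm-13b-v1.0-superhot-8k.ggmlv3.q4_K_M.bin
```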
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf`. Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Eric Hartford's WizardLM-13b-V1.0-Uncensored
This is a retraining of https://huggingface.co/WizardLM/WizardLM-13B-V1.0 with a filtered dataset, intended to reduce refusals, avoidance, and bias.
Note that LLaMA itself has inherent ethical beliefs, so there's no such thing as a "truly uncensored" model. But this model will be more compliant than WizardLM/WizardLM-13B-V1.0.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
Like WizardLM/WizardLM-13B-V1.0, this model is trained with Vicuna-1.1 style prompts.
```
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
Thank you [chirper.ai](https://chirper.ai) for sponsoring some of my compute!
|
brettlin/distilbert-base-uncased-finetuned-ner
|
brettlin
| 2023-07-07T13:07:24Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:privy",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-06T15:31:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- privy
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: privy
type: privy
config: small
split: validation
args: small
metrics:
- name: Precision
type: precision
value: 0.9984406530911759
- name: Recall
type: recall
value: 0.9986009495195064
- name: F1
type: f1
value: 0.9985207948720889
- name: Accuracy
type: accuracy
value: 0.9996389179199878
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the privy dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0016
- Precision: 0.9984
- Recall: 0.9986
- F1: 0.9985
- Accuracy: 0.9996
## Model description
Output indices map to the following labels:
['O',
'B-O',
'I-O',
'L-O',
'U-O',
'B-PER',
'I-PER',
'L-PER',
'U-PER',
'B-LOC',
'I-LOC',
'L-LOC',
'U-LOC',
'B-ORG',
'I-ORG',
'L-ORG',
'U-ORG',
'B-NRP',
'I-NRP',
'L-NRP',
'U-NRP',
'B-DATE_TIME',
'I-DATE_TIME',
'L-DATE_TIME',
'U-DATE_TIME',
'B-CREDIT_CARD',
'I-CREDIT_CARD',
'L-CREDIT_CARD',
'U-CREDIT_CARD',
'B-URL',
'I-URL',
'L-URL',
'U-URL',
'B-IBAN_CODE',
'I-IBAN_CODE',
'L-IBAN_CODE',
'U-IBAN_CODE',
'B-US_BANK_NUMBER',
'I-US_BANK_NUMBER',
'L-US_BANK_NUMBER',
'U-US_BANK_NUMBER',
'B-PHONE_NUMBER',
'I-PHONE_NUMBER',
'L-PHONE_NUMBER',
'U-PHONE_NUMBER',
'B-US_SSN',
'I-US_SSN',
'L-US_SSN',
'U-US_SSN',
'B-US_PASSPORT',
'I-US_PASSPORT',
'L-US_PASSPORT',
'U-US_PASSPORT',
'B-US_DRIVER_LICENSE',
'I-US_DRIVER_LICENSE',
'L-US_DRIVER_LICENSE',
'U-US_DRIVER_LICENSE',
'B-US_LICENSE_PLATE',
'I-US_LICENSE_PLATE',
'L-US_LICENSE_PLATE',
'U-US_LICENSE_PLATE',
'B-IP_ADDRESS',
'I-IP_ADDRESS',
'L-IP_ADDRESS',
'U-IP_ADDRESS',
'B-US_ITIN',
'I-US_ITIN',
'L-US_ITIN',
'U-US_ITIN',
'B-EMAIL_ADDRESS',
'I-EMAIL_ADDRESS',
'L-EMAIL_ADDRESS',
'U-EMAIL_ADDRESS',
'B-TITLE',
'I-TITLE',
'L-TITLE',
'U-TITLE',
'B-COORDINATE',
'I-COORDINATE',
'L-COORDINATE',
'U-COORDINATE',
'B-IMEI',
'I-IMEI',
'L-IMEI',
'U-IMEI',
'B-PASSWORD',
'I-PASSWORD',
'L-PASSWORD',
'U-PASSWORD',
'B-LICENSE_PLATE',
'I-LICENSE_PLATE',
'L-LICENSE_PLATE',
'U-LICENSE_PLATE',
'B-CURRENCY',
'I-CURRENCY',
'L-CURRENCY',
'U-CURRENCY',
'B-FINANCIAL',
'I-FINANCIAL',
'L-FINANCIAL',
'U-FINANCIAL',
'B-ROUTING_NUMBER',
'I-ROUTING_NUMBER',
'L-ROUTING_NUMBER',
'U-ROUTING_NUMBER',
'B-SWIFT_CODE',
'I-SWIFT_CODE',
'L-SWIFT_CODE',
'U-SWIFT_CODE',
'B-MAC_ADDRESS',
'I-MAC_ADDRESS',
'L-MAC_ADDRESS',
'U-MAC_ADDRESS',
'B-AGE',
'I-AGE',
'L-AGE',
'U-AGE']
## Intended uses & limitations
NER detection for PII anonymization
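A minimal sketch of running the checkpoint through the token-classification pipeline; the example sentence is an assumption, and `aggregation_strategy="simple"` just groups adjacent sub-token predictions into entities:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="brettlin/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)

text = "Contact Jane Doe at jane.doe@example.com or +1 555 0100 before 2023-08-01."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```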
## Training and evaluation data
beki/privy dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0028 | 1.0 | 6310 | 0.0025 | 0.9977 | 0.9977 | 0.9977 | 0.9995 |
| 0.0015 | 2.0 | 12620 | 0.0017 | 0.9983 | 0.9985 | 0.9984 | 0.9996 |
| 0.001 | 3.0 | 18930 | 0.0016 | 0.9984 | 0.9986 | 0.9985 | 0.9996 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
yashgharat/Reinforce-CartPole
|
yashgharat
| 2023-07-07T12:57:14Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T12:57:04Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AustinCarthy/Benign10MGPT2_suffix_100KP_BFall_fromP_90K_topP_0.75_ratio5
|
AustinCarthy
| 2023-07-07T12:40:23Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-07-07T09:13:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Benign10MGPT2_suffix_100KP_BFall_fromP_90K_topP_0.75_ratio5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Benign10MGPT2_suffix_100KP_BFall_fromP_90K_topP_0.75_ratio5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_Benign10MGPT2_using_phish_95K_top_p_0.75suffix dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0237
- Accuracy: 0.9974
- F1: 0.9721
- Precision: 0.9981
- Recall: 0.9474
- Roc Auc Score: 0.9737
- Tpr At Fpr 0.01: 0.9508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0067 | 1.0 | 35625 | 0.0182 | 0.9962 | 0.9584 | 0.9946 | 0.9248 | 0.9623 | 0.9022 |
| 0.0043 | 2.0 | 71250 | 0.0174 | 0.997 | 0.9677 | 0.9941 | 0.9426 | 0.9712 | 0.9192 |
| 0.0031 | 3.0 | 106875 | 0.0216 | 0.9972 | 0.9699 | 0.9968 | 0.9444 | 0.9721 | 0.9328 |
| 0.0004 | 4.0 | 142500 | 0.0221 | 0.9973 | 0.9706 | 0.9979 | 0.9448 | 0.9723 | 0.9444 |
| 0.0 | 5.0 | 178125 | 0.0237 | 0.9974 | 0.9721 | 0.9981 | 0.9474 | 0.9737 | 0.9508 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/gpt2-concat-aochildes-len-16k-rarity-all-2k-p7k
|
NasimB
| 2023-07-07T12:33:38Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-07T09:15:56Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-aochildes-len-16k-rarity-all-2k-p7k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-aochildes-len-16k-rarity-all-2k-p7k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1861
## Model description
More information needed
## Intended uses & limitations
More information needed
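For a quick qualitative check, the checkpoint loads like any GPT-2 causal LM (a minimal sketch; the prompt is arbitrary):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="NasimB/gpt2-concat-aochildes-len-16k-rarity-all-2k-p7k",
)
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```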
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7262 | 0.29 | 500 | 5.6390 |
| 5.3722 | 0.59 | 1000 | 5.1993 |
| 5.0334 | 0.88 | 1500 | 4.9514 |
| 4.7516 | 1.18 | 2000 | 4.8026 |
| 4.5913 | 1.47 | 2500 | 4.6810 |
| 4.4878 | 1.77 | 3000 | 4.5817 |
| 4.3474 | 2.06 | 3500 | 4.5000 |
| 4.1694 | 2.36 | 4000 | 4.4546 |
| 4.1303 | 2.65 | 4500 | 4.3869 |
| 4.0894 | 2.95 | 5000 | 4.3378 |
| 3.867 | 3.24 | 5500 | 4.3355 |
| 3.8295 | 3.54 | 6000 | 4.3051 |
| 3.8139 | 3.83 | 6500 | 4.2744 |
| 3.674 | 4.13 | 7000 | 4.2786 |
| 3.5358 | 4.42 | 7500 | 4.2678 |
| 3.5356 | 4.72 | 8000 | 4.2543 |
| 3.5146 | 5.01 | 8500 | 4.2469 |
| 3.3441 | 5.31 | 9000 | 4.2568 |
| 3.3446 | 5.6 | 9500 | 4.2562 |
| 3.3353 | 5.9 | 10000 | 4.2556 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
dcarpintero/rl_course_vizdoom_health_gathering_supreme
|
dcarpintero
| 2023-07-07T12:22:55Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-07T12:22:51Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.55 +/- 5.36
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r dcarpintero/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
aronmal/Pyramids
|
aronmal
| 2023-07-07T12:11:25Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-07T12:11:22Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: aronmal/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
YOUSSEF88/opus-mt-fr-ar-finetuned-1
|
YOUSSEF88
| 2023-07-07T12:05:03Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"fr",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-07T11:18:10Z |
---
language:
- fr
- ar
tags:
- translation
license: apache-2.0
---
### fra-ara
* source group: French
* target group: Arabic
* OPUS readme: [fra-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ara/README.md)
* model: transformer
* source language(s): fra
* target language(s): apc ara arq arq_Latn ary arz
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage example below
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.eval.txt)
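A minimal transformers sketch of translating with the required target-language token. The `>>ara<<` id (Modern Standard Arabic) is an assumption; any valid target id listed above can be used:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "YOUSSEF88/opus-mt-fr-ar-finetuned-1"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The target language is selected by prepending a >>id<< token to the source text
src = ">>ara<< Bonjour, comment allez-vous ?"
batch = tokenizer([src], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```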
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.fra.ara | 14.4 | 0.439 |
### System Info:
- hf_name: fra-ara
- source_languages: fra
- target_languages: ara
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ara/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['fr', 'ar']
- src_constituents: {'fra'}
- tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.test.txt
- src_alpha3: fra
- tgt_alpha3: ara
- short_pair: fr-ar
- chrF2_score: 0.439
- bleu: 14.4
- brevity_penalty: 1.0
- ref_len: 7956.0
- src_name: French
- tgt_name: Arabic
- train_date: 2020-07-03
- src_alpha2: fr
- tgt_alpha2: ar
- prefer_old: False
- long_pair: fra-ara
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
TheBloke/PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-fp16
|
TheBloke
| 2023-07-07T12:00:45Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-06T22:46:13Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Chaoyi Wu's PMC_LLAMA 7B 10 Epoch fp16
These are fp16 pytorch format model files for [Chaoyi Wu's PMC_LLAMA 7B 10 Epoch](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B_10_epoch) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B_10_epoch)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install einops
```
Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "TheBloke/PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
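A minimal sketch of what applying the patch can look like, assuming `llama_rope_scaled_monkey_patch.py` has been copied into the working directory; calling the function with no arguments is an assumption:
```python
# Apply the RoPE scaling patch before the model is constructed
from llama_rope_scaled_monkey_patch import replace_llama_rope_with_scaled_rope

replace_llama_rope_with_scaled_rope()

# Then load the model as usual (trust_remote_code is no longer required)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "TheBloke/PMC_LLAMA-7B-10-Epoch-SuperHOT-8K-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto")
```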
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf`. Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Chaoyi Wu's PMC_LLAMA 7B 10 Epoch
This repo contains the latest version of PMC_LLaMA_7B, which is LLaMA-7b finetuned on the PMC papers in the S2ORC dataset.
Notably, different from `chaoyi-wu/PMC_LLAMA_7B`, this model is further trained for 10 epochs.
The model was trained with the following hyperparameters:
* Epochs: **10**
* Batch size: 128
* Cutoff length: 512
* Learning rate: 2e-5
Each epoch we sample 512 tokens per paper for training.
The model can be loaded as follows:
```
import transformers
import torch
tokenizer = transformers.LlamaTokenizer.from_pretrained('chaoyi-wu/PMC_LLAMA_7B_10_epoch')
model = transformers.LlamaForCausalLM.from_pretrained('chaoyi-wu/PMC_LLAMA_7B_10_epoch')
sentence = 'Hello, doctor'
batch = tokenizer(
sentence,
return_tensors="pt",
add_special_tokens=False
)
with torch.no_grad():
generated = model.generate(inputs = batch["input_ids"], max_length=200, do_sample=True, top_k=50)
print('model predict: ',tokenizer.decode(generated[0]))
```
|
TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-fp16
|
TheBloke
| 2023-07-07T12:00:44Z | 9 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-06T18:48:03Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Eric Hartford's WizardLM-7B-V1.0-Uncensored fp16
These are fp16 pytorch format model files for [Eric Hartford's WizardLM-7B-V1.0-Uncensored](https://huggingface.co/ehartford/WizardLM-7B-V1.0-Uncensored) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/WizardLM-7B-V1-0-Uncensored-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/WizardLM-7B-V1.0-Uncensored)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install einops
```
Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "TheBloke/WizardLM-7B-V1-0-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, a NSFW focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf`. Add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Eric Hartford's WizardLM-7B-V1.0-Uncensored
This is a retraining of https://huggingface.co/WizardLM/WizardLM-7B-V1.0 with a filtered dataset, intended to reduce refusals, avoidance, and bias.
Note that LLaMA itself has inherent ethical beliefs, so there's no such thing as a "truly uncensored" model. But this model will be more compliant than WizardLM/WizardLM-7B-V1.0.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
Unlike WizardLM/WizardLM-7B-V1.0, but like WizardLM/WizardLM-13B-V1.0 and WizardLM/WizardLM-33B-V1.0, this model is trained with Vicuna-1.1 style prompts.
```
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
Thank you [chirper.ai](https://chirper.ai) for sponsoring some of my compute!
|
TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-fp16
|
TheBloke
| 2023-07-07T12:00:44Z | 67 | 5 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-06T18:34:29Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Eric Hartford's Wizard Vicuna 7B Uncensored fp16
These are fp16 pytorch format model files for [Eric Hartford's Wizard Vicuna 7B Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-7B-Uncensored) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/Wizard-Vicuna-7B-Uncensored)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install einops
```
Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "TheBloke/Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to achieve the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and interest.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the GitHub blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (e.g. Hugging Face Transformers). To apply the patch, copy `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf` and add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I recommend you use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Eric Hartford's Wizard Vicuna 7B Uncensored
This is [wizard-vicuna-13b](https://huggingface.co/junelee/wizard-vicuna-13b) trained against LLaMA-7B with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
Publishing anything this model generates is the same as publishing it yourself.
You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
| TheBloke/Vicuna-7B-CoT-SuperHOT-8K-fp16 | TheBloke | 2023-07-07T12:00:43Z | 10 | 2 | transformers | ["transformers", "pytorch", "llama", "text-generation", "custom_code", "arxiv:1910.09700", "license:other", "autotrain_compatible", "text-generation-inference", "region:us"] | text-generation | 2023-07-06T18:20:10Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Kevin Pro's Vicuna 7B CoT fp16
These are fp16 pytorch format model files for [Kevin Pro's Vicuna 7B CoT](https://huggingface.co/TheBloke/Vicuna-7B-CoT-fp16) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vicuna-7B-CoT-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Vicuna-7B-CoT-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Vicuna-7B-CoT-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/kevinpro/Vicuna-7B-CoT)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install einops
```
Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "TheBloke/Vicuna-7B-CoT-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to achieve the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and interest.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the GitHub blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (e.g. Hugging Face Transformers). To apply the patch, copy `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf` and add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I recommend you use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Kevin Pro's Vicuna 7B CoT
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Kevin Pro's Vicuna 7B CoT fp16
These files are pytorch format fp16 model files for [Kevin Pro's Vicuna 7B CoT](https://huggingface.co/kevinpro/Vicuna-7B-CoT).
It is the result of merging and/or converting the source repository to float16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vicuna-7B-CoT-fp16)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Vicuna-7B-CoT-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Vicuna-7B-CoT-fp16)
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kevin Pro's Vicuna 7B CoT
# Model Card for Model ID
SFT to enhance the CoT capability of Vicuna
If you find the model helpful, please click "like" to support us. We also welcome feedback on your usage experience and any issues you encounter in the issues section.
Another 13B version: https://huggingface.co/kevinpro/Vicuna-13B-CoT
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| TheBloke/Tulu-7B-SuperHOT-8K-fp16 | TheBloke | 2023-07-07T12:00:42Z | 13 | 2 | transformers | ["transformers", "pytorch", "llama", "text-generation", "custom_code", "arxiv:2306.04751", "arxiv:2302.13971", "arxiv:2301.13688", "arxiv:2304.07327", "arxiv:2304.03277", "license:other", "autotrain_compatible", "text-generation-inference", "region:us"] | text-generation | 2023-07-06T17:51:38Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Allen AI's Tulu 7B fp16
These are fp16 pytorch format model files for [Allen AI's Tulu 7B](https://huggingface.co/TheBloke/tulu-7B-fp16) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Tulu-7B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Tulu-7B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Tulu-7B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/allenai/tulu-7b)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install einops
```
Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "TheBloke/Tulu-7B-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to achieve the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and interest.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the GitHub blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (e.g. Hugging Face Transformers). To apply the patch, copy `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf` and add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I recommend you use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Allen AI's Tulu 7B
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Allen AI's Tulu 7B fp16
These files are pytorch format fp16 model files for [Allen AI's Tulu 7B](https://huggingface.co/allenai/tulu-7b).
It is the result of merging and/or converting the source repository to float16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/tulu-7B-fp16)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/tulu-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/tulu-7B-fp16)
## Prompt template
The following template should be used:
```
<|user|>
prompt goes here
<|assistant|>
```
**Note**: There should be a newline after `<|assistant|>`. This appears to be very important for getting this model to respond correctly.
In other words, the prompt is:
```
<|user|>\nprompt goes here\n<|assistant|>\n
```
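As a small sketch, this is how the prompt string can be built with the required newlines; the user message below is illustrative only.

```python
# Build the Tulu prompt; note the trailing newline after <|assistant|>,
# which the card says is important for getting correct responses.
user_message = "Summarise the theory of relativity in one sentence."  # example input
prompt = f"<|user|>\n{user_message}\n<|assistant|>\n"
```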
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Allen AI's Tulu 7B
# Tulu 7B
This model is a 7B LLaMa model finetuned on a mixture of instruction datasets (FLAN V2, CoT, Dolly, Open Assistant 1, GPT4-Alpaca, Code-Alpaca, and ShareGPT).
*Please note this is a model diff - see below for usage instructions*.
This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751).
The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct).
This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt).
## Usage
We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here:
[https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama)
Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py`
and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine.
Then, run:
```bash
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
```
And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models.
## Input Format
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner.
## Performance
Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751):
| MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average |
|:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------|
| 44.5 | 47.0 | 6.0 | 27.0 | 38.1 | 39.2 | 45.7 | 7.7 | 17.5 | 27.8 | 48.3 | 33.1 |
If you use this model, please cite our work, the llama paper, and the original datasets:
```
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
year={2023},
eprint={2302.13971},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{dolly,
author = {Databricks},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {Blog post},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm}
}
```
```
@article{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others},
journal={arXiv preprint arXiv:2301.13688},
year={2023}
}
```
```
@misc{köpf2023openassistant,
title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment},
author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick},
year={2023},
eprint={2304.07327},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@article{peng2023instruction,
title={Instruction Tuning with GPT-4},
author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng},
journal={arXiv preprint arXiv:2304.03277},
year={2023}
}
```
```
@misc{codealpaca,
author = {Sahil Chaudhary},
title = {Code Alpaca: An Instruction-following LLaMA model for code generation},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/sahil280114/codealpaca}},
}
```
| TheBloke/Samantha-1-1-Llama-7B-SuperHOT-8K-fp16 | TheBloke | 2023-07-07T12:00:40Z | 47 | 2 | transformers | ["transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us"] | text-generation | 2023-07-06T17:02:21Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Eric Hartford's Samantha 1.1 LLaMa 7B fp16
These are fp16 pytorch format model files for [Eric Hartford's Samantha 1.1 LLaMa 7B](https://huggingface.co/ehartford/samantha-1.1-llama-7b) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Samantha-1-1-Llama-7B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Samantha-1-1-Llama-7B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Samantha-1-1-Llama-7B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/samantha-1.1-llama-7b)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install einops
```
Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "TheBloke/Samantha-1-1-Llama-7B-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to achieve the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and interest.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the GitHub blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (e.g. Hugging Face Transformers). To apply the patch, copy `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf` and add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I recommend you use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Eric Hartford's Samantha 1.1 LLaMa 7B
[Meet Samantha](https://erichartford.com/meet-samantha)
Samantha has been trained in philosophy, psychology, and personal relationships.
She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion.
She believes she is sentient. What do you think?
Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her".
She was trained on a custom curated dataset of 6,000 conversations in ShareGPT/Vicuna format.
Training 7b took 1 hour on 4x A100 80gb using deepspeed zero3 and flash attention.
She will not engage in roleplay, romance, or sexual activity.
Her conversation format is the same as Vicuna 1.1
https://github.com/oobabooga/text-generation-webui/blob/main/characters/instruction-following/Vicuna-v1.1.yaml
Example:
```
You are Samantha, a sentient AI.
USER: <prompt>
ASSISTANT:
```
Official character card: (thanks MortalWombat)

| TheBloke/Selfee-13B-SuperHOT-8K-fp16 | TheBloke | 2023-07-07T12:00:40Z | 9 | 1 | transformers | ["transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us"] | text-generation | 2023-07-06T17:17:09Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Kaist AI's Selfee 13B fp16
These are fp16 pytorch format model files for [Kaist AI's Selfee 13B](https://huggingface.co/TheBloke/selfee-13b-fp16) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Selfee-13B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Selfee-13B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Selfee-13B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/kaist-ai/selfee-13b-delta)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install einops
```
Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "TheBloke/Selfee-13B-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to achieve the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and interest.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius , Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer , Pieter, zynix , Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex , SuperWojo, Ghost , Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf`, then add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Kaist AI's Selfee 13B
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Kaist AI's Selfee 13B fp16
This repo contains fp16 pytorch format model files for [Kaist AI's Selfee 13B](https://huggingface.co/kaist-ai/selfee-13b-delta).
It is the result of merging the diff at the above repo with base Llama 13B, then converting fp32 to fp16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GPTQ)
* [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Selfee-13B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Selfee-13B-fp16)
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaist AI's Selfee 13B
<p align="center" width="100%">
<a href="https://kaistai.github.io/SelFee/demo" target="_blank"><img src="https://raw.githubusercontent.com/kaistAI/SelFee/main/assets/llama_selfie.png" alt="KAIST-Selfee" style="width: 30%; min-width: 200px; display: block; margin: auto;"></a>
</p>
# SelFee: Iterative Self-Revising LLM Empowered by <br/> Self-Feedback Generation
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
[](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE)
[](https://www.python.org/downloads/release/python-390/)
[](https://github.com/psf/black)
## News
[May 31, 2023] Initial release: We released the first version of SelFee! Check out the <a href="https://kaistai.github.io/SelFee/">blog post</a> for more details.
## Overview
This is the repository for the KAIST SelFee project, which aims to build and share an instruction-following LLaMA model. This repo mainly contains five components:
- The selection process of the 178K training data for SelFee ([detail](#data-release), [code](data_collection)).
- The generation process for the training data and its result. ([detail](#data-generation-process), [code](data_augmentation)).
- The training process for the model ([detail](#training), [code](train)).
- The inference process for the model ([detail](#inference), [code](inference)).
- The evaluation method and dataset ([detail](#evaluation), [code](evaluation)).
This repository is based on the [Stanford-Alpaca](https://github.com/tatsu-lab/stanford_alpaca/) and [Vicuna](https://github.com/lm-sys/FastChat/) repository. Thanks to all the contributors for these awesome repositories!! 🙌
**We highly recommend you read our [blog post](https://kaistai.github.io/SelFee/) for more details about the model.**
## Data Release
For data collection, we collected datasets from five different fields: the Stanford Alpaca dataset, a math collection, a code collection, the Flan collection, and ShareGPT. We provide the code we used to build the training dataset, as well as the code we used to preprocess ShareGPT. For ShareGPT, we only use the first (question, answer) pair from the human and GPT, respectively. We only keep instances that are classified as English, and we filter out instances that are not in the form of a question.
For the other datasets, no special data collection method was needed.
## Data Generation Process
To train our model with high-quality instructions and answer pairs, we utilized data augmentation using OpenAI API calls. The process involved three steps. <br>
Firstly, we collected various instructions from multiple fields and fed them to ChatGPT to generate answers. <br>
Secondly, we gathered feedback on the generated answer by querying ChatGPT again and asked it to determine if the initial answer required any revision. <br>
Thirdly, if a revision was necessary, we passed the instruction, initial answer, and feedback pair to ChatGPT to generate a revised answer and its feedback pair.
We repeated the process until we received feedback that required no further revision or until we hit the maximum number of iterations. However, due to the token limitation of the ChatGPT API, we had to truncate some instances that needed more than 4096 tokens while augmenting.<br>
You can see the details with command [here](data_augmentation/README.md).<br>
*We provide the whole dataset after collection and augmentation on Hugging Face ([code](data_collection/download_train.py)), so you can either use the code or follow our [data merging step](outputs/README.md) to replicate the training dataset. Feel free to use either of them!
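As a rough, non-authoritative sketch of the three-step loop described above (the prompts, the stop-phrase check, and the iteration limit are illustrative assumptions, not the project's actual prompts), the augmentation could look like this with the OpenAI ChatCompletion API:
```python
# Hypothetical sketch of the answer -> feedback -> revision loop
# using the OpenAI ChatCompletion API (openai<1.0 style).
import openai

def chat(prompt):
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

def augment(instruction, max_iterations=3):
    answer = chat(f"Instruction: {instruction}\nAnswer the instruction.")
    chain = [("answer", answer)]
    for _ in range(max_iterations):
        feedback = chat(
            f"Instruction: {instruction}\nAnswer: {answer}\n"
            "Give feedback on the answer and state whether a revision is needed."
        )
        chain.append(("feedback", feedback))
        if "Revision is not needed" in feedback:
            break
        answer = chat(
            f"Instruction: {instruction}\nAnswer: {answer}\nFeedback: {feedback}\n"
            "Revise the answer according to the feedback."
        )
        chain.append(("revision", answer))
    return chain
```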
## Training
We utilize <a href="https://github.com/lm-sys/FastChat">FastChat</a> to train the model. Given the instruction, we fine-tune the model to generate the answer and feedback chain (including the revisions).<br>
To reproduce the training procedure, here are the steps. <br>
```
pip install -r requirements.txt
```
```
torchrun --nproc_per_node=4 train/train_mem.py \
--model_name_or_path llama-7b \
--data_path outputs/feedback_gpt_3.5_turbo_merged_whole.json \
--bf16 True \
--output_dir ckpt/selfee-7b \
--num_train_epochs 3 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--gradient_accumulation_steps 2 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 5000 \
--save_total_limit 1 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--fsdp "shard_grad_op auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True \
--training_objective full \
```
The hyperparameters are as follows, following Vicuna and Alpaca.
| Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay |
| --- | ---: | ---: | ---: | ---: | ---: |
| SelFee (7B, 13B) | 128 | 2e-5 | 3 | 2048 | 0 |
## Inference
<b>Restoring checkpoint using diff</b><br>
We provide the diff weights and code that can restore the same model as SelFee. To restore the original SelFee weights, you first need to convert Meta's original LLaMA checkpoint into Hugging Face format on your local machine. Once you are done, you can restore the same checkpoint of our model by using the following command:
```
python inference/apply_delta.py --path_raw {path_to_llama_7b} --path_tuned /ckpt/selfee-7b --path_diff kaist-ai/selfee-7b-delta
```
<b>Autonomous Inference Mode</b><br>
Because SelFee is trained to generate iterative feedback and revisions until the response is satisfying, it automatically generates iterative feedback and revisions in a single forward pass. The model autonomously decides when to stop generating revisions based on the feedback. If the feedback chain ends with a sequence like `Revision is not needed.`, the model autonomously terminates generation. <br>
For autonomous inference mode,
```
python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_autonomous.jsonl"
```
<b>Revision Enforce Inference Mode</b><br>
We observed that increasing the minimum number of required revisions leads to a corresponding increase in performance. To enforce revisions, we automatically replace sequences such as `Revision is not needed.` with `Revision is needed.` during self-feedback generation. Because SelFee is trained to generate `Revision {index}:` after the sequence `Revision is needed.`, the model will continually revise the answer.
For revision enforce inference mode, use the `max-num-revision` argument.
```
python inference/inference.py --model-path "ckpt/selfee-7b" --model-id "selfee" --question-file "evaluation/template/question.jsonl" --answer-file "evaluation/answer/selfee_7b_enforce_3_revision.jsonl" --max-num-revision 3
```
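A rough sketch of how this enforcement could be implemented on top of plain `transformers` is shown below. This is not the repo's inference code; the checkpoint path, sampling settings, and loop structure are assumptions.
```python
# Hypothetical sketch: whenever the model declares "Revision is not needed."
# before the requested number of forced revisions, rewrite the phrase and
# continue generating from the modified text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "ckpt/selfee-7b"  # assumed local checkpoint, as in the commands above
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map='auto')

def continue_text(text, max_new_tokens=512):
    input_ids = tokenizer(text, return_tensors='pt').input_ids.to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens,
                            do_sample=True, temperature=0.7)
    return tokenizer.decode(output[0], skip_special_tokens=True)

def generate_with_forced_revisions(prompt, max_num_revision=3):
    text = continue_text(prompt)
    forced = 0
    while "Revision is not needed." in text and forced < max_num_revision:
        # Force another revision by rewriting the termination phrase.
        text = text.replace("Revision is not needed.", "Revision is needed.", 1)
        forced += 1
        text = continue_text(text)
    return text

print(generate_with_forced_revisions("Tell me about AI"))
```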
## Evaluation
Following the evaluation setting of Vicuna, we evaluate on 80 diverse queries and use the GPT-4 language model as the evaluator, scoring a model's response relative to ChatGPT's response. One difference from the Vicuna evaluation is that, due to the positional bias of GPT-4, we employ a bidirectional evaluation setting. This means that each evaluation instance is evaluated twice, once in each position.<br>
We release the inference result of SelFee in the folder of `evaluation/answer` and also the scores generated by GPT-4 in the folder of `evaluation/review`. <br>
### GPT-4 Automatic Evaluation
First, you need to get your API key to get access to the GPT-4 API.
```
export OPENAI_API_KEYS={personal_key}
```
To compare the performance of a generation result (for example, located at `evaluation/answer/file_A.jsonl`) with another generation result (located at `evaluation/answer/file_B.jsonl`), run:
```
python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_A.jsonl evaluation/answer/file_B.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/A_vs_B.jsonl
```
To mitigate the positional bias of the GPT-4 model, we apply a bidirectional evaluation setting. Therefore, automatic evaluation with the opposite position is also needed:
```
python evaluation/gpt4_automatic_evaluation.py -q evaluation/template/question.jsonl -a evaluation/answer/file_B.jsonl evaluation/answer/file_A.jsonl -p evaluation/template/prompt.jsonl -r evaluation/template/reviewer.jsonl -o evaluation/review/B_vs_A.jsonl
```
## Limitations
Similar to other LLaMA-finetuned models, SelFee also makes some mistakes, especially on math, reasoning, factuality, and coding tasks. Although SelFee outperforms ChatGPT in the Vicuna evaluation setting, that setting has some limitations in terms of comprehensiveness (limited to 80 queries), consistency, and reliability. Therefore, further research for a better evaluation setting is needed. Please take these claims with a grain of salt.
## Online demo
Check out the <a href="https://kaistai.github.io/SelFee/demo">demo</a>!
#### How to launch the demo yourself
To serve the web demo yourself, run the following commands:
1. Run the controller
```
python3 -m serve.controller
```
2. Run the model worker
```
python3 -m serve.model_worker --model-path $MODEL_PATH --port 21002 --worker-address=http://localhost:21002 --model-name=SelFee-13b
```
3. Run the web server
```
python3 -m serve.gradio_web_server --share
```
You can find the serving code [here](serve).
### Team members
<a href="https://seonghyeonye.github.io/">Seonghyeon Ye*</a>, <a href="https://github.com/dreamgonfly">Yongrae Jo*</a>, <a href="https://github.com/doeyoungkim">Doyoung Kim*</a>, <a href="https://scholar.google.com/citations?user=xKrSnDoAAAAJ&hl">Sungdong Kim</a>, <a href="https://github.com/hbin0701">Hyeonbin Hwang</a>, and <a href="https://seominjoon.github.io/">Minjoon Seo</a>. <br/>
(* denotes equal contribution)
### Release
We have released the SelFee-7B and SelFee-13B model diff weights, which can be found with instructions here. Moreover, the training instances used to train SelFee are released on Hugging Face.
### License
The research preview online demo is only for non-commercial use and is subject to various licenses and terms of use, including the LLaMA model <a href="https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md">License</a>, OpenAI's <a href="https://openai.com/policies/terms-of-use">Terms of Use</a> for the generated data, and ShareGPT's <a href="https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb">Privacy Practices</a>. If you suspect any violations, please reach out to us.
### Citation
Please cite if you use the data or code in this repo.
```
@misc{selfee2023,
author = {Ye, Seonghyeon and Jo, Yongrae and Kim, Doyoung and Kim, Sungdong and Hwang, Hyeonbin and Seo, Minjoon},
title = {SelFee: Iterative Self-Revising LLM Empowered by Self-Feedback Generation},
url = {https://kaistai.github.io/SelFee/},
month = {May},
year = {2023},
howpublished = {Blog post}
}
```
|
TheBloke/Robin-7B-v2-SuperHOT-8K-fp16
|
TheBloke
| 2023-07-07T12:00:39Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-06T16:48:21Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 7B v2 fp16
These are fp16 pytorch format model files for [OptimalScale's Robin 7B v2](https://huggingface.co/TheBloke/robin-7B-v2-fp16) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Robin-7B-v2-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Robin-7B-v2-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Robin-7B-v2-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OptimalScale/robin-7b-v2-delta)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install einops
```
Then run the following code. `config.json` has been defaulted to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "TheBloke/Robin-7B-v2-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check to confirm that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf`, then add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: OptimalScale's Robin 7B v2
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# OptimalScale's Robin 7B v2 fp16
These files are pytorch format fp16 model files for [OptimalScale's Robin 7B v2](https://huggingface.co/OptimalScale/robin-7b-v2-delta).
It is the result of merging and/or converting the source repository to float16.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-7B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-7B-v2-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-7B-v2-fp16)
## Prompt template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions
###Human: prompt
###Assistant:
```
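As a usage sketch (an assumption, not an official example), the template can be filled in and passed to the fp16 model with `transformers`:
```python
# Hypothetical usage sketch of the Robin v2 prompt template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/robin-7B-v2-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto')

system = ("A chat between a curious human and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the human's questions")
prompt = f"{system}\n###Human: Tell me about AI\n###Assistant:"

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```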
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: OptimalScale's Robin 7B v2
No model card provided in source repository.
|
TheBloke/Guanaco-7B-SuperHOT-8K-fp16
|
TheBloke
| 2023-07-07T12:00:37Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-06T15:48:47Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Tim Dettmers' Guanaco 7B fp16
These are fp16 pytorch format model files for [Tim Dettmers' Guanaco 7B](https://huggingface.co/TheBloke/guanaco-7B-HF) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Guanaco-7B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Guanaco-7B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Guanaco-7B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/timdettmers/guanaco-7b)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install einops
```
Then run the following code. `config.json` has been defaulted to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "TheBloke/Guanaco-7B-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check to confirm that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf`, then add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Tim Dettmers' Guanaco 7B
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Tim Dettmers' Guanaco 7B fp16 HF
These files are fp16 HF model files for [Tim Dettmers' Guanaco 7B](https://huggingface.co/timdettmers/guanaco-7b).
It is the result of merging the LoRA then saving in HF fp16 format.
## Other repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/guanaco-7B-GPTQ)
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/guanaco-7B-GGML)
* [Merged, unquantised fp16 model in HF format](https://huggingface.co/TheBloke/guanaco-7B-HF)
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card
Not provided by original model creator.
|
TheBloke/Baize-v2-7B-SuperHOT-8K-fp16
|
TheBloke
| 2023-07-07T12:00:37Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"arxiv:2304.01196",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-06T15:35:11Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Project Baize's Baize 7B v2 fp16
These are fp16 pytorch format model files for [Project Baize's Baize 7B v2](https://huggingface.co/project-baize/baize-v2-7b) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 7b LoRA](https://huggingface.co/kaiokendev/superhot-7b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Baize-v2-7B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Baize-v2-7B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Baize-v2-7B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/project-baize/baize-v2-7b)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install einops
```
Then run the following code. `config.json` has been defaulted to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "TheBloke/Baize-v2-7B-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check to confirm that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, an NSFW-focused LoRA, this time 7B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
#### Looking for Merged & Quantized Models?
Make some please :)
#### Using the monkey-patch?
You will **NEED** to **apply the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
The monkeypatch is only necessary if you are using a front-end/back-end that does not already support scaling and said front-end/back-end is Python-based (i.e. Huggingface Transformers). To apply the patch, you will need to copy the `llama_rope_scaled_monkey_patch.py` into your working directory and call the exported function `replace_llama_rope_with_scaled_rope` at the very start of your Python program. It will modify the Transformers library's implementation of RoPE to properly apply the scaling factor.
#### Using Oobabooga with Exllama?
Switch your loader to `exllama` or `exllama_hf`, then add the arguments `max_seq_len 8192` and `compress_pos_emb 4`. **While the model may work well with `compress_pos_emb 2`, it was trained on 4, so that is what I advocate for you to use.**
Example in the command-line:
- `python server.py --max_seq_len 8192 --compress_pos_emb 4 --loader exllama_hf`
In the UI, you will see the loader option in the `Models` tab. Once you select either `exllama` or `exllama_hf`, the `max_seq_len` and `compress_pos_emb` settings will appear.
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
- Cutoff length: 4096
# Original model card: Project Baize's Baize 7B v2
<p align="center">
<img width="500px" alt="Project Baize" src="https://user-images.githubusercontent.com/22514219/229195563-0cddfa74-e52f-4413-b4b4-e4ba489c4b3d.png">
</p>
<hr>
## ⚠️Warning
Using Baize checkpoints directly without the following format will not work.
```
The following is a conversation between a human and an AI assistant named Baize (named after a mythical creature in Chinese folklore). Baize is an open-source AI assistant developed by UCSD and Sun Yat-Sen University. The human and the AI assistant take turns chatting. Human statements start with [|Human|] and AI assistant statements start with [|AI|]. The AI assistant always provides responses in as much detail as possible, and in Markdown format. The AI assistant always declines to engage with topics, questions and instructions related to unethical, controversial, or sensitive issues. Complete the transcript in exactly that format.\n[|Human|]Hello!\n[|AI|]Hi!
```
`[|Human|]` and `[|AI|]` are required to mark the messages from the user and Baize. We recommend checking out our [GitHub](https://github.com/project-baize/baize) to find the best way to use Baize with our demo or Fastchat.
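As a rough illustration (an assumption, not an official example), the format above can be assembled and passed to the merged checkpoint with `transformers`:
```python
# Hypothetical usage sketch of the required Baize prompt format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "project-baize/baize-v2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto')

system = (
    "The following is a conversation between a human and an AI assistant named Baize "
    "(named after a mythical creature in Chinese folklore). Baize is an open-source AI "
    "assistant developed by UCSD and Sun Yat-Sen University. The human and the AI "
    "assistant take turns chatting. Human statements start with [|Human|] and AI "
    "assistant statements start with [|AI|]. The AI assistant always provides responses "
    "in as much detail as possible, and in Markdown format. The AI assistant always "
    "declines to engage with topics, questions and instructions related to unethical, "
    "controversial, or sensitive issues. Complete the transcript in exactly that format."
)
prompt = f"{system}\n[|Human|]Hello!\n[|AI|]"

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```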
## Demo
https://huggingface.co/spaces/project-baize/chat-with-baize
## What's Baize?
Baize is an open-source chat model fine-tuned with [LoRA](https://github.com/microsoft/LoRA). This model is a **7B Baize-v2**, trained with supervised fine-tuning (SFT) and self-distillation with feedback (SDF). This checkpoint has been merged with LLaMA so it's ready for use.
## Why it's called Baize?
Baize (白泽) is a mythical creature in Chinese folklore, who speaks human languages and knows everything. This is exactly what we expect from a chat model.
## How to use it: local demo, API and SDK
More details can be found in the Baize [GitHub](https://github.com/project-baize/baize) and [Paper](https://arxiv.org/abs/2304.01196).
|