| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
Trelis/TinyLlama-1.1B-Chat-v0.3-AWQ
|
Trelis
| 2023-10-03T15:40:25Z | 100 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"awq",
"tinyllama",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-03T15:26:36Z |
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
language:
- en
tags:
- awq
- tinyllama
---
# AWQ version of TinyLlama at 1 trillion tokens
The original model card follows below.
# TinyLlama-1.1B
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is the chat model finetuned on top of [PY007/TinyLlama-1.1B-intermediate-step-480k-1T](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-480k-1T).
The dataset used is [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25) following the [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format.
#### How to use
You will need transformers>=4.31.
Check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "PY007/TinyLlama-1.1B-Chat-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "How to get in a good university?"
formatted_prompt = (
    f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
)

sequences = pipeline(
    formatted_prompt,
    do_sample=True,
    top_k=50,
    top_p=0.9,
    num_return_sequences=1,
    repetition_penalty=1.1,
    max_new_tokens=1024,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
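The snippet above targets the unquantized base chat model. As a rough sketch that is not part of the original card, the AWQ checkpoint in this repository can be loaded directly, assuming a transformers release with AWQ support (>= 4.35) and the `autoawq` package installed:
```python
# Hedged sketch: loading the AWQ-quantized checkpoint itself (assumes transformers>=4.35 + autoawq)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Trelis/TinyLlama-1.1B-Chat-v0.3-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "<|im_start|>user\nHow to get in a good university?<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```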
|
BetterThanNothing/EXo
|
BetterThanNothing
| 2023-10-03T15:36:51Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-03T15:19:39Z |
---
license: creativeml-openrail-m
---
|
nickapch/bert-base-uncased-finetuned-clinc_oos
|
nickapch
| 2023-10-03T15:34:40Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-02T14:00:38Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-clinc_oos
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value:
accuracy: 0.932258064516129
- name: F1
type: f1
value:
f1: 0.9301680056033511
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-clinc_oos
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7744
- Accuracy: 0.9323
- F1: 0.9302
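As a rough usage sketch that is not part of the original card, the checkpoint can be queried with the standard `transformers` pipeline API:
```python
from transformers import pipeline

# Load the fine-tuned intent classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="nickapch/bert-base-uncased-finetuned-clinc_oos",
)
# Hypothetical utterance; clinc_oos covers banking/travel-style intents
print(classifier("please transfer 100 dollars from checking to savings"))
```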
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 4.2947        | 1.0   | 954  | 2.1707          | 0.8313   | 0.8144 |
| 1.7379        | 2.0   | 1908 | 1.0298          | 0.9210   | 0.9177 |
| 0.8752        | 3.0   | 2862 | 0.7744          | 0.9323   | 0.9302 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
otrturn/xlm-roberta-base-finetuned-panx-de
|
otrturn
| 2023-10-03T15:30:25Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-03T15:07:07Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8593298179962578
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1379
- F1: 0.8593
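As a rough usage sketch that is not part of the original card, the fine-tuned checkpoint can be used for German NER via the token-classification pipeline:
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into entity spans
ner = pipeline(
    "token-classification",
    model="otrturn/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```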
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2758 | 1.0 | 525 | 0.1567 | 0.8284 |
| 0.1301 | 2.0 | 1050 | 0.1334 | 0.8534 |
| 0.081 | 3.0 | 1575 | 0.1379 | 0.8593 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
olesya2096/reports_gen
|
olesya2096
| 2023-10-03T15:25:09Z | 205 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-03T15:24:49Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: reports_gen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reports_gen
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
agustin228/pokemon_classification
|
agustin228
| 2023-10-03T15:14:13Z | 215 | 1 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:pokemon-classification",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-03T05:16:06Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- pokemon-classification
metrics:
- accuracy
model-index:
- name: pokemon_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: pokemon-classification
type: pokemon-classification
config: full
split: train[:4800]
args: full
metrics:
- name: Accuracy
type: accuracy
value: 0.8927083333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pokemon_classification
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pokemon-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7861
- Accuracy: 0.8927
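As a rough usage sketch that is not part of the original card, the classifier can be applied to an image with the image-classification pipeline (the file name below is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="agustin228/pokemon_classification",
)
# Placeholder path; a URL or PIL.Image also works
print(classifier("pikachu.png"))
```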
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 240 | 2.0497 | 0.7542 |
| No log | 2.0 | 480 | 0.9561 | 0.8760 |
| 2.3345 | 3.0 | 720 | 0.7754 | 0.8917 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
cbellew09/ppo-Huggy
|
cbellew09
| 2023-10-03T15:10:52Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-10-03T15:10:48Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: cbellew09/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
avojarot/distilhubert-finetuned-gtzan
|
avojarot
| 2023-10-03T14:43:50Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-10-03T14:43:43Z |
---
base_model: distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.99
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan-finetuned-gtzan
This model is a fine-tuned version of [distilhubert](https://huggingface.co/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0713
- Accuracy: 0.99
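As a rough usage sketch that is not part of the original card, the model can classify the genre of an audio clip with the audio-classification pipeline (the file name below is a placeholder; decoding requires ffmpeg or a similar backend):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="avojarot/distilhubert-finetuned-gtzan",
)
# Placeholder path to a local clip (e.g. a 30-second WAV file)
print(classifier("some_song.wav"))
```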
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0972 | 1.0 | 113 | 0.0982 | 0.99 |
| 0.0478 | 2.0 | 226 | 0.0713 | 0.99 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
IkariDev/Athena-v3-GGUF
|
IkariDev
| 2023-10-03T14:39:34Z | 33 | 1 | null |
[
"gguf",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-09-26T20:21:27Z |
---
license: cc-by-nc-4.0
---

Experimental Athena v3 model. Use Alpaca format. Suitable for RP, ERP and general stuff.
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains GGUF files of Athena-V3.
[GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v3-GGUF)
[GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v3-GPTQ)
<!-- [exl2 - by AzureBlack](https://huggingface.co/AzureBlack/Athena-v2-6.0bit-exl2) -->
[AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v3-AWQ)
[fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v3)
<!-- [GGUF - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v3-GGUF) -->
[OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v3-GGUF)
## Ratings:
Note: I have permission of all users to upload their ratings, i DONT screenshot random reviews without asking if i can put them here!
https://snombler.neocities.org/logs#athenav3
<!-- description end -->
<!-- description start -->
## Models and loras used
- Athena-v2
- migtissera/Synthia-13B-v1.2
- The-Face-Of-Goonery/Huginn-13b-FP16
- PygmalionAI/pygmalion-2-13b
- The-Face-Of-Goonery/LegerDemain-FP16
- chargoddard/storytime-13b
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
- zattio770/120-Days-of-LORA-v2-13B
```
Loras: [lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT(0.65) + zattio770/120-Days-of-LORA-v2-13B(0.35)](0.3) to the final model
+ [Athena-v2(0.70) + migtissera/Synthia-13B-v1.2(0.3)](0.5)
+ [The-Face-Of-Goonery/Huginn-13b-FP16(0.85) + PygmalionAI/pygmalion-2-13b(0.15)](0.40)
+ [The-Face-Of-Goonery/LegerDemain-FP16(0.3) + chargoddard/storytime-13b(0.7)](0.10)
```
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
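As a rough sketch that is not part of the original card, the GGUF files can be run locally with `llama-cpp-python`; the exact filename depends on which quantization you download and is an assumption here:
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Filename is an assumption -- use whichever .gguf quantization you downloaded from this repo
llm = Llama(model_path="Athena-v3.Q5_K_M.gguf", n_ctx=4096)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short fantasy scene introduction.\n\n### Response:\n"
)
output = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```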
HUGE thanks to [Undi95](https://huggingface.co/Undi95) for doing the merging (Recipe was my idea, he merged)
To TheBloke: please if you quant this, please include [IkariDev](https://huggingface.co/IkariDev) + [Undi95](https://huggingface.co/Undi95) in all the credits/links to the creator.
|
IkariDev/Athena-v3
|
IkariDev
| 2023-10-03T14:39:22Z | 1,586 | 13 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-26T18:59:34Z |
---
license: cc-by-nc-4.0
---

Experimental Athena v3 model. Use Alpaca format. Suitable for RP, ERP and general stuff.
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains fp16 files of Athena-V3.
[GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v3-GGUF)
[GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v3-GPTQ)
<!-- [exl2 - by AzureBlack](https://huggingface.co/AzureBlack/Athena-v2-6.0bit-exl2) -->
[AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v3-AWQ)
[fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v3)
<!-- [GGUF - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v3-GGUF) -->
[OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v3-GGUF)
## Ratings:
Note: I have permission of all users to upload their ratings, i DONT screenshot random reviews without asking if i can put them here!
https://snombler.neocities.org/logs#athenav3
<!-- description end -->
<!-- description start -->
## Models and loras used
- Athena-v2
- migtissera/Synthia-13B-v1.2
- The-Face-Of-Goonery/Huginn-13b-FP16
- PygmalionAI/pygmalion-2-13b
- The-Face-Of-Goonery/LegerDemain-FP16
- chargoddard/storytime-13b
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
- zattio770/120-Days-of-LORA-v2-13B
```
Loras: [lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT(0.65) + zattio770/120-Days-of-LORA-v2-13B(0.35)](0.3) to the final model
+ [Athena-v2(0.70) + migtissera/Synthia-13B-v1.2(0.3)](0.5)
+ [The-Face-Of-Goonery/Huginn-13b-FP16(0.85) + PygmalionAI/pygmalion-2-13b(0.15)](0.40)
+ [The-Face-Of-Goonery/LegerDemain-FP16(0.3) + chargoddard/storytime-13b(0.7)](0.10)
```
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
HUGE thanks to [Undi95](https://huggingface.co/Undi95) for doing the merging (Recipe was my idea, he merged)
To TheBloke: please if you quant this, please include [IkariDev](https://huggingface.co/IkariDev) + [Undi95](https://huggingface.co/Undi95) in all the credits/links to the creator.
|
sooh098/bert-finetuned-squad
|
sooh098
| 2023-10-03T14:30:24Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-03T12:08:40Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
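As a rough usage sketch that is not part of the original card, the checkpoint can answer questions over a context passage with the question-answering pipeline:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="sooh098/bert-finetuned-squad")
print(qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
))
```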
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
f4ken0name/ppo-LunarLander-v2
|
f4ken0name
| 2023-10-03T14:28:12Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T14:25:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 297.89 +/- 11.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
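A possible completion of the stub above (not part of the original card); the checkpoint filename is an assumption and should be checked against the repository contents:
```python
import gymnasium as gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub (filename is an assumption)
checkpoint = load_from_hub(
    repo_id="f4ken0name/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll the policy out in the environment
env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```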
|
abinayam/gpt-2-tamil
|
abinayam
| 2023-10-03T14:27:50Z | 498 | 12 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"ta",
"dataset:oscar",
"dataset:IndicNLP",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: ta
datasets:
- oscar
- IndicNLP
widget:
- text: 'ஒரு ஊரிலே ஒரு காக்கைக்கு'
---
# GPT2-Tamil
This repository was created as part of the Flax/Jax community week by Hugging Face. The aim of this project is to pretrain a language model using GPT-2 specifically for the Tamil language.
## Setup:
To set up the project, run the following command:
```bash
pip install -r requirements.txt
```
## Model:
Pretrained model on Tamil language using a causal language modeling (CLM) objective.
## Dataset Used:
The GPT-2 model is trained on the [oscar dataset - ta](https://huggingface.co/datasets/oscar) and the [IndicNLP dataset - ta](https://indicnlp.ai4bharat.org/corpora/).
## Intended uses & limitations:
You can use the raw model for text generation, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
## How to pretrain the model:
To perform training, follow these steps:
- Export the model directory (where you want to store the model artifacts like config, tokenizer, etc.):
```bash
export MODEL_DIR=<model_dir>
```
- Create the config.json by running the following command:
```bash
python src/create_config.py
```
- Create the tokenizer by running the following command:
```bash
python src/train_tokenizer.py
```
- Once the config and tokenizer are created, run the following script to start training the flax model:
```bash
bash scripts/train_gpt2-oscar-tamil.sh
```
## How to use:
To perform language generation using the model, the pipeline can be used directly.
- First, convert the flax model to pytorch using the following command:
```bash
python src/convert_flax_to_pytorch.py
```
- Use the following snippet to perform language generation:
```python
>>> from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, set_seed
>>> model_name = 'abinayam/gpt-2-tamil'
>>> model = AutoModelWithLMHead.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> set_seed(42)
>>> input_text = "ஒரு ஊரிலே ஒரு காக்கைக்கு"
>>> max_len = 300
>>> no_seq = 5
>>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
>>> sequence = generator(input_text, max_length=max_len, num_return_sequences=no_seq)
```
|
NebulaSense/ContractAssist
|
NebulaSense
| 2023-10-03T14:18:46Z | 0 | 3 |
transformers
|
[
"transformers",
"en",
"dataset:NebulaSense/Legal_Clause_Instructions",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-09-29T13:36:20Z |
---
language:
- en
library_name: transformers
license: cc-by-nc-4.0
datasets:
- NebulaSense/Legal_Clause_Instructions
---
# Model Card for ContractAssist model
<!-- Provide a quick summary of what the model is/does. [Optional] -->
Instruction-tuned FlanT5-XXL on legal clause data generated via ChatGPT. The model is capable of generating and/or modifying legal clauses.
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
- **Developed by:** Jaykumar Kasundra, Shreyans Dhankhar
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** other
- **Resources for more information:**
- [Associated Paper](<Add Link>) ----> Details to be added soon!!
# Uses
### Prompt
<details>
<summary> Click to expand </summary>
```python
{
"prompt_input": (
"Below is an instruction that describes a task, paired with an input that provides further context. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
),
"prompt_no_input": (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response:\n"
),
}
```
</details>
### Running the model on a GPU in 8bit
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate peft bitsandbytes
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel,PeftConfig
peft_model_id = 'NebulaSense/ContractAssist'
peft_config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(peft_config.base_model_name_or_path, device_map="auto",load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()
```
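A hedged generation sketch (not from the original card) that reuses the `model` and `tokenizer` loaded above with the prompt template from the previous section; the instruction text is only an illustration:
```python
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nDraft a confidentiality clause for an employment agreement.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```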
</details>
### Example
<details>
<summary> Click to expand </summary>
```text
### Instruction:
Create a legal exhibit for the list of confidential information for Non-Disclosure Agreement (NDA) in Employment industry
### Input:
### Response:
Exhibit A: List of Confidential Information
The following information is considered confidential and proprietary to the Company:
1. Trade secrets, including but not limited to, formulas, patterns, compilations, programs, devices, methods, techniques, and processes, and any other information that is not generally known to the public.
2. Customer lists, including personal information, financial information, and other information related to the customer's business.
3. Marketing and sales strategies, including customer preferences, buying habits, and sales forecasts.
4. Financial information, including budgets, forecasts, and financial statements.
5. Intellectual property, including patents, trademarks, copyrights, trade names, and service marks.
6. Any other information designated as confidential by the Company in writing.
The Employee agrees to maintain the confidentiality of all such information and not to disclose it to any third party without the prior written consent of the Company.
The employee further agrees not to use any such information for any purpose other than as necessary to perform their duties for the Company, except as required by law.
This Exhibited List of Information is incorporated into and made a part of the Non-Disclosure Agreement between the Company and the Employee.
```
</details>
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
The model can be used directly to generate or modify legal clauses and to assist in drafting contracts. It likely works best on English-language text.
## Compute Infrastructure
Amazon SageMaker Training Job.
### Hardware
1 x 24GB NVIDIA A10G
### Software
Transformers, PEFT, BitsandBytes
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:** ---> Details to be added soon!!
# Model Card Authors
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
Jaykumar Kasundra, Shreyans Dhankhar
|
Faradaylab/ARIA-7B-V3-mistral-french
|
Faradaylab
| 2023-10-03T14:12:53Z | 3 | 4 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-02T17:30:16Z |
---
library_name: peft
---
## Training procedure
We decided to release an ARIA 7B model trained with Mistral 7B Instruct as the base model. We addressed the language challenge with a dataset focused on the French language.
The finetuning has been done with Nvidia GPUs.
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
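The listed configuration corresponds to standard 4-bit NF4 loading. The sketch below (not part of the original card) shows one way to load the adapter on top of the base model; the exact base-model repo id is an assumption, since the card only names "Mistral 7B Instruct":
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed; the card only says "Mistral 7B Instruct"
adapter_id = "Faradaylab/ARIA-7B-V3-mistral-french"

# Mirror the bitsandbytes settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```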
### Framework versions
- PEFT 0.6.0.dev0
|
djtar/LunarLander_de
|
djtar
| 2023-10-03T14:07:44Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-02T11:57:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 285.28 +/- 25.13
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Hwilner/a2c-PandaReachDense-v3
|
Hwilner
| 2023-10-03T14:06:27Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T13:59:38Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Bjqrn/q-taxi-v3
|
Bjqrn
| 2023-10-03T14:05:10Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T14:05:08Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Bjqrn/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Bjqrn/q-FrozenLake-v1-4x4-noSlippery
|
Bjqrn
| 2023-10-03T14:02:21Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T14:02:18Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Bjqrn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
casque/mistoonSapphire_v20
|
casque
| 2023-10-03T14:00:59Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-03T13:21:09Z |
---
license: creativeml-openrail-m
---
|
Jun-Wu/test_peft_model
|
Jun-Wu
| 2023-10-03T13:42:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-03T13:42:36Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
alphakavi22772023/bloom_tc_gen_01
|
alphakavi22772023
| 2023-10-03T13:31:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-03T13:31:06Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
faldeus0092/tyre-classification-efficientnetb7
|
faldeus0092
| 2023-10-03T13:18:41Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-10-03T12:46:41Z |
---
license: apache-2.0
metrics:
- accuracy
---
# image_classification
(this model was not trained using Trainer API)
This model is a fine-tuned version of [EfficientNetB7](https://github.com/lukemelas/EfficientNet-PyTorch) on the [Tyre-Quality-Classification](https://www.kaggle.com/datasets/warcoder/tyre-quality-classification/code) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2341
- Accuracy: 91.9355%
## Intended uses & limitations
Can be used for quality control to identify the condition of tyres
## Training and evaluation data
Data can be seen at [Weights and Biases](https://wandb.ai/faldeus0092/efficientnetb7_tyrequality_classifier/runs/1z5mnxps/overview?workspace=user-faldeus0092)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- train_set: 1434
- test_set: 372
- optimizer: SGD with momentum = 0.9
- num_epochs: 5
### Example usage
```py
from efficientnet_pytorch import EfficientNet
import torch
import torchvision.transforms as transforms
from PIL import Image
import matplotlib.pyplot as plt

# annotations_map / id2label: the class-index-to-label mappings defined during training
model = EfficientNet.from_name('efficientnet-b7')
model._fc = torch.nn.Linear(in_features=model._fc.in_features, out_features=len(annotations_map), bias=True)
model.load_state_dict(torch.load('/content/efficientnetb7_tyrequality_classifier.pth'))
model.eval()

img = Image.open('/content/defective-tires-cause-accidents-min.jpg')
test_transform = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],
                         [0.229, 0.224, 0.225])
])
input_data = test_transform(img).unsqueeze(0)

with torch.no_grad():
    output = model(input_data)
    _, predicted_class = torch.max(output, 1)
    probs = torch.nn.functional.softmax(output, dim=1)
    conf, _ = torch.max(probs, 1)

print('Predicted Class:', predicted_class.item())
print('Predicted Label:', id2label[predicted_class.item()])
print(f'Confidence: {conf.item()*100}%')

plt.title(id2label[predicted_class.item()])
plt.axis("off")
plt.imshow(img)
plt.show()
```
|
sayan1101/llama-2-13b-subject-no-overflow
|
sayan1101
| 2023-10-03T13:15:34Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-03T13:09:12Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
Tommert25/robbert0210_lrate5b32
|
Tommert25
| 2023-10-03T13:12:22Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:pdelobelle/robbert-v2-dutch-base",
"base_model:finetune:pdelobelle/robbert-v2-dutch-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-03T12:50:30Z |
---
license: mit
base_model: pdelobelle/robbert-v2-dutch-base
tags:
- generated_from_trainer
metrics:
- recall
- accuracy
model-index:
- name: robbert0210_lrate5b32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robbert0210_lrate5b32
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3497
- Precisions: 0.8168
- Recall: 0.7629
- F-measure: 0.7745
- Accuracy: 0.9044
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precisions | Recall | F-measure | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:----------:|:------:|:---------:|:--------:|
| No log | 1.0 | 118 | 0.4100 | 0.8801 | 0.6728 | 0.6931 | 0.8747 |
| No log | 2.0 | 236 | 0.3638 | 0.7841 | 0.7186 | 0.7176 | 0.8871 |
| No log | 3.0 | 354 | 0.3533 | 0.8013 | 0.7568 | 0.7535 | 0.8967 |
| No log | 4.0 | 472 | 0.3497 | 0.8168 | 0.7629 | 0.7745 | 0.9044 |
| 0.3409 | 5.0 | 590 | 0.3781 | 0.7928 | 0.7789 | 0.7814 | 0.9046 |
| 0.3409 | 6.0 | 708 | 0.4072 | 0.8013 | 0.7836 | 0.7884 | 0.9073 |
| 0.3409 | 7.0 | 826 | 0.4193 | 0.8047 | 0.8026 | 0.8012 | 0.9082 |
| 0.3409 | 8.0 | 944 | 0.4197 | 0.8121 | 0.8021 | 0.8049 | 0.9103 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
BenjaminKUL/new_model
|
BenjaminKUL
| 2023-10-03T12:48:14Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"lilt",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-10T09:58:47Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: new_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# new_model
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0200
- Answer: {'precision': 0.16666666666666666, 'recall': 0.16666666666666666, 'f1': 0.16666666666666666, 'number': 6}
- Header: {'precision': 0.1111111111111111, 'recall': 0.2, 'f1': 0.14285714285714285, 'number': 10}
- Question: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9}
- Overall Precision: 0.0769
- Overall Recall: 0.12
- Overall F1: 0.0938
- Overall Accuracy: 0.7246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.1674 | 3.08 | 200 | 0.0152 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9} | 0.0 | 0.0 | 0.0 | 0.6087 |
| 0.0579 | 6.15 | 400 | 0.0141 | {'precision': 0.2222222222222222, 'recall': 0.3333333333333333, 'f1': 0.26666666666666666, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10} | {'precision': 0.08333333333333333, 'recall': 0.1111111111111111, 'f1': 0.09523809523809525, 'number': 9} | 0.1071 | 0.12 | 0.1132 | 0.6522 |
| 0.0271 | 9.23 | 600 | 0.0121 | {'precision': 0.3333333333333333, 'recall': 0.16666666666666666, 'f1': 0.2222222222222222, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10} | {'precision': 0.07142857142857142, 'recall': 0.1111111111111111, 'f1': 0.08695652173913043, 'number': 9} | 0.0645 | 0.08 | 0.0714 | 0.6957 |
| 0.0122 | 12.31 | 800 | 0.0115 | {'precision': 0.16666666666666666, 'recall': 0.16666666666666666, 'f1': 0.16666666666666666, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10} | {'precision': 0.06666666666666667, 'recall': 0.1111111111111111, 'f1': 0.08333333333333334, 'number': 9} | 0.0513 | 0.08 | 0.0625 | 0.7391 |
| 0.0073 | 15.38 | 1000 | 0.0224 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10} | {'precision': 0.1111111111111111, 'recall': 0.1111111111111111, 'f1': 0.1111111111111111, 'number': 9} | 0.0526 | 0.04 | 0.0455 | 0.6739 |
| 0.0044 | 18.46 | 1200 | 0.0165 | {'precision': 0.25, 'recall': 0.16666666666666666, 'f1': 0.2, 'number': 6} | {'precision': 0.14285714285714285, 'recall': 0.2, 'f1': 0.16666666666666666, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9} | 0.1 | 0.12 | 0.1091 | 0.7246 |
| 0.0024 | 21.54 | 1400 | 0.0170 | {'precision': 0.2, 'recall': 0.16666666666666666, 'f1': 0.1818181818181818, 'number': 6} | {'precision': 0.1111111111111111, 'recall': 0.2, 'f1': 0.14285714285714285, 'number': 10} | {'precision': 0.058823529411764705, 'recall': 0.1111111111111111, 'f1': 0.07692307692307691, 'number': 9} | 0.1 | 0.16 | 0.1231 | 0.7319 |
| 0.001 | 24.62 | 1600 | 0.0190 | {'precision': 0.4, 'recall': 0.3333333333333333, 'f1': 0.3636363636363636, 'number': 6} | {'precision': 0.13333333333333333, 'recall': 0.2, 'f1': 0.16, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9} | 0.1212 | 0.16 | 0.1379 | 0.7536 |
| 0.0009 | 27.69 | 1800 | 0.0203 | {'precision': 0.16666666666666666, 'recall': 0.16666666666666666, 'f1': 0.16666666666666666, 'number': 6} | {'precision': 0.16666666666666666, 'recall': 0.3, 'f1': 0.21428571428571427, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9} | 0.1026 | 0.16 | 0.125 | 0.7101 |
| 0.0006 | 30.77 | 2000 | 0.0210 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.05555555555555555, 'recall': 0.1, 'f1': 0.07142857142857142, 'number': 10} | {'precision': 0.0625, 'recall': 0.1111111111111111, 'f1': 0.08, 'number': 9} | 0.0526 | 0.08 | 0.0635 | 0.7174 |
| 0.0005 | 33.85 | 2200 | 0.0194 | {'precision': 0.16666666666666666, 'recall': 0.16666666666666666, 'f1': 0.16666666666666666, 'number': 6} | {'precision': 0.1111111111111111, 'recall': 0.2, 'f1': 0.14285714285714285, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9} | 0.0769 | 0.12 | 0.0938 | 0.7174 |
| 0.0003 | 36.92 | 2400 | 0.0200 | {'precision': 0.16666666666666666, 'recall': 0.16666666666666666, 'f1': 0.16666666666666666, 'number': 6} | {'precision': 0.1111111111111111, 'recall': 0.2, 'f1': 0.14285714285714285, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9} | 0.0769 | 0.12 | 0.0938 | 0.7246 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.1.0.dev20230810
- Datasets 2.14.4
- Tokenizers 0.11.0
|
hyeju/sd-emoji-model-lora
|
hyeju
| 2023-10-03T12:45:37Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-10-02T10:09:28Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - hyeju/sd-emoji-model-lora
These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the valhalla/emoji-dataset dataset. You can find some example images in the following.




|
Plurigrid/meso
|
Plurigrid
| 2023-10-03T12:42:37Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-10-01T13:00:32Z |
---
license: other
license_name: anarchy
license_link: >-
https://gist.githubusercontent.com/bmorphism/38631411577cf84eab7e1d2b3d7b180b/raw/d180b1d0cc40f518a120c875d53f6552807b8c43/n-1.md
---
|
Tommert25/robbert0210_lrate5b8
|
Tommert25
| 2023-10-03T12:41:33Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:pdelobelle/robbert-v2-dutch-base",
"base_model:finetune:pdelobelle/robbert-v2-dutch-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-03T12:18:05Z |
---
license: mit
base_model: pdelobelle/robbert-v2-dutch-base
tags:
- generated_from_trainer
metrics:
- recall
- accuracy
model-index:
- name: robbert0210_lrate5b8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robbert0210_lrate5b8
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3595
- Precisions: 0.8049
- Recall: 0.7503
- F-measure: 0.7661
- Accuracy: 0.8987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precisions | Recall | F-measure | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:----------:|:------:|:---------:|:--------:|
| No log | 1.0 | 471 | 0.4152 | 0.8420 | 0.6873 | 0.6895 | 0.8762 |
| 0.6244 | 2.0 | 942 | 0.3595 | 0.8049 | 0.7503 | 0.7661 | 0.8987 |
| 0.3086 | 3.0 | 1413 | 0.3926 | 0.8113 | 0.7757 | 0.7838 | 0.9110 |
| 0.1676 | 4.0 | 1884 | 0.4826 | 0.7805 | 0.7526 | 0.7628 | 0.9025 |
| 0.1044 | 5.0 | 2355 | 0.5530 | 0.8001 | 0.7627 | 0.7769 | 0.9028 |
| 0.0571 | 6.0 | 2826 | 0.5910 | 0.7945 | 0.7661 | 0.7765 | 0.9103 |
| 0.0321 | 7.0 | 3297 | 0.6168 | 0.8251 | 0.7733 | 0.7899 | 0.9124 |
| 0.0234 | 8.0 | 3768 | 0.6218 | 0.8108 | 0.7748 | 0.7881 | 0.9116 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ranajithore/stable-diffusion-v2-1-trained-for-plant-cell-structure-diagram-without-captions-new
|
ranajithore
| 2023-10-03T12:41:18Z | 27 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-03T12:35:20Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### stable-diffusion-v2.1-trained-for-plant-cell-structure-diagram-without-captions-new Dreambooth model trained by ranajithore with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
s3nh/hunkim-NousResearch-Llama-2-7b-hf-ko-7-koalpaca-v1.1a-kopen-platypus-GGUF
|
s3nh
| 2023-10-03T12:13:47Z | 99 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-03T12:07:30Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/hunkim/NousResearch-Llama-2-7b-hf-ko-7-koalpaca-v1.1a-kopen-platypus).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|-------|---------|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
| 7B  | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
# Original model card
|
MerziaAdamjee/codellama2-finetuned-spiderdata-copy
|
MerziaAdamjee
| 2023-10-03T12:13:21Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-03T10:19:33Z |
---
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- generated_from_trainer
model-index:
- name: codellama2-finetuned-spiderdata-copy
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codellama2-finetuned-spiderdata-copy
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0
|
satyanshu404/bart-large-mnli-Kaggle-Science-LLM-finetuned
|
satyanshu404
| 2023-10-03T12:01:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text-classification",
"generated_from_trainer",
"base_model:facebook/bart-large-mnli",
"base_model:finetune:facebook/bart-large-mnli",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-03T09:11:38Z |
---
license: mit
base_model: facebook/bart-large-mnli
tags:
- generated_from_trainer
model-index:
- name: bart-large-mnli-Kaggle-Science-LLM-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-mnli-Kaggle-Science-LLM-finetuned
This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7865 | 1.0 | 800 | 1.1187 |
| 0.6785 | 2.0 | 1600 | 1.2005 |
| 0.774 | 3.0 | 2400 | 1.1685 |
| 0.4621 | 4.0 | 3200 | 1.3130 |
| 0.4138 | 5.0 | 4000 | 2.2119 |
| 0.3162 | 6.0 | 4800 | 2.0261 |
| 0.2778 | 7.0 | 5600 | 1.9403 |
| 0.2476 | 8.0 | 6400 | 2.5232 |
| 0.1718 | 9.0 | 7200 | 2.6737 |
| 0.0869 | 10.0 | 8000 | 2.7109 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Prajwal777/taxi-v3
|
Prajwal777
| 2023-10-03T11:52:27Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T11:52:26Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Prajwal777/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
shettymanya/Taxi-v3
|
shettymanya
| 2023-10-03T11:51:02Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T11:51:00Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="shettymanya/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
vishwam-101/q-FrozenLake-v1-4x4-noSlippery
|
vishwam-101
| 2023-10-03T11:49:44Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T11:40:01Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="vivas-1001/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
amtsal/image_classification
|
amtsal
| 2023-10-03T11:49:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-17T14:03:40Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.55625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3283
- Accuracy: 0.5563
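For quick testing, the checkpoint can be loaded with the image-classification pipeline; this is a minimal sketch, and the image path below is a placeholder:
```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint from this repository.
classifier = pipeline("image-classification", model="amtsal/image_classification")

# Classify a local image (the file name is a placeholder).
for prediction in classifier("example.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```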
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 1.4437 | 0.4813 |
| No log | 2.0 | 80 | 1.3919 | 0.4813 |
| No log | 3.0 | 120 | 1.3595 | 0.5125 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
AdityaHR/Taxi_v3
|
AdityaHR
| 2023-10-03T11:47:14Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T11:47:13Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="AdityaHR/Taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
aloobun/llama2-7b-guanaco-GGUF
|
aloobun
| 2023-10-03T11:47:07Z | 4 | 0 |
transformers
|
[
"transformers",
"llama",
"llama2",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2023-10-01T18:40:56Z |
---
license: llama2
tags:
- llama2
---
|
sd-cv/Wk1As1Sd-Cv
|
sd-cv
| 2023-10-03T11:46:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-03T11:46:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
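As a rough sketch, the values listed above correspond to a `BitsAndBytesConfig` along these lines (an illustration, not code taken from the training run):
```python
import torch
from transformers import BitsAndBytesConfig

# Sketch of the 8-bit quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```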
### Framework versions
- PEFT 0.6.0.dev0
|
Tommert25/robbert0210_lrate2.5b16
|
Tommert25
| 2023-10-03T11:44:09Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:pdelobelle/robbert-v2-dutch-base",
"base_model:finetune:pdelobelle/robbert-v2-dutch-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-10-03T11:22:56Z |
---
license: mit
base_model: pdelobelle/robbert-v2-dutch-base
tags:
- generated_from_trainer
metrics:
- recall
- accuracy
model-index:
- name: robbert0210_lrate2.5b16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robbert0210_lrate2.5b16
This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3449
- Precisions: 0.7846
- Recall: 0.7358
- F-measure: 0.7356
- Accuracy: 0.8988
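A minimal usage sketch with the token-classification pipeline follows; the label set of this checkpoint is not documented here, and the example sentence is a placeholder:
```python
from transformers import pipeline

# Load the fine-tuned RobBERT token-classification checkpoint from this repository.
tagger = pipeline(
    "token-classification",
    model="Tommert25/robbert0210_lrate2.5b16",
    aggregation_strategy="simple",  # merge sub-word pieces into whole-word predictions
)

# Placeholder Dutch sentence.
print(tagger("Dit is een voorbeeldzin in het Nederlands."))
```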
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precisions | Recall | F-measure | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:----------:|:------:|:---------:|:--------:|
| No log | 1.0 | 236 | 0.4364 | 0.8256 | 0.6672 | 0.6709 | 0.8658 |
| No log | 2.0 | 472 | 0.3745 | 0.6875 | 0.7116 | 0.6970 | 0.8839 |
| 0.5514 | 3.0 | 708 | 0.3449 | 0.7846 | 0.7358 | 0.7356 | 0.8988 |
| 0.5514 | 4.0 | 944 | 0.3625 | 0.8042 | 0.7487 | 0.7552 | 0.9000 |
| 0.2255 | 5.0 | 1180 | 0.3987 | 0.8037 | 0.7541 | 0.7618 | 0.9000 |
| 0.2255 | 6.0 | 1416 | 0.4315 | 0.8049 | 0.7549 | 0.7636 | 0.9010 |
| 0.1211 | 7.0 | 1652 | 0.4060 | 0.8170 | 0.7633 | 0.7785 | 0.9034 |
| 0.1211 | 8.0 | 1888 | 0.4146 | 0.8162 | 0.7813 | 0.7927 | 0.9070 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
tdklab/hebert-finetuned-hebrew-metaphor
|
tdklab
| 2023-10-03T11:43:56Z | 54 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"he",
"dataset:tdklab/HebrewMetaphors",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-17T07:19:58Z |
---
language: he
datasets:
- tdklab/HebrewMetaphors
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hebert-finetuned-hebrew-metaphor
results: []
widget:
- text: "לבשל [SEP] שישי בבוקר זה זמן טוב כדי לבשל ארוחה יפה"
- text: "לטחון [SEP] להכנת קפה במקינטה יש לטחון את הקפה טחינה גסה יותר מאשר קפה לאספרסו"
- text: "לטחון [SEP] תעירו אותי כשיקרה עוד משהו מעניין, בינתיים אין מה לטחון את זה"
- text: "לבשל [SEP] השחקן השתמש ביכולותיו הפיזיות, הגובה והקפיצה שלו, כדי לבשל ולהבקיע שערים"
---
# hebert-finetuned-hebrew-metaphor
The model is fine-tuned to determine if a word in a sentence is used metaphorically or literally.
The model was trained for the following verbs:
לחלום, לחתוך, לעוף, לפרק, להדליק, לכבס, לכופף, לרסק, לבשל, למחוק, לקפוץ, לקרוע, לקצור, לרקוד, לשבור, לשדוד, לשתות, לטחון, לתפור, לזרוע
This model is a fine-tuned version of [avichr/heBERT](https://huggingface.co/avichr/heBERT) on HebrewMetaphors dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4682
- Accuracy: 0.9510
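Following the widget examples above, the input is the target verb and the sentence joined by `[SEP]`. A minimal inference sketch (the label names returned depend on this checkpoint's config and are not documented here):
```python
from transformers import pipeline

# Load the fine-tuned HeBERT metaphor classifier.
classifier = pipeline("text-classification", model="tdklab/hebert-finetuned-hebrew-metaphor")

# Input format taken from the widget examples: "<verb> [SEP] <sentence>".
verb = "לבשל"
sentence = "שישי בבוקר זה זמן טוב כדי לבשל ארוחה יפה"
print(classifier(f"{verb} [SEP] {sentence}"))
```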
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 389 | 0.1813 | 0.9379 |
| 0.2546 | 2.0 | 778 | 0.2309 | 0.9479 |
| 0.08 | 3.0 | 1167 | 0.3342 | 0.9492 |
| 0.0298 | 4.0 | 1556 | 0.4076 | 0.9460 |
| 0.0298 | 5.0 | 1945 | 0.3803 | 0.9485 |
| 0.0105 | 6.0 | 2334 | 0.3674 | 0.9454 |
| 0.0077 | 7.0 | 2723 | 0.5356 | 0.9410 |
| 0.0088 | 8.0 | 3112 | 0.4776 | 0.9422 |
| 0.0044 | 9.0 | 3501 | 0.4258 | 0.9504 |
| 0.0044 | 10.0 | 3890 | 0.4305 | 0.9523 |
| 0.001 | 11.0 | 4279 | 0.4357 | 0.9548 |
| 0.0031 | 12.0 | 4668 | 0.4770 | 0.9473 |
| 0.0015 | 13.0 | 5057 | 0.4604 | 0.9523 |
| 0.0015 | 14.0 | 5446 | 0.4670 | 0.9510 |
| 0.0022 | 15.0 | 5835 | 0.4682 | 0.9510 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
### About Us
Created by Doron Ben-chorin, Matan Ben-chorin, and Tomer Tzipori, guided by Dr. Oren Mishali. This is our project as part of computer engineering studies in the Faculty of Electrical Engineering combined with Computer Science at the Technion, Israel Institute of Technology. For further cooperation, please contact us by email:
Doron Ben-chorin: doronbh7@gmail.com
Matan Ben-chorin: matan.bh1@gmail.com
Tomer Tzipori: TomerTzipori@gmail.com
|
AndriLawrence/gpt2-chatkobi-ai
|
AndriLawrence
| 2023-10-03T11:42:46Z | 0 | 0 | null |
[
"medical",
"id",
"license:gpl-3.0",
"region:us"
] | null | 2023-08-10T07:49:48Z |
---
license: gpl-3.0
language:
- id
tags:
- medical
---
# GPT-2 ChatKobi
## Model Description
The GPT-2 ChatKobi model is a natural language processing model that has been fine-tuned on health-related question-and-answer data in Indonesian. It is able to respond to questions on health-related topics.
## Model Quality and Limitations
This model provides fairly accurate responses to general health-related questions. However, like all natural language processing models, it may give inaccurate or less relevant answers depending on the context of the question.
## Recommended Use Cases
This model is recommended for scenarios where users need general answers or information about health. It is not suitable for providing specific medical advice. USE AT YOUR OWN RISK!
## How to Use the Model
You can use this model by downloading it from the Hugging Face Hub. Below is an example of how to use it in Python with [llm-rs](https://github.com/LLukas22/llm-rs-python):
```python
from llm_rs import AutoModel, SessionConfig, GenerationConfig
session_config = SessionConfig(
threads=4,
context_length=800,
prefer_mmap=False)
generation_config = GenerationConfig(
top_p=0.88,
top_k=1,
temperature=0.4,
max_new_tokens=40,
repetition_penalty=1.08,
repetition_penalty_last_n=1024,
stop_words=['<EOL>'])
model = AutoModel.from_pretrained("andri-jpg/gpt2-ChatKobi-ai",model_file="gpt2-medium-healthbot-AI-ggjt.bin", session_config=session_config)
print(model.generate("pertanyaan : apa itu diabetes jawaban :", generation_config=generation_config))
```
### References
The base model used for fine-tuning can be found at: https://huggingface.co/indonesian-nlp/gpt2-medium-indonesian.
### Potential Bias
We strive to reduce potential bias in this model through careful fine-tuning processes. However, natural language processing models may reflect biases present in training data. We encourage users to always be critical and obtain medical information from trusted sources.
### Developers and Contributions
You can contribute to the development of this model through the GitHub repository at https://github.com/andri-jpg/ChatKobi.AI. Feel free to raise issues or submit pull requests to help improve the quality and functionality of the model.
|
Prarthana5905/q-FrozenLake-v1-4x4-noSlippery
|
Prarthana5905
| 2023-10-03T11:42:11Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T11:42:09Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Prarthana5905/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bhavyabafna/q-FrozenLake-v1-4x4-noSlippery
|
bhavyabafna
| 2023-10-03T11:40:58Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T11:40:56Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="bhavyabafna/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
shettymanya/q-FrozenLake-v1-4x4-noSlippery
|
shettymanya
| 2023-10-03T11:40:15Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T11:40:13Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="shettymanya/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ
|
TheBloke
| 2023-10-03T11:28:53Z | 625 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"base_model:TinyLlama/TinyLlama-1.1B-python-v0.1",
"base_model:quantized:TinyLlama/TinyLlama-1.1B-python-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-10-03T11:10:18Z |
---
base_model: PY007/TinyLlama-1.1B-python-v0.1
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
inference: false
language:
- en
license: apache-2.0
model_creator: Zhang Peiyuan
model_name: TinyLlama 1.1B Python v0.1
model_type: tinyllama
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# TinyLlama 1.1B Python v0.1 - GPTQ
- Model creator: [Zhang Peiyuan](https://huggingface.co/PY007)
- Original model: [TinyLlama 1.1B Python v0.1](https://huggingface.co/PY007/TinyLlama-1.1B-python-v0.1)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Zhang Peiyuan's TinyLlama 1.1B Python v0.1](https://huggingface.co/PY007/TinyLlama-1.1B-python-v0.1).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GGUF)
* [Zhang Peiyuan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PY007/TinyLlama-1.1B-python-v0.1)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 0.77 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 0.82 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 1.23 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 1.26 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 1.32 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 2048 | 0.79 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `TinyLlama-1.1B-python-v0.1-GPTQ`:
```shell
mkdir TinyLlama-1.1B-python-v0.1-GPTQ
huggingface-cli download TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ --local-dir TinyLlama-1.1B-python-v0.1-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir TinyLlama-1.1B-python-v0.1-GPTQ
huggingface-cli download TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir TinyLlama-1.1B-python-v0.1-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir TinyLlama-1.1B-python-v0.1-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ --local-dir TinyLlama-1.1B-python-v0.1-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `TinyLlama-1.1B-python-v0.1-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/TinyLlama-1.1B-python-v0.1-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Zhang Peiyuan's TinyLlama 1.1B Python v0.1
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is a code LM finetuned (or so-called continually pretrained) from the 500B TinyLlama checkpoint with another 7B of Python data from the starcoderdata.
**While the finetuning data is exclusively Python, the model retains its ability in many other languages such as C or Java**.
The HumanEval accuracy is **14**.
**It can be used as the draft model to speculative-decode larger models such as models in the CodeLlama family**.
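As an illustration of that use, 🤗 Transformers exposes speculative (assisted) decoding through the `assistant_model` argument of `generate`. The sketch below assumes the target and draft checkpoints share a compatible tokenizer/vocabulary, which is not verified here:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "codellama/CodeLlama-7b-hf"        # example target model
draft_id = "PY007/TinyLlama-1.1B-python-v0.1"  # this model as the draft

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_id, device_map="auto")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(target.device)
# Assisted generation: the draft model proposes tokens that the target model verifies.
output = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```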
|
MartinRV/trips_nonempty_bl1m.csv
|
MartinRV
| 2023-10-03T11:28:50Z | 0 | 0 |
mlconsole
|
[
"mlconsole",
"tabular-classification",
"dataset:trips_nonempty_bl1m.csv",
"license:unknown",
"model-index",
"region:us"
] |
tabular-classification
| 2023-10-03T11:28:47Z |
---
license: unknown
inference: false
tags:
- mlconsole
- tabular-classification
library_name: mlconsole
metrics:
- accuracy
- loss
datasets:
- trips_nonempty_bl1m.csv
model-index:
- name: trips_nonempty_bl1m.csv
results:
- task:
type: tabular-classification
name: tabular-classification
dataset:
type: trips_nonempty_bl1m.csv
name: trips_nonempty_bl1m.csv
metrics:
- type: accuracy
name: Accuracy
value: 0.8160799741744995
- type: loss
name: Model loss
value: 0.4906497895717621
---
# classification model trained on "trips_nonempty_bl1m.csv"
🤖 [Load and use this model](https://mlconsole.com/model/hf/MartinRV/trips_nonempty_bl1m.csv) in one click.
🧑💻 [Train your own model](https://mlconsole.com) on ML Console.
|
AdityaHR/q-FrozenLake-v1-4x4-noSlippery
|
AdityaHR
| 2023-10-03T11:27:19Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T11:27:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="AdityaHR/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AparnaMahajan/Llama2_custom2
|
AparnaMahajan
| 2023-10-03T11:19:53Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-03T11:19:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
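A sketch of loading this adapter on top of an 8-bit base model follows; the base model id is an assumption, since the card does not name it:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model; not stated in this card

# 8-bit loading, matching the quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    bnb_4bit_compute_dtype=torch.float32,
)

base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapter from this repository to the quantized base model.
model = PeftModel.from_pretrained(base, "AparnaMahajan/Llama2_custom2")
```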
### Framework versions
- PEFT 0.6.0.dev0
|
TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ
|
TheBloke
| 2023-10-03T11:07:41Z | 75,873 | 9 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.3",
"base_model:quantized:TinyLlama/TinyLlama-1.1B-Chat-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-10-03T11:01:00Z |
---
base_model: PY007/TinyLlama-1.1B-Chat-v0.3
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
inference: false
language:
- en
license: apache-2.0
model_creator: Zhang Peiyuan
model_name: TinyLlama 1.1B Chat v0.3
model_type: tinyllama
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# TinyLlama 1.1B Chat v0.3 - GPTQ
- Model creator: [Zhang Peiyuan](https://huggingface.co/PY007)
- Original model: [TinyLlama 1.1B Chat v0.3](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.3)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Zhang Peiyuan's TinyLlama 1.1B Chat v0.3](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.3).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF)
* [Zhang Peiyuan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.3)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 0.77 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 0.82 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 1.23 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 1.26 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 1.32 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 0.79 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `TinyLlama-1.1B-Chat-v0.3-GPTQ`:
```shell
mkdir TinyLlama-1.1B-Chat-v0.3-GPTQ
huggingface-cli download TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ --local-dir TinyLlama-1.1B-Chat-v0.3-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir TinyLlama-1.1B-Chat-v0.3-GPTQ
huggingface-cli download TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir TinyLlama-1.1B-Chat-v0.3-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Huggingface cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir TinyLlama-1.1B-Chat-v0.3-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ --local-dir TinyLlama-1.1B-Chat-v0.3-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `TinyLlama-1.1B-Chat-v0.3-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Zhang Peiyuan's TinyLlama 1.1B Chat v0.3
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is the chat model finetuned on top of [PY007/TinyLlama-1.1B-intermediate-step-480k-1T](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-480k-1T).
The dataset used is [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25) following the [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format.
#### How to use
You will need transformers>=4.31.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "PY007/TinyLlama-1.1B-Chat-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
prompt = "How to get in a good university?"
formatted_prompt = (
f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
)
sequences = pipeline(
formatted_prompt,
do_sample=True,
top_k=50,
top_p = 0.9,
num_return_sequences=1,
repetition_penalty=1.1,
max_new_tokens=1024,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
|
TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF
|
TheBloke
| 2023-10-03T11:05:25Z | 7,508 | 47 |
transformers
|
[
"transformers",
"gguf",
"tinyllama",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.3",
"base_model:quantized:TinyLlama/TinyLlama-1.1B-Chat-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2023-10-03T11:01:20Z |
---
base_model: PY007/TinyLlama-1.1B-Chat-v0.3
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25
inference: false
language:
- en
license: apache-2.0
model_creator: Zhang Peiyuan
model_name: TinyLlama 1.1B Chat v0.3
model_type: tinyllama
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# TinyLlama 1.1B Chat v0.3 - GGUF
- Model creator: [Zhang Peiyuan](https://huggingface.co/PY007)
- Original model: [TinyLlama 1.1B Chat v0.3](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.3)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Zhang Peiyuan's TinyLlama 1.1B Chat v0.3](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.3).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF)
* [Zhang Peiyuan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.3)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [tinyllama-1.1b-chat-v0.3.Q2_K.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/blob/main/tinyllama-1.1b-chat-v0.3.Q2_K.gguf) | Q2_K | 2 | 0.48 GB| 2.98 GB | smallest, significant quality loss - not recommended for most purposes |
| [tinyllama-1.1b-chat-v0.3.Q3_K_S.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/blob/main/tinyllama-1.1b-chat-v0.3.Q3_K_S.gguf) | Q3_K_S | 3 | 0.50 GB| 3.00 GB | very small, high quality loss |
| [tinyllama-1.1b-chat-v0.3.Q3_K_M.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/blob/main/tinyllama-1.1b-chat-v0.3.Q3_K_M.gguf) | Q3_K_M | 3 | 0.55 GB| 3.05 GB | very small, high quality loss |
| [tinyllama-1.1b-chat-v0.3.Q3_K_L.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/blob/main/tinyllama-1.1b-chat-v0.3.Q3_K_L.gguf) | Q3_K_L | 3 | 0.59 GB| 3.09 GB | small, substantial quality loss |
| [tinyllama-1.1b-chat-v0.3.Q4_0.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/blob/main/tinyllama-1.1b-chat-v0.3.Q4_0.gguf) | Q4_0 | 4 | 0.64 GB| 3.14 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [tinyllama-1.1b-chat-v0.3.Q4_K_S.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/blob/main/tinyllama-1.1b-chat-v0.3.Q4_K_S.gguf) | Q4_K_S | 4 | 0.64 GB| 3.14 GB | small, greater quality loss |
| [tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/blob/main/tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf) | Q4_K_M | 4 | 0.67 GB| 3.17 GB | medium, balanced quality - recommended |
| [tinyllama-1.1b-chat-v0.3.Q5_0.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/blob/main/tinyllama-1.1b-chat-v0.3.Q5_0.gguf) | Q5_0 | 5 | 0.77 GB| 3.27 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [tinyllama-1.1b-chat-v0.3.Q5_K_S.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/blob/main/tinyllama-1.1b-chat-v0.3.Q5_K_S.gguf) | Q5_K_S | 5 | 0.77 GB| 3.27 GB | large, low quality loss - recommended |
| [tinyllama-1.1b-chat-v0.3.Q5_K_M.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/blob/main/tinyllama-1.1b-chat-v0.3.Q5_K_M.gguf) | Q5_K_M | 5 | 0.78 GB| 3.28 GB | large, very low quality loss - recommended |
| [tinyllama-1.1b-chat-v0.3.Q6_K.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/blob/main/tinyllama-1.1b-chat-v0.3.Q6_K.gguf) | Q6_K | 6 | 0.90 GB| 3.40 GB | very large, extremely low quality loss |
| [tinyllama-1.1b-chat-v0.3.Q8_0.gguf](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF/blob/main/tinyllama-1.1b-chat-v0.3.Q8_0.gguf) | Q8_0 | 8 | 1.17 GB| 3.67 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF and below it, a specific filename to download, such as: tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/TinyLlama-1.1B-Chat-v0.3-GGUF", model_file="tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf", model_type="tinyllama", gpu_layers=50)
print(llm("AI is going to"))
```
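### How to load this model in Python code, using llama-cpp-python

llama-cpp-python can also load these GGUF files directly. Below is a minimal sketch using its standard `Llama` API; the sampling values mirror the `llama.cpp` example above, and the system message is a placeholder.

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only inference.
llm = Llama(
    model_path="./tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
)

system_message = "You are a helpful assistant."  # placeholder system message
prompt = "Tell me about AI"
prompt_template = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

output = llm(
    prompt_template,
    max_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repeat_penalty=1.1,
    stop=["<|im_end|>"],
)
print(output["choices"][0]["text"])
```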
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Zhang Peiyuan's TinyLlama 1.1B Chat v0.3
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is the chat model finetuned on top of [PY007/TinyLlama-1.1B-intermediate-step-480k-1T](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-480k-1T).
The dataset used is [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25) following the [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format.
#### How to use
You will need transformers>=4.31.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "PY007/TinyLlama-1.1B-Chat-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
prompt = "How to get in a good university?"
formatted_prompt = (
f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
)
sequences = pipeline(
formatted_prompt,
do_sample=True,
top_k=50,
top_p = 0.9,
num_return_sequences=1,
repetition_penalty=1.1,
max_new_tokens=1024,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
<!-- original-model-card end -->
|
franjamonga/speakerverification_en
|
franjamonga
| 2023-10-03T10:32:47Z | 3 | 4 |
nemo
|
[
"nemo",
"speaker-recognition",
"speech",
"audio",
"speaker-verification",
"titanet",
"speaker-diarization",
"NeMo",
"pytorch",
"en",
"license:cc-by-4.0",
"model-index",
"region:us"
] | null | 2023-10-03T09:56:04Z |
---
language:
- en
license: cc-by-4.0
library_name: nemo
tags:
- speaker-recognition
- speech
- audio
- speaker-verification
- titanet
- speaker-diarization
- NeMo
- pytorch
datasets:
- librispeech_asr
- VOXCCELEB-1
- VOXCCELEB-2
- FISHER
- Switchboard
- SRE(2004-2010)
model-index:
- name: speakerverification_en
results:
- task:
name: Speaker Verification
type: speaker-verification
dataset:
name: voxceleb1
type: voxceleb1-O
config: clean
split: test
args:
language: en
metrics:
- name: Test EER
type: eer
value: 0.66
- task:
type: Speaker Diarization
name: speaker-diarization
dataset:
name: ami-mixheadset
type: ami_diarization
config: oracle-vad-known-number-of-speakers
split: test
args:
language: en
metrics:
- name: Test DER
type: der
value: 1.73
- task:
type: Speaker Diarization
name: speaker-diarization
dataset:
name: ami-lapel
type: ami_diarization
config: oracle-vad-known-number-of-speakers
split: test
args:
language: en
metrics:
- name: Test DER
type: der
value: 2.03
- task:
type: Speaker Diarization
name: speaker-diarization
dataset:
name: ch109
type: callhome_diarization
config: oracle-vad-known-number-of-speakers
split: test
args:
language: en
metrics:
- name: Test DER
type: der
value: 1.19
- task:
type: Speaker Diarization
name: speaker-diarization
dataset:
name: nist-sre-2000
type: nist-sre_diarization
config: oracle-vad-known-number-of-speakers
split: test
args:
language: en
metrics:
- name: Test DER
type: der
value: 6.73
---
# Speaker Verification Model based on TitaNet-Large (en-US)
<style>
img {
display: inline;
}
</style>
## Model Overview
This model extracts speaker embeddings from input speech; these embeddings are the backbone for speaker verification and diarization tasks.
It is the "large" version of TitaNet (around 23M parameters).
See the [model architecture](#model-architecture) section and the [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/models.html) for complete architecture details.
## How to Use this Model
The model is available for use in the NeMo toolkit [2] and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
speaker_model = nemo_asr.models.EncDecSpeakerLabelModel.from_pretrained("nvidia/speakerverification_en_titanet_large")
```
### Embedding Extraction
To extract an embedding from a single audio file:
```python
emb = speaker_model.get_embedding("an255-fash-b.wav")
```
### Verifying two utterances (Speaker Verification)
Now to check if two audio files are from the same speaker or not, simply do:
```python
speaker_model.verify_speakers("an255-fash-b.wav","cen7-fash-b.wav")
```
### Extracting Embeddings for more audio files
To extract embeddings from a bunch of audio files:
Write audio file entries to a `manifest.json` file, one JSON object per line, in the following format:
```json
{"audio_filepath": "<absolute path to dataset>/audio_file.wav", "duration": "duration of file in sec", "label": "speaker_id"}
```
Then running the following script will extract embeddings and write them to the current working directory:
```shell
python <NeMo_root>/examples/speaker_tasks/recognition/extract_speaker_embeddings.py --manifest=manifest.json
```
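A small helper sketch for building such a manifest (the audio paths and speaker labels below are placeholders):

```python
import json
import wave

# Placeholder audio files mapped to their speaker labels
audio_files = {
    "/absolute/path/to/an255-fash-b.wav": "fash",
    "/absolute/path/to/cen7-fash-b.wav": "fash",
}

with open("manifest.json", "w") as fout:
    for path, speaker_id in audio_files.items():
        # Read the duration in seconds from the WAV header
        with wave.open(path, "rb") as wav:
            duration = wav.getnframes() / wav.getframerate()
        entry = {"audio_filepath": path, "duration": duration, "label": speaker_id}
        fout.write(json.dumps(entry) + "\n")
```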
### Input
This model accepts 16 kHz (16000 Hz) mono-channel audio (WAV files) as input.
### Output
This model provides speaker embeddings for an audio file.
## Model Architecture
TitaNet is a depth-wise separable conv1D model [1] for speaker verification and diarization tasks. You can find more details on this model here: [TitaNet-Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/models.html).
## Training
The NeMo toolkit [2] was used to train the models for several hundred epochs. These models were trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/speaker_tasks/recognition/speaker_reco.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/speaker_tasks/recognition/conf/titanet-large.yaml).
### Datasets
All the models in this collection are trained on a composite dataset comprising several thousand hours of English speech:
- Voxceleb-1
- Voxceleb-2
- Fisher
- Switchboard
- Librispeech
- SRE (2004-2010)
## Performance
Performance of these models is reported in terms of Equal Error Rate (EER%) on speaker verification evaluation trial files and as Diarization Error Rate (DER%) on diarization test sessions.
* Speaker Verification (EER%)
| Version | Model | Model Size | VoxCeleb1 (Cleaned trial file) |
|---------|--------------|-----|---------------|
| 1.10.0 | TitaNet-Large | 23M | 0.66 |
* Speaker Diarization (DER%)
| Version | Model | Model Size | Evaluation Condition | NIST SRE 2000 | AMI (Lapel) | AMI (MixHeadset) | CH109 |
|---------|--------------|-----|----------------------|---------------|-------------|------------------|-------|
| 1.10.0 | TitaNet-Large | 23M | Oracle VAD KNOWN # of Speakers | 6.73 | 2.03 | 1.73 | 1.19 |
| 1.10.0 | TitaNet-Large | 23M | Oracle VAD UNKNOWN # of Speakers | 5.38 | 2.03 | 1.89 | 1.63 |
## Limitations
This model is trained on both telephonic and non-telephonic speech from the VoxCeleb, Fisher, and Switchboard datasets. If your data domain differs from the training data, or the model does not perform well on it, consider fine-tuning it for that speech domain.
## NVIDIA Riva: Deployment
[NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, at the edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support
Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
[1] [TitaNet: Neural Model for Speaker Representation with 1D Depth-wise Separable convolutions and global context](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9746806)
[2] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
## Licence
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
|
hansin91/scene_classification
|
hansin91
| 2023-10-03T10:19:39Z | 196 | 1 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:indoor-scene-classification",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-02T16:54:11Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- indoor-scene-classification
metrics:
- accuracy
model-index:
- name: scene_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: indoor-scene-classification
type: indoor-scene-classification
config: full
split: test
args: full
metrics:
- name: Accuracy
type: accuracy
value: 0.8491655969191271
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scene_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indoor-scene-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6106
- Accuracy: 0.8492
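A minimal inference sketch, assuming the checkpoint is loaded through the standard image-classification pipeline (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="hansin91/scene_classification")
# Returns the top predicted indoor scene labels with confidence scores
print(classifier("example_indoor_scene.jpg"))
```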
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.3172 | 1.0 | 341 | 2.8572 | 0.5109 |
| 2.2254 | 2.0 | 682 | 2.1453 | 0.6329 |
| 1.6202 | 3.0 | 1023 | 1.6283 | 0.7336 |
| 1.2313 | 4.0 | 1364 | 1.3402 | 0.7599 |
| 0.9576 | 5.0 | 1705 | 1.1237 | 0.8010 |
| 0.7654 | 6.0 | 2046 | 1.0270 | 0.8023 |
| 0.6416 | 7.0 | 2387 | 0.8848 | 0.8171 |
| 0.5353 | 8.0 | 2728 | 0.8381 | 0.8087 |
| 0.4516 | 9.0 | 3069 | 0.7570 | 0.8254 |
| 0.3925 | 10.0 | 3410 | 0.6667 | 0.8524 |
| 0.3453 | 11.0 | 3751 | 0.7583 | 0.8164 |
| 0.2944 | 12.0 | 4092 | 0.6783 | 0.8350 |
| 0.294 | 13.0 | 4433 | 0.7128 | 0.8312 |
| 0.2507 | 14.0 | 4774 | 0.6632 | 0.8331 |
| 0.2355 | 15.0 | 5115 | 0.6730 | 0.8421 |
| 0.2267 | 16.0 | 5456 | 0.6572 | 0.8357 |
| 0.2032 | 17.0 | 5797 | 0.7058 | 0.8280 |
| 0.1908 | 18.0 | 6138 | 0.6374 | 0.8485 |
| 0.1857 | 19.0 | 6479 | 0.6831 | 0.8312 |
| 0.1727 | 20.0 | 6820 | 0.6961 | 0.8254 |
| 0.1692 | 21.0 | 7161 | 0.6306 | 0.8402 |
| 0.1642 | 22.0 | 7502 | 0.6291 | 0.8485 |
| 0.1618 | 23.0 | 7843 | 0.6058 | 0.8582 |
| 0.1593 | 24.0 | 8184 | 0.6780 | 0.8389 |
| 0.1399 | 25.0 | 8525 | 0.6330 | 0.8485 |
| 0.1373 | 26.0 | 8866 | 0.6550 | 0.8408 |
| 0.1334 | 27.0 | 9207 | 0.6857 | 0.8421 |
| 0.1388 | 28.0 | 9548 | 0.6338 | 0.8415 |
| 0.1423 | 29.0 | 9889 | 0.6272 | 0.8517 |
| 0.1288 | 30.0 | 10230 | 0.6409 | 0.8556 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
bdpc/resnet101_rvl-cdip-_rvl_cdip-NK1000__CEKD_t2.5_a0.5
|
bdpc
| 2023-10-03T10:18:24Z | 196 | 0 |
transformers
|
[
"transformers",
"pytorch",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-03T07:57:33Z |
---
license: apache-2.0
base_model: microsoft/resnet-50
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet101_rvl-cdip-_rvl_cdip-NK1000__CEKD_t2.5_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101_rvl-cdip-_rvl_cdip-NK1000__CEKD_t2.5_a0.5
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6065
- Accuracy: 0.7915
- Brier Loss: 0.3054
- Nll: 1.9957
- F1 Micro: 0.7915
- F1 Macro: 0.7910
- Ece: 0.0453
- Aurc: 0.0607
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 250 | 4.1565 | 0.1378 | 0.9318 | 7.9039 | 0.1378 | 0.1073 | 0.0673 | 0.8326 |
| 4.1485 | 2.0 | 500 | 3.6932 | 0.3235 | 0.8832 | 5.1525 | 0.3235 | 0.2725 | 0.2044 | 0.5507 |
| 4.1485 | 3.0 | 750 | 2.3374 | 0.4725 | 0.6611 | 3.3127 | 0.4725 | 0.4311 | 0.0839 | 0.2921 |
| 2.392 | 4.0 | 1000 | 1.6516 | 0.588 | 0.5470 | 2.8681 | 0.588 | 0.5789 | 0.0620 | 0.1929 |
| 2.392 | 5.0 | 1250 | 1.3260 | 0.6488 | 0.4782 | 2.6378 | 0.6488 | 0.6444 | 0.0486 | 0.1458 |
| 1.1422 | 6.0 | 1500 | 1.0390 | 0.702 | 0.4156 | 2.4086 | 0.702 | 0.7029 | 0.0576 | 0.1097 |
| 1.1422 | 7.0 | 1750 | 0.8420 | 0.7288 | 0.3738 | 2.2222 | 0.7288 | 0.7300 | 0.0553 | 0.0888 |
| 0.708 | 8.0 | 2000 | 0.7753 | 0.7398 | 0.3586 | 2.1518 | 0.7398 | 0.7396 | 0.0587 | 0.0826 |
| 0.708 | 9.0 | 2250 | 0.7797 | 0.7462 | 0.3580 | 2.1095 | 0.7462 | 0.7457 | 0.0581 | 0.0820 |
| 0.5195 | 10.0 | 2500 | 0.7101 | 0.7602 | 0.3404 | 2.0711 | 0.7602 | 0.7612 | 0.0473 | 0.0733 |
| 0.5195 | 11.0 | 2750 | 0.6971 | 0.7645 | 0.3338 | 2.0649 | 0.7645 | 0.7653 | 0.0541 | 0.0715 |
| 0.4176 | 12.0 | 3000 | 0.6936 | 0.7712 | 0.3302 | 2.0265 | 0.7712 | 0.7708 | 0.0515 | 0.0702 |
| 0.4176 | 13.0 | 3250 | 0.6991 | 0.7662 | 0.3346 | 2.0582 | 0.7663 | 0.7657 | 0.0581 | 0.0723 |
| 0.3573 | 14.0 | 3500 | 0.6672 | 0.7722 | 0.3246 | 2.0053 | 0.7722 | 0.7723 | 0.0551 | 0.0683 |
| 0.3573 | 15.0 | 3750 | 0.6735 | 0.777 | 0.3244 | 2.0387 | 0.777 | 0.7782 | 0.0488 | 0.0671 |
| 0.3193 | 16.0 | 4000 | 0.6567 | 0.776 | 0.3216 | 2.0256 | 0.776 | 0.7773 | 0.0499 | 0.0678 |
| 0.3193 | 17.0 | 4250 | 0.6498 | 0.78 | 0.3184 | 1.9865 | 0.78 | 0.7802 | 0.0477 | 0.0662 |
| 0.2893 | 18.0 | 4500 | 0.6763 | 0.7755 | 0.3264 | 2.0844 | 0.7755 | 0.7755 | 0.0531 | 0.0697 |
| 0.2893 | 19.0 | 4750 | 0.6519 | 0.7815 | 0.3183 | 2.0458 | 0.7815 | 0.7817 | 0.0513 | 0.0658 |
| 0.271 | 20.0 | 5000 | 0.6432 | 0.7823 | 0.3147 | 2.0291 | 0.7823 | 0.7827 | 0.0440 | 0.0645 |
| 0.271 | 21.0 | 5250 | 0.6456 | 0.781 | 0.3156 | 2.0493 | 0.7810 | 0.7813 | 0.0487 | 0.0652 |
| 0.2516 | 22.0 | 5500 | 0.6336 | 0.7823 | 0.3144 | 1.9829 | 0.7823 | 0.7822 | 0.0522 | 0.0642 |
| 0.2516 | 23.0 | 5750 | 0.6333 | 0.7837 | 0.3128 | 2.0196 | 0.7837 | 0.7836 | 0.0492 | 0.0641 |
| 0.2397 | 24.0 | 6000 | 0.6337 | 0.7817 | 0.3147 | 2.0180 | 0.7817 | 0.7815 | 0.0494 | 0.0644 |
| 0.2397 | 25.0 | 6250 | 0.6347 | 0.7857 | 0.3145 | 2.0187 | 0.7857 | 0.7856 | 0.0510 | 0.0641 |
| 0.23 | 26.0 | 6500 | 0.6311 | 0.7815 | 0.3129 | 2.0132 | 0.7815 | 0.7819 | 0.0495 | 0.0637 |
| 0.23 | 27.0 | 6750 | 0.6329 | 0.7853 | 0.3125 | 2.0708 | 0.7853 | 0.7852 | 0.0502 | 0.0635 |
| 0.2191 | 28.0 | 7000 | 0.6222 | 0.786 | 0.3109 | 2.0022 | 0.786 | 0.7856 | 0.0483 | 0.0638 |
| 0.2191 | 29.0 | 7250 | 0.6195 | 0.7863 | 0.3096 | 2.0028 | 0.7863 | 0.7859 | 0.0550 | 0.0620 |
| 0.2155 | 30.0 | 7500 | 0.6196 | 0.7883 | 0.3090 | 1.9972 | 0.7883 | 0.7883 | 0.0486 | 0.0624 |
| 0.2155 | 31.0 | 7750 | 0.6167 | 0.787 | 0.3080 | 2.0173 | 0.787 | 0.7871 | 0.0443 | 0.0623 |
| 0.2074 | 32.0 | 8000 | 0.6143 | 0.7897 | 0.3073 | 2.0223 | 0.7897 | 0.7893 | 0.0443 | 0.0614 |
| 0.2074 | 33.0 | 8250 | 0.6123 | 0.787 | 0.3078 | 1.9869 | 0.787 | 0.7866 | 0.0458 | 0.0619 |
| 0.2028 | 34.0 | 8500 | 0.6137 | 0.7873 | 0.3070 | 1.9883 | 0.7873 | 0.7868 | 0.0457 | 0.0623 |
| 0.2028 | 35.0 | 8750 | 0.6152 | 0.786 | 0.3085 | 2.0108 | 0.786 | 0.7863 | 0.0497 | 0.0626 |
| 0.1982 | 36.0 | 9000 | 0.6133 | 0.7863 | 0.3077 | 2.0205 | 0.7863 | 0.7862 | 0.0515 | 0.0615 |
| 0.1982 | 37.0 | 9250 | 0.6145 | 0.7877 | 0.3081 | 1.9930 | 0.7877 | 0.7879 | 0.0444 | 0.0621 |
| 0.1948 | 38.0 | 9500 | 0.6116 | 0.7857 | 0.3078 | 2.0072 | 0.7857 | 0.7854 | 0.0508 | 0.0619 |
| 0.1948 | 39.0 | 9750 | 0.6090 | 0.788 | 0.3059 | 1.9954 | 0.788 | 0.7882 | 0.0430 | 0.0614 |
| 0.1933 | 40.0 | 10000 | 0.6143 | 0.7897 | 0.3072 | 1.9943 | 0.7897 | 0.7899 | 0.0462 | 0.0618 |
| 0.1933 | 41.0 | 10250 | 0.6061 | 0.7887 | 0.3041 | 1.9900 | 0.7887 | 0.7889 | 0.0439 | 0.0606 |
| 0.1882 | 42.0 | 10500 | 0.6070 | 0.7865 | 0.3058 | 1.9907 | 0.7865 | 0.7868 | 0.0438 | 0.0607 |
| 0.1882 | 43.0 | 10750 | 0.6083 | 0.788 | 0.3054 | 2.0095 | 0.788 | 0.7877 | 0.0489 | 0.0608 |
| 0.1871 | 44.0 | 11000 | 0.6083 | 0.787 | 0.3054 | 1.9828 | 0.787 | 0.7872 | 0.0469 | 0.0607 |
| 0.1871 | 45.0 | 11250 | 0.6092 | 0.7893 | 0.3057 | 2.0140 | 0.7893 | 0.7891 | 0.0483 | 0.0608 |
| 0.1862 | 46.0 | 11500 | 0.6057 | 0.7893 | 0.3053 | 2.0064 | 0.7893 | 0.7890 | 0.0450 | 0.0609 |
| 0.1862 | 47.0 | 11750 | 0.6042 | 0.79 | 0.3044 | 1.9691 | 0.79 | 0.7899 | 0.0435 | 0.0607 |
| 0.1845 | 48.0 | 12000 | 0.6068 | 0.79 | 0.3053 | 2.0052 | 0.79 | 0.7899 | 0.0438 | 0.0608 |
| 0.1845 | 49.0 | 12250 | 0.6081 | 0.7893 | 0.3062 | 2.0117 | 0.7893 | 0.7890 | 0.0485 | 0.0612 |
| 0.1836 | 50.0 | 12500 | 0.6065 | 0.7915 | 0.3054 | 1.9957 | 0.7915 | 0.7910 | 0.0453 | 0.0607 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
pavithrav/distilbert-base-uncased-finetuned-own-data
|
pavithrav
| 2023-10-03T10:15:20Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-03T10:14:46Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-own-data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-own-data
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7553
- Accuracy: 0.9895
- F1: 0.9895
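A minimal inference sketch, assuming the checkpoint is used through the text-classification pipeline (the example sentence is a placeholder, and the label names depend on the training data, which is not documented here):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="pavithrav/distilbert-base-uncased-finetuned-own-data")
print(classifier("Replace this with a sentence from the target domain."))
```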
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 33 | 1.1395 | 0.8289 | 0.8214 |
| No log | 2.0 | 66 | 0.7553 | 0.9895 | 0.9895 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
devanshb26/llama-7b-qlora-closed_qa_1
|
devanshb26
| 2023-10-03T10:15:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-03T10:14:58Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
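A sketch of how an adapter trained with this configuration is typically loaded for inference. The base model id below is an assumption (the card only implies a LLaMA-7B base), and the quantization config mirrors the values listed above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "huggyllama/llama-7b"  # assumption: the LLaMA-7B checkpoint the adapter was trained against

# Mirror the bitsandbytes settings used during training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(base_model_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(base, "devanshb26/llama-7b-qlora-closed_qa_1")
```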
### Framework versions
- PEFT 0.6.0.dev0
|
SergeyKazulin/my_awesome_model
|
SergeyKazulin
| 2023-10-03T09:55:03Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-03T06:06:31Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: SergeyKazulin/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# SergeyKazulin/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0620
- Validation Loss: 0.2407
- Train Accuracy: 0.9304
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2526 | 0.1833 | 0.9287 | 0 |
| 0.1326 | 0.2039 | 0.9262 | 1 |
| 0.0620 | 0.2407 | 0.9304 | 2 |
### Framework versions
- Transformers 4.33.3
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
ranajithore/stable-diffusion-v2-1-trained-for-plant-cell-structure-diagram-without-captions
|
ranajithore
| 2023-10-03T09:50:19Z | 28 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-03T09:45:26Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### stable-diffusion-v2.1-trained-for-plant-cell-structure-diagram-without-captions Dreambooth model trained by ranajithore with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
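Alternatively, a minimal diffusers sketch for local testing (the prompt is a placeholder; the instance token used during DreamBooth training is not stated in this card):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ranajithore/stable-diffusion-v2-1-trained-for-plant-cell-structure-diagram-without-captions",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a labelled diagram of a plant cell").images[0]
image.save("plant_cell_sample.png")
```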
Sample pictures of this concept:
|
IWR/ppo-Pyramids
|
IWR
| 2023-10-03T09:23:48Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-10-03T09:23:46Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: IWR/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
erkam/sg2im-256-bs-16x2-cc-snr-const
|
erkam
| 2023-10-03T09:17:37Z | 1 | 0 |
diffusers
|
[
"diffusers",
"sg-to-image",
"scene-graph",
"stable-diffusion",
"stable-diffusion-diffusers",
"lora",
"base_model:stabilityai/stable-diffusion-2",
"base_model:adapter:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-01T02:01:43Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
tags:
- sg-to-image
- scene-graph
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - erkam/sg2im-256-bs-16x2-cc-snr-const
These are LoRA adaptation weights for stabilityai/stable-diffusion-2, fine-tuned on the erkam/clevr-full-v5 dataset.
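A minimal sketch for applying these weights with diffusers (assumes a diffusers version that provides `load_lora_weights`; the prompt is a placeholder in the style of the CLEVR training data):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("erkam/sg2im-256-bs-16x2-cc-snr-const")

image = pipe("a red cube behind a large metal sphere").images[0]
image.save("clevr_sample.png")
```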
|
s3nh/AtAndDev-ShortKing-3b-v0.2-GGUF
|
s3nh
| 2023-10-03T09:13:32Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-03T09:08:50Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/AtAndDev/ShortKing-3b-v0.2).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

* Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
* Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
* mmap compatibility: models can be loaded using mmap for fast loading and saving.
* Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
* Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|-------|---------|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
# Original model card
|
Quacktab/Reinforce-CartPole-v1
|
Quacktab
| 2023-10-03T09:01:34Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T09:00:27Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 478.98 +/- 61.33
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Pavan27/NER_Telugu_01
|
Pavan27
| 2023-10-03T08:59:41Z | 169 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"telugu",
"NER",
"TeluguNER",
"te",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-01T09:57:19Z |
---
language:
- te
- en
tags:
- telugu
- NER
- TeluguNER
---
## Direct Use
The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.
## Downstream Use
Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face [token classification docs](https://huggingface.co/tasks/token-classification).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
**CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.**
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("Pavan27/NER_Telugu_01")
>>> model = AutoModelForTokenClassification.from_pretrained("Pavan27/NER_Telugu_01")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities = True)
>>> classifier("వెస్టిండీస్పై పోర్ట్ ఆఫ్ స్పెయిన్ వేదిక జరుగుతున్న రెండో టెస్టు తొలి ఇన్నింగ్స్లో విరాట్ కోహ్లీ 121 పరుగులతో విదేశాల్లో సెంచరీ కరువును తీర్చుకున్నాడు.")
[{'entity_group': 'LOC',
'score': 0.9999062,
'word': 'వెస్టిండీస్',
'start': 0,
'end': 11},
{'entity_group': 'LOC',
'score': 0.9998613,
'word': 'పోర్ట్ ఆఫ్ స్పెయిన్',
'start': 15,
'end': 34},
{'entity_group': 'PER',
'score': 0.99996054,
'word': 'విరాట్ కోహ్లీ',
'start': 85,
'end': 98}]
```
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
|
shengqin/bloom-prefix-tuning
|
shengqin
| 2023-10-03T08:58:56Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-03T01:27:29Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
DopeorNope/CoLA_L-7b
|
DopeorNope
| 2023-10-03T08:55:56Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-03T08:55:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
s3nh/haoranxu-ALMA-7B-GGUF
|
s3nh
| 2023-10-03T08:42:43Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-03T08:36:22Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/haoranxu/ALMA-7B).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

* Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
* Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
* mmap compatibility: models can be loaded using mmap for fast loading and saving.
* Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
* Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|-------|---------|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
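A minimal sketch of running one of these GGUF files with `llama-cpp-python`; the file name, context size, and prompt layout below are assumptions rather than values from the original card:

```python
# Minimal llama-cpp-python sketch (pip install llama-cpp-python).
from llama_cpp import Llama

# The quantization file name is an assumption; use whichever GGUF file you downloaded.
llm = Llama(model_path="ALMA-7B.Q4_K_M.gguf", n_ctx=2048)

# ALMA is a translation model; this prompt layout is an assumption.
output = llm(
    "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:",
    max_tokens=64,
)
print(output["choices"][0]["text"])
```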
# Original model card
|
Shwifty/videomae-base-finetuned-ucf101-subset
|
Shwifty
| 2023-10-03T08:35:24Z | 62 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-09-27T17:19:38Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 750
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
binhquoc/lora-med2lab
|
binhquoc
| 2023-10-03T08:33:41Z | 7 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-13T09:09:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
alessaww/pets_classification_gradio
|
alessaww
| 2023-10-03T08:17:12Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-10-03T08:06:20Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sungnyun/openssl-simcore
|
sungnyun
| 2023-10-03T07:56:52Z | 0 | 2 | null |
[
"en",
"arxiv:2303.11101",
"license:apache-2.0",
"region:us"
] | null | 2023-10-03T03:29:22Z |
---
license: apache-2.0
language:
- en
---
<br>
# Model Card for OpenSSL-SimCore
This repo contains some of the pretrained models from our paper, *Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning*.
We share the SimCore-pretrained models in the Open-set Self-Supervised Learning (OpenSSL) task, according to each fine-grained dataset.
SimCore significantly improves representation learning performance in various downstream tasks, by leveraging a coreset sampled from the unlabeled open-set.
## Model Details
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/sungnyun/openssl-simcore
- **Paper:** [**Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning**](https://arxiv.org/abs/2303.11101), S. Kim et al., CVPR 2023
### Model Type
SimCore with a stopping criterion (Table 2 in the main paper).
- Backbone: ResNet50
- SSL algorithm: SimCLR
- Open-set: ImageNet-1k
- Target dataset: aircraft, cars, pets, cub, dogs, flowers, stanford40, mit67, dtd, celeba, and food11
**For use:** Please check our github repo for the instructions.
**License:** Apache 2.0 License
**Where to send questions or comments about the model:** https://github.com/sungnyun/openssl-simcore/issues
|
ingeol/dpo_test_3000
|
ingeol
| 2023-10-03T07:51:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-03T07:51:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
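As a sketch, the config listed above corresponds to the following `transformers` `BitsAndBytesConfig`; the base model for this adapter is not stated in the card, so loading it is left out:

```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
# Pass quantization_config=bnb_config to AutoModelForCausalLM.from_pretrained(...)
# when loading the base model before attaching the PEFT adapter.
```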
### Framework versions
- PEFT 0.5.0
|
nikitamalviya/squad-bloom-3b
|
nikitamalviya
| 2023-10-03T07:50:31Z | 2 | 0 |
peft
|
[
"peft",
"base_model:bigscience/bloom-3b",
"base_model:adapter:bigscience/bloom-3b",
"region:us"
] | null | 2023-08-26T12:54:40Z |
---
library_name: peft
base_model: bigscience/bloom-3b
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
ND911/Kitchen_Sink_3Dgn_XL
|
ND911
| 2023-10-03T07:46:57Z | 0 | 0 | null |
[
"SDXL Model",
"region:us"
] | null | 2023-10-03T00:26:27Z |
---
tags:
- SDXL Model
---
## Warning, Mostly NSFW
## A 3D-for-fun checkpoint; use it freely however you like. Workflow included

|
dbaggi/my_awesome_mind_model
|
dbaggi
| 2023-10-03T07:12:36Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-09-29T13:27:47Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.061946902654867256
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6677
- Accuracy: 0.0619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6505 | 0.0442 |
| No log | 1.87 | 7 | 2.6501 | 0.0885 |
| 2.6323 | 2.93 | 11 | 2.6624 | 0.0796 |
| 2.6323 | 4.0 | 15 | 2.6582 | 0.0796 |
| 2.6323 | 4.8 | 18 | 2.6679 | 0.0796 |
| 2.6214 | 5.87 | 22 | 2.6703 | 0.0619 |
| 2.6214 | 6.93 | 26 | 2.6689 | 0.0619 |
| 2.6175 | 8.0 | 30 | 2.6677 | 0.0619 |
### Framework versions
- Transformers 4.33.3
- Pytorch 1.13.1+cu117
- Datasets 2.14.5
- Tokenizers 0.11.0
|
ravisarun/roberta-large-peft-p-tuning
|
ravisarun
| 2023-10-03T07:10:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-03T07:10:03Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
Sneka/distilbert-base-uncased-finetuned-squad
|
Sneka
| 2023-10-03T06:05:48Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-10-03T05:58:45Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9510
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 6.1645 |
| No log | 2.0 | 4 | 6.0155 |
| No log | 3.0 | 6 | 5.9510 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
abeiler/AlphaRep
|
abeiler
| 2023-10-03T06:04:12Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-28T00:45:02Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: goatAlphaRep-QLORA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# goatAlphaRep-QLORA
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bdpc/resnet101_rvl-cdip
|
bdpc
| 2023-10-03T05:55:56Z | 195 | 0 |
transformers
|
[
"transformers",
"pytorch",
"resnet",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/resnet-101",
"base_model:finetune:microsoft/resnet-101",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-10-02T18:56:20Z |
---
license: apache-2.0
base_model: microsoft/resnet-101
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet101_rvl-cdip
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet101_rvl-cdip
This model is a fine-tuned version of [microsoft/resnet-101](https://huggingface.co/microsoft/resnet-101) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6158
- Accuracy: 0.8210
- Brier Loss: 0.2556
- Nll: 1.7696
- F1 Micro: 0.8210
- F1 Macro: 0.8209
- Ece: 0.0176
- Aurc: 0.0418
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 1.3521 | 1.0 | 5000 | 1.2626 | 0.6133 | 0.5108 | 2.7262 | 0.6133 | 0.6042 | 0.0455 | 0.1644 |
| 0.942 | 2.0 | 10000 | 0.9005 | 0.7318 | 0.3723 | 2.2139 | 0.7318 | 0.7293 | 0.0174 | 0.0862 |
| 0.7983 | 3.0 | 15000 | 0.7691 | 0.7723 | 0.3198 | 2.0444 | 0.7723 | 0.7714 | 0.0139 | 0.0641 |
| 0.7167 | 4.0 | 20000 | 0.7048 | 0.7924 | 0.2931 | 1.9414 | 0.7924 | 0.7931 | 0.0135 | 0.0541 |
| 0.6656 | 5.0 | 25000 | 0.6658 | 0.8052 | 0.2770 | 1.8581 | 0.8052 | 0.8056 | 0.0108 | 0.0486 |
| 0.6252 | 6.0 | 30000 | 0.6415 | 0.8117 | 0.2670 | 1.8157 | 0.8117 | 0.8112 | 0.0128 | 0.0455 |
| 0.6038 | 7.0 | 35000 | 0.6269 | 0.8176 | 0.2607 | 1.7833 | 0.8176 | 0.8180 | 0.0144 | 0.0432 |
| 0.5784 | 8.0 | 40000 | 0.6217 | 0.8195 | 0.2583 | 1.7723 | 0.8195 | 0.8195 | 0.0151 | 0.0425 |
| 0.5583 | 9.0 | 45000 | 0.6150 | 0.8214 | 0.2553 | 1.7719 | 0.8214 | 0.8214 | 0.0164 | 0.0415 |
| 0.5519 | 10.0 | 50000 | 0.6158 | 0.8210 | 0.2556 | 1.7696 | 0.8210 | 0.8209 | 0.0176 | 0.0418 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231002
- Datasets 2.7.1
- Tokenizers 0.13.3
|
varunnayak3101/dqn-SpaceInvadersNoFrameskip
|
varunnayak3101
| 2023-10-03T05:41:37Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T05:40:59Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 637.00 +/- 159.86
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga varunnayak3101 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga varunnayak3101 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga varunnayak3101
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
AayushShah/SQL_Final_RunPod_Last
|
AayushShah
| 2023-10-03T05:32:40Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-10-02T18:35:52Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: SQL_Final_RunPod_Last
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SQL_Final_RunPod_Last
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0215
- Bleu: 44.256
- Gen Len: 18.9114
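A minimal usage sketch with the `text2text-generation` pipeline; the prompt wording is an assumption, since the card does not document the expected input format:

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="AayushShah/SQL_Final_RunPod_Last")

# The phrasing of the request is an assumption; the card does not specify a prompt template.
print(generator("Translate the following question to SQL: list all employees hired after 2020"))
```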
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.2055 | 0.12 | 1000 | 0.0917 | 42.8594 | 18.8938 |
| 0.1187 | 0.23 | 2000 | 0.0709 | 43.1637 | 18.8915 |
| 0.1007 | 0.35 | 3000 | 0.0602 | 43.4304 | 18.9088 |
| 0.0869 | 0.46 | 4000 | 0.0559 | 43.4636 | 18.8961 |
| 0.0792 | 0.58 | 5000 | 0.0497 | 43.5366 | 18.9063 |
| 0.0736 | 0.69 | 6000 | 0.0464 | 43.5769 | 18.9016 |
| 0.0672 | 0.81 | 7000 | 0.0435 | 43.7471 | 18.9068 |
| 0.0635 | 0.93 | 8000 | 0.0403 | 43.781 | 18.9073 |
| 0.0564 | 1.04 | 9000 | 0.0389 | 43.7054 | 18.9029 |
| 0.0493 | 1.16 | 10000 | 0.0376 | 43.8362 | 18.9063 |
| 0.0479 | 1.27 | 11000 | 0.0367 | 43.8514 | 18.9126 |
| 0.0465 | 1.39 | 12000 | 0.0350 | 43.8365 | 18.9078 |
| 0.0449 | 1.5 | 13000 | 0.0335 | 43.8878 | 18.9042 |
| 0.0419 | 1.62 | 14000 | 0.0324 | 43.9035 | 18.9075 |
| 0.0426 | 1.74 | 15000 | 0.0314 | 43.9272 | 18.906 |
| 0.0405 | 1.85 | 16000 | 0.0302 | 44.0143 | 18.9087 |
| 0.039 | 1.97 | 17000 | 0.0291 | 43.9392 | 18.9089 |
| 0.0327 | 2.08 | 18000 | 0.0286 | 44.0248 | 18.9087 |
| 0.0311 | 2.2 | 19000 | 0.0288 | 44.0732 | 18.9119 |
| 0.0302 | 2.31 | 20000 | 0.0282 | 44.061 | 18.9055 |
| 0.029 | 2.43 | 21000 | 0.0279 | 44.0681 | 18.9121 |
| 0.0297 | 2.55 | 22000 | 0.0267 | 44.0958 | 18.91 |
| 0.0284 | 2.66 | 23000 | 0.0259 | 44.1215 | 18.9121 |
| 0.0272 | 2.78 | 24000 | 0.0259 | 44.0752 | 18.9113 |
| 0.0273 | 2.89 | 25000 | 0.0253 | 44.1104 | 18.909 |
| 0.0265 | 3.01 | 26000 | 0.0253 | 44.1262 | 18.9095 |
| 0.0215 | 3.12 | 27000 | 0.0251 | 44.137 | 18.9119 |
| 0.0215 | 3.24 | 28000 | 0.0246 | 44.1382 | 18.9096 |
| 0.0215 | 3.36 | 29000 | 0.0244 | 44.1806 | 18.9088 |
| 0.0206 | 3.47 | 30000 | 0.0237 | 44.169 | 18.911 |
| 0.0202 | 3.59 | 31000 | 0.0243 | 44.1469 | 18.9096 |
| 0.0204 | 3.7 | 32000 | 0.0231 | 44.1405 | 18.9116 |
| 0.0193 | 3.82 | 33000 | 0.0230 | 44.1613 | 18.9116 |
| 0.0196 | 3.94 | 34000 | 0.0226 | 44.197 | 18.9117 |
| 0.0177 | 4.05 | 35000 | 0.0228 | 44.1942 | 18.9102 |
| 0.0155 | 4.17 | 36000 | 0.0230 | 44.2241 | 18.9118 |
| 0.0159 | 4.28 | 37000 | 0.0226 | 44.2219 | 18.9107 |
| 0.0151 | 4.4 | 38000 | 0.0221 | 44.212 | 18.912 |
| 0.0149 | 4.51 | 39000 | 0.0222 | 44.2743 | 18.9115 |
| 0.0154 | 4.63 | 40000 | 0.0216 | 44.2636 | 18.9121 |
| 0.0149 | 4.75 | 41000 | 0.0215 | 44.2805 | 18.913 |
| 0.0146 | 4.86 | 42000 | 0.0216 | 44.2681 | 18.9125 |
| 0.0145 | 4.98 | 43000 | 0.0215 | 44.256 | 18.9114 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
agustin228/image_classification
|
agustin228
| 2023-10-03T05:14:58Z | 195 | 1 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:pokemon-classification",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-14T08:05:48Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- pokemon-classification
metrics:
- accuracy
model-index:
- name: image_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: pokemon-classification
type: pokemon-classification
config: full
split: train[:4800]
args: full
metrics:
- name: Accuracy
type: accuracy
value: 0.8854166666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pokemon-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8072
- Accuracy: 0.8854
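A minimal inference sketch with the `image-classification` pipeline (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="agustin228/image_classification")
# "pokemon.png" is a placeholder path; pass any local image file or PIL.Image instead.
print(classifier("pokemon.png"))
```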
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 240 | 2.0511 | 0.7427 |
| No log | 2.0 | 480 | 0.9657 | 0.8792 |
| 2.3005 | 3.0 | 720 | 0.8118 | 0.8833 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
snintendog/LGO_Eevee
|
snintendog
| 2023-10-03T05:00:18Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-10-03T04:15:21Z |
---
license: openrail
---
Trained on a 10-minute dataset ripped from the Switch games. 300 epochs. RVC v2, RMVPE.
There is a noticeable "Vee"/"Vui" lisp every so often. It favors high pitches: roughly +15 to +26 for male voices and +6 to +12 for female voices, though this varies per voice as always.
|
flyover19/santacoder-finetuned-the-stack-bash
|
flyover19
| 2023-10-03T04:46:00Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:bigcode/santacoder",
"base_model:finetune:bigcode/santacoder",
"license:bigcode-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-29T21:48:31Z |
---
license: bigcode-openrail-m
base_model: bigcode/santacoder
tags:
- generated_from_trainer
model-index:
- name: santacoder-finetuned-the-stack-bash
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# santacoder-finetuned-the-stack-bash
This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2202
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7564 | 0.1 | 500 | 1.3213 |
| 1.6757 | 0.2 | 1000 | 4.5570 |
| 1.6668 | 0.3 | 1500 | 7.4934 |
| 0.4505 | 0.4 | 2000 | 0.4260 |
| 1.6604 | 0.5 | 2500 | 0.5150 |
| 1.6552 | 0.6 | 3000 | 0.5775 |
| 1.6481 | 0.7 | 3500 | 0.6173 |
| 1.656 | 0.8 | 4000 | 0.2171 |
| 1.6554 | 0.9 | 4500 | 0.2198 |
| 1.6563 | 1.0 | 5000 | 0.2202 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
|
casque/mistoonAnime_v10
|
casque
| 2023-10-03T04:39:51Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-10-02T14:53:25Z |
---
license: creativeml-openrail-m
---
|
viethq188/llama2-chat-vi-gguf-q4_0
|
viethq188
| 2023-10-03T04:06:58Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-10-03T02:51:27Z |
## Description
This repo contains a GGUF-format model that is a quantization of https://huggingface.co/ngoantech/Llama-2-7b-vietnamese-20k
# Inference Code Example (Langchain+Python)
```python
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
template = """Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.
User: Chào Bob.
Bob: Chào bạn. Tôi có thể giúp gì cho bạn?
User: Thủ đô của Việt Nam là thành phố nào?
Bob: Hà Nội là thủ đô của Việt Nam
User: {question}"""
# template = """<<SYS>>\nYou are a helpful assistant. Bạn là một trợ lí hữu ích.\n<</SYS>>\n\n[INST] {question} [/INST] """
# template = """[INST] <<SYS>>
# You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
# <</SYS>>
# {question} [/INST]
# """
prompt = PromptTemplate(template=template, input_variables=["question"])
# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# Make sure the model path is correct for your system!
llm = LlamaCpp(
model_path="/path/to/model/gguf-model-q4_0.bin",
temperature=0.1,
max_tokens=1024,
top_p=1,
callback_manager=callback_manager,
verbose=True, # Verbose is required to pass to the callback manager
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "Quốc khánh của Việt Nam diễn ra vào ngày nào?"
print(prompt.format(question=question))
llm_chain.run(question)
```
# Inference Code Example (Llama.cpp)
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp/ && make -j
./main -m /path/to/model/gguf-model-q4_0.bin --temp 0.1 -t 8 -n 1024 --color -p "VNG Corporation là công ty công nghệ hàng đầu "
./main -m /path/to/model/gguf-model-q4_0.bin --temp 0.1 -t 8 -n 1024 --color -r "User:" -f /path/to/chat/prompt/chat.txt
```
---
license: apache-2.0
---
|
nomsgadded/ppo-LunarLander-v2_test
|
nomsgadded
| 2023-10-03T04:01:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T04:01:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -57.79 +/- 10.04
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
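A minimal sketch of loading and running the checkpoint; the archive name `ppo-LunarLander-v2.zip` is an assumption based on the usual SB3 naming, so check this repo's file list before running:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption; verify it against the files in this repo.
checkpoint = load_from_hub(repo_id="nomsgadded/ppo-LunarLander-v2_test", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# On newer gymnasium releases the environment id may be "LunarLander-v3".
env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```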
|
RingoDingo/MaidClassifier
|
RingoDingo
| 2023-10-03T04:00:28Z | 0 | 0 | null |
[
"license:gpl-2.0",
"region:us"
] | null | 2023-10-03T03:51:01Z |
---
license: gpl-2.0
---
Convolutional classifier model trained to distinguish "proper" Victorian-style maids from maid-style cosplays.
|
hinsane2/ppo-lunarlandernew
|
hinsane2
| 2023-10-03T03:57:58Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-10-03T03:57:37Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.51 +/- 20.46
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ND911/Kitchen_Sink_3D_Lora
|
ND911
| 2023-10-03T03:55:16Z | 0 | 0 | null |
[
"SDXL",
"Lora",
"region:us"
] | null | 2023-10-03T01:01:12Z |
---
tags:
- SDXL
- Lora
---
## A LoRA for SDXL; use it freely however you like. Workflow included

|
Charlie911/vicuna-7b-v1.5-lora-mixed-datasets
|
Charlie911
| 2023-10-03T03:43:35Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"region:us"
] | null | 2023-10-02T17:28:06Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0
|
davolu/medical-illustration-heart
|
davolu
| 2023-10-03T03:26:56Z | 5 | 7 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-03T03:21:33Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Medical-illustration-(heart) Dreambooth model trained by David Oluyale with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
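A minimal `diffusers` sketch for loading the concept; the prompt wording is an assumption, since DreamBooth concepts usually respond best to their instance token, which is not stated here:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "davolu/medical-illustration-heart", torch_dtype=torch.float16
).to("cuda")

# The prompt is an assumption; adjust it to the concept's instance token if needed.
image = pipe("a medical illustration of a human heart").images[0]
image.save("heart.png")
```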
Sample pictures of this concept:
|
silvacarl/phi-1_5-safetensors
|
silvacarl
| 2023-10-03T03:21:06Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"region:us"
] | null | 2023-10-03T03:21:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
Ui1236/Htc
|
Ui1236
| 2023-10-03T03:09:34Z | 0 | 0 |
allennlp
|
[
"allennlp",
"chemistry",
"biology",
"legal",
"summarization",
"ae",
"dataset:fka/awesome-chatgpt-prompts",
"license:apache-2.0",
"region:us"
] |
summarization
| 2023-10-03T03:07:55Z |
---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- ae
metrics:
- bertscore
library_name: allennlp
pipeline_tag: summarization
tags:
- chemistry
- biology
- legal
---
|
happyterrylol/distilbert-base-uncased-finetuned-cola
|
happyterrylol
| 2023-10-03T03:03:18Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-27T09:24:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6766
- eval_matthews_correlation: -0.0464
- eval_runtime: 35.5548
- eval_samples_per_second: 29.335
- eval_steps_per_second: 1.856
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Chirag051/Taxi-v3
|
Chirag051
| 2023-10-03T02:52:47Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-03T20:15:46Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Chirag051/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
zen-E/bert-mini-sentence-distil-unsupervised
|
zen-E
| 2023-10-03T02:43:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"en",
"dataset:ffgcc/NEWS5M",
"dataset:zen-E/NEWS5M-simcse-roberta-large-embeddings-pca-256",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-10-01T14:38:22Z |
---
datasets:
- ffgcc/NEWS5M
- zen-E/NEWS5M-simcse-roberta-large-embeddings-pca-256
language:
- en
metrics:
- pearsonr
- spearmanr
library_name: transformers
---
The model is trained by knowledge distillation from "princeton-nlp/unsup-simcse-roberta-large" into "prajjwal1/bert-mini" on 'ffgcc/NEWS5M'.
The model can be used for inference via `AutoModel` (see the sketch below).
The model achieves 0.825 (Pearson) and 0.83 (Spearman) on the STS-b test set.
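A minimal inference sketch; taking the pooler output as the sentence embedding is an inference from the training code below (`_, o = self.bert(**sentences)`), not an explicit statement in the card:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zen-E/bert-mini-sentence-distil-unsupervised")
model = AutoModel.from_pretrained("zen-E/bert-mini-sentence-distil-unsupervised")

sentences = ["A man is playing guitar.", "Someone is performing music."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Use the pooler output as the sentence embedding, mirroring the training forward pass below.
embeddings = outputs.pooler_output
score = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(score.item())
```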
For more training detail, the training config and the PyTorch forward function are as follows:
```python
config = {
    'epoch': 200,
    'learning_rate': 3e-4,
    'batch_size': 12288,
    'temperature': 0.05,
}
```
```python
def forward_cos_mse_kd_unsup(self, sentences, teacher_sentence_embs):
"""forward function for the unsupervised News5M dataset"""
_, o = self.bert(**sentences)
# cosine similarity between the first half batch and the second half batch
half_batch = o.size(0) // 2
higher_half = half_batch * 2 #skip the last datapoint when the batch size number is odd
cos_sim = cosine_sim(o[:half_batch], o[half_batch:higher_half])
cos_sim_teacher = cosine_sim(teacher_sentence_embs[:half_batch], teacher_sentence_embs[half_batch:higher_half])
# KL Divergence between student and teacher probabilities
soft_teacher_probs = F.softmax(cos_sim_teacher / self.temperature, dim=1)
kd_contrastive_loss = F.kl_div(F.log_softmax(cos_sim / self.temperature, dim=1),
soft_teacher_probs,
reduction='batchmean')
# MSE loss
kd_mse_loss = nn.MSELoss()(o, teacher_sentence_embs)/3
# equal weight for the two losses
total_loss = kd_contrastive_loss*0.5 + kd_mse_loss*0.5
return total_loss, kd_contrastive_loss, kd_mse_loss
```
|
nunenuh/bloomz-560m_PROMPT_TUNING_CAUSAL_LM
|
nunenuh
| 2023-10-03T02:30:43Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-10-03T02:30:42Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
tylerkiser/ppo-SnowballTarget
|
tylerkiser
| 2023-10-03T02:21:39Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-10-01T15:59:49Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tylerkiser/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Frorozcol/lince_lora_instrction_food
|
Frorozcol
| 2023-10-03T02:20:25Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:clibrain/Llama-2-7b-ft-instruct-es",
"base_model:finetune:clibrain/Llama-2-7b-ft-instruct-es",
"license:apache-2.0",
"region:us"
] | null | 2023-10-02T14:36:47Z |
---
license: apache-2.0
base_model: clibrain/Llama-2-7b-ft-instruct-es
tags:
- generated_from_trainer
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [clibrain/Llama-2-7b-ft-instruct-es](https://huggingface.co/clibrain/Llama-2-7b-ft-instruct-es) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 500
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|