modelId (string, lengths 5 to 139) | author (string, lengths 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-12 00:41:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (555 string classes) | tags (list, lengths 1 to 4.05k) | pipeline_tag (55 string classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-12 00:40:24) | card (string, lengths 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
Fixedbot/Taxis-v3
|
Fixedbot
| 2023-07-13T11:36:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T11:18:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxis-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
model = load_from_hub(repo_id="Fixedbot/Taxis-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
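A fuller, self-contained sketch of evaluating the downloaded policy is below. It assumes, as in the Deep Reinforcement Learning course template, that the pickled dict stores the Q-table under a `"qtable"` key and the environment id under `"env_id"`; the download helper shown here is the generic `huggingface_hub` one rather than the course's `load_from_hub`.
```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

# Download and unpickle the model dict.
path = hf_hub_download(repo_id="Fixedbot/Taxis-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(model["qtable"][state].argmax())  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```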
|
Graphcore/mt5-small-ipu
|
Graphcore
| 2023-07-13T11:20:36Z | 4 | 0 | null |
[
"optimum_graphcore",
"arxiv:1910.10683",
"arxiv:2010.11934",
"license:apache-2.0",
"region:us"
] | null | 2023-05-19T15:01:28Z |
---
license: apache-2.0
---
# Graphcore/mt5-small-ipu
Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools that enable maximum efficiency when training and running models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore).
Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
## Model description
Multilingual Text-to-Text Transfer Transformer (mT5) is the multilingual variant of [T5](https://arxiv.org/abs/1910.10683). T5 is a Transformer-based model that uses a text-to-text approach for translation, question answering, and classification. It introduces a unified framework that converts all text-based language problems into a text-to-text format for transfer learning in NLP. This allows for the use of the same model, loss function, hyperparameters, etc. across our diverse set of tasks.
mT5 is pretrained on the mC4 corpus, covering 101 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
Note: mT5 was only pre-trained on mC4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
Paper link: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)
## Intended uses & limitations
This model contains just the `IPUConfig` files for running the mT5 Small model (e.g. [HuggingFace/google/mt5-small](https://huggingface.co/google/mt5-small)) on Graphcore IPUs.
**This model contains no model weights, only an IPUConfig.**
## Usage
```python
from optimum.graphcore import IPUConfig
ipu_config = IPUConfig.from_pretrained("Graphcore/mt5-small-ipu")
```
|
PraveenJesu/openai-whisper-medium-murf
|
PraveenJesu
| 2023-07-13T11:13:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T11:13:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
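For reference, a configuration with these values would typically be constructed via `transformers.BitsAndBytesConfig` roughly as follows; this is a sketch based on the fields listed above, not code taken from this repository:
```python
import torch
from transformers import BitsAndBytesConfig

# 8-bit quantization config mirroring the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```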
### Framework versions
- PEFT 0.4.0.dev0
|
RushTurtle/crnn_vgg16_bn_20230713-111233
|
RushTurtle
| 2023-07-13T11:13:02Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-07-13T11:12:55Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
### Run Configuration
```json
{
  "arch": "crnn_vgg16_bn",
  "train_path": "/tmp/dataset/train3_1100/",
  "val_path": "/tmp/dataset/val3_1100/",
  "train_samples": 1000,
  "val_samples": 20,
  "font": "FreeMono.ttf,FreeSans.ttf,FreeSerif.ttf",
  "min_chars": 1,
  "max_chars": 12,
  "name": null,
  "epochs": 3,
  "batch_size": 64,
  "device": 0,
  "input_size": 32,
  "lr": 0.001,
  "weight_decay": 0,
  "workers": 16,
  "resume": null,
  "vocab": "french",
  "test_only": false,
  "show_samples": false,
  "wb": false,
  "push_to_hub": true,
  "pretrained": false,
  "sched": "cosine",
  "amp": false,
  "find_lr": false
}
```
|
Virch/q-Taxi-v3
|
Virch
| 2023-07-13T11:10:10Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T11:08:03Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Virch/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BlueSunflower/gpt2-medium-chess
|
BlueSunflower
| 2023-07-13T10:51:47Z | 188 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-30T14:13:46Z |
# Model description
GPT-2 Medium fine-tuned on 8 million chess games (short algebraic notation).
Data: Chess DB + sample from lichess + sample from CCRL
Example context: "1-0 2700 1350 1.e4 e5 2.Nf3 Nc6" (format: white_score-black_score white_elo black_elo moves)
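A minimal sketch of prompting the model with a context in this format using the standard `transformers` generation API (illustrative, not an official example from the authors):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BlueSunflower/gpt2-medium-chess")
model = AutoModelForCausalLM.from_pretrained("BlueSunflower/gpt2-medium-chess")

# Prompt: result, white Elo, black Elo, then the moves played so far.
context = "1-0 2700 1350 1.e4 e5 2.Nf3 Nc6"
inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```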
# Model results
- Elo (measured against Stockfish): ~1340
- Legal moves: 98.5%
- Checkmates in one move (from the BIG-bench benchmark): 46.5%
---
license: agpl-3.0
---
|
Virch/q-FrozenLake-v1-4x4-noSlippery
|
Virch
| 2023-07-13T10:51:06Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T10:43:03Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Virch/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jackyjuakers/home
|
jackyjuakers
| 2023-07-13T10:50:41Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-07-13T10:50:41Z |
---
license: bigcode-openrail-m
---
|
waliaMuskaan011/model5
|
waliaMuskaan011
| 2023-07-13T10:40:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-12T16:33:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
- tasks: automatic speech recognition
### Framework versions
- PEFT 0.4.0.dev0
|
imone/LLaMA_13B_with_EOT_token
|
imone
| 2023-07-13T10:40:28Z | 12 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-26T08:22:58Z |
---
license: other
language:
- en
pipeline_tag: text-generation
---
# LLaMA 13B with End-of-turn (EOT) Token
This is the LLaMA 13B model with `<|end_of_turn|>` token added as id `32000`. The token input/output embedding is initialized as the mean of all existing input/output token embeddings, respectively.
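A sketch of how such a token can be added to a base LLaMA checkpoint and initialized this way is shown below. The base checkpoint path is a placeholder, and this released repository already contains the added token, so the snippet is purely illustrative:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical local path to a base LLaMA 13B checkpoint.
base = "path/to/llama-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Add the end-of-turn token (id 32000 for a 32000-token base vocab) and resize embeddings.
tokenizer.add_special_tokens({"additional_special_tokens": ["<|end_of_turn|>"]})
model.resize_token_embeddings(len(tokenizer))

# Initialize the new row of the input/output embeddings to the mean of the existing rows.
with torch.no_grad():
    input_emb = model.get_input_embeddings().weight
    output_emb = model.get_output_embeddings().weight
    input_emb[-1] = input_emb[:-1].mean(dim=0)
    output_emb[-1] = output_emb[:-1].mean(dim=0)
```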
|
ivivnov/ppo-Huggy
|
ivivnov
| 2023-07-13T10:36:38Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-13T10:36:25Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ivivnov/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
madroid/autotrain-text-chat-74266139562
|
madroid
| 2023-07-13T10:25:06Z | 108 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"autotrain",
"en",
"dataset:madroid/autotrain-data-text-chat",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T10:24:08Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- madroid/autotrain-data-text-chat
co2_eq_emissions:
emissions: 0.3508472536259808
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 74266139562
- CO2 Emissions (in grams): 0.3508
## Validation Metrics
- Loss: 0.005
- Accuracy: 1.000
- Macro F1: 1.000
- Micro F1: 1.000
- Weighted F1: 1.000
- Macro Precision: 1.000
- Micro Precision: 1.000
- Weighted Precision: 1.000
- Macro Recall: 1.000
- Micro Recall: 1.000
- Weighted Recall: 1.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/madroid/autotrain-text-chat-74266139562
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("madroid/autotrain-text-chat-74266139562", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("madroid/autotrain-text-chat-74266139562", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
arunboss/triage_R5_model
|
arunboss
| 2023-07-13T10:17:28Z | 218 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-13T03:02:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: triage_R5_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# triage_R5_model
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0123
- Accuracy: 0.6837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
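For reference, these settings map roughly onto the following `TrainingArguments`; this is a sketch using standard Transformers argument names, not the authors' exact training script:
```python
from transformers import TrainingArguments

# Hyperparameters listed above; output_dir is illustrative.
training_args = TrainingArguments(
    output_dir="triage_R5_model",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=12,
)
```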
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0452 | 1.0 | 159 | 1.9622 | 0.3814 |
| 1.7034 | 2.0 | 319 | 1.5695 | 0.4923 |
| 1.441 | 3.0 | 479 | 1.4427 | 0.5433 |
| 1.2908 | 4.0 | 639 | 1.2970 | 0.5895 |
| 1.2294 | 5.0 | 798 | 1.2293 | 0.6071 |
| 1.1097 | 6.0 | 958 | 1.1892 | 0.6300 |
| 1.0342 | 7.0 | 1118 | 1.1048 | 0.6546 |
| 0.9644 | 8.0 | 1278 | 1.0731 | 0.6678 |
| 0.8534 | 9.0 | 1437 | 1.0367 | 0.6766 |
| 0.8037 | 10.0 | 1597 | 1.0211 | 0.6802 |
| 0.7765 | 11.0 | 1757 | 1.0073 | 0.6885 |
| 0.7658 | 11.94 | 1908 | 1.0123 | 0.6837 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
preetham/rpanda_lora
|
preetham
| 2023-07-13T10:08:51Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-13T09:52:28Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks panda
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - preetham/rpanda_lora
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks panda using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
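A minimal inference sketch with `diffusers` is shown below; it assumes the repository's LoRA weights can be loaded via `load_lora_weights` and uses the instance prompt from the card:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then apply the LoRA adaption weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("preetham/rpanda_lora")

image = pipe("a photo of sks panda", num_inference_steps=30).images[0]
image.save("rpanda.png")
```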
|
Devops-hestabit/Othehalf-1.3b-onnx
|
Devops-hestabit
| 2023-07-13T10:08:08Z | 4 | 0 |
transformers
|
[
"transformers",
"onnx",
"gpt_neox",
"text-generation",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T09:25:46Z |
---
license: creativeml-openrail-m
---
|
ZoeVN/segformer-scene-parse-150-lora-50-epoch
|
ZoeVN
| 2023-07-13T10:02:46Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T10:02:45Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
thomas2112/ppo-SnowballTarget
|
thomas2112
| 2023-07-13T10:01:51Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-05-30T20:11:10Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: thomas2112/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
fnlp/moss-rlhf-reward-model-7B-en
|
fnlp
| 2023-07-13T09:54:07Z | 0 | 9 | null |
[
"llm",
"reward model",
"moss",
"rlhf",
"zh",
"arxiv:2307.04964",
"license:agpl-3.0",
"region:us"
] | null | 2023-07-13T03:12:42Z |
---
license: agpl-3.0
language:
- zh
tags:
- llm
- reward model
- moss
- rlhf
---
# MOSS-RLHF
### *MOSS-RLHF & "Secrets of RLHF in Large Language Models Part I: PPO"* <br>👉 <a href="https://arxiv.org/abs/2307.04964" target="_blank">[Technical report]</a> <a href="https://openlmlab.github.io/MOSS-RLHF/" target="_blank">[Home page]</a>
## 🌟 News
### 👉 Wed, 12. July 2023. We have released a Chinese reward model based on OpenChineseLlama-7B!
[moss-rlhf-reward-model-7B-zh](https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main)
<br>
### 👉 Thu, 13. July 2023. We have released an English reward model and an SFT model based on Llama-7B!
[moss-rlhf-reward-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-reward-model-7B-en)
[moss-rlhf-sft-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-sft-model-7B-en)
<br>
## 🧾 Open-source List
- [x] Open source code for RL training in large language models.
- [x] A 7B Chinese reward model based on openChineseLlama.
- [x] A 7B English reward model based on Llama-7B.
- [x] SFT model for English.
- [ ] Policy model for English after RLHF.
- ...
## 🌠 Introduction
Due to the challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, there is a significant barrier for AI researchers to develop technical alignment and achieve the safe deployment of LLMs. The stable training of RLHF has remained a puzzle.
In this technical report, we intend to help researchers train their models stably with human feedback.
Contributions are summarized as follows:
1) We release competitive Chinese and English reward models, respectively, which have good cross-model generalization ability, alleviating the cost of relabeling human preference data;
2) We conduct in-depth analysis on the inner workings of PPO algorithm and propose the PPO-max algorithm to ensure stable model training;
3) We release the complete PPO-max codes to ensure that the LLMs in the current SFT stage can be better aligned with humans.
## 🔩 Requirements & Setup
This repository works on Python 3.8 and PyTorch 1.13.1.
We recommend using the **conda** virtual environment to run the code.
#### Step 1: Create a new Python virtual environment
```bash
conda update conda -n base -c defaults
conda create -n rlhf python=3.8
conda activate rlhf
```
#### Step 2: Install PyTorch and TensorBoard
```bash
conda install pytorch==1.13.1 pytorch-cuda=11.7 tensorboard -c pytorch -c nvidia
```
#### Step 3: Install the remaining dependencies
```bash
conda install datasets accelerate safetensors chardet cchardet -c huggingface -c conda-forge
pip3 install transformers sentencepiece einops triton==1.0.0 rouge jionlp==1.4.14 nltk sacrebleu cpm_kernels
apt install libaio-dev
DS_BUILD_OPS=1 pip install deepspeed
```
## ✨ Start training your own model!
Run code in a few steps.
### Step 1: Recover Reward model weights
We cannot directly release the full weights of the reward model because of licensing restrictions.
You can merge the diff weights with the original Llama-7B to recover the reward model we used.
We upload the diff models (thanks to tatsu-lab); you can recover the reward model by following these steps:
```bash
1) Download the weight diff into your local machine. The weight diff is located at:
# For English:
TODO
# For Chinese:
https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main
2) Merge the weight diff with the original Llama-7B:
# For English:
# Reward model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-en/diff --path_tuned ./models/moss-rlhf-reward-model-7B-en/recover --model_type reward
# SFT model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-sft-model-7B-en/diff --path_tuned ./models/moss-rlhf-sft-model-7B-en/recover --model_type sft
# Policy model
TODO
# For Chinese:
python merge_weight_zh.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-zh/diff --path_tuned ./models/moss-rlhf-reward-model-7B-zh/recover
```
### Step 2: Select your own SFT model.
Because of some limitations, we cannot currently release the **Chinese** SFT model.
You can use your own SFT model, or a strong base model, instead of our SFT model.
### Step 3: Start training
Run the command below.
```bash
# For Chinese:
# You need to use your own sft model currently.
bash run_zh.sh
# For English:
# We have loaded the sft model and reward model to huggingface.
bash run_en.sh
```
## Citation
```bibtex
@article{zheng2023secrets,
title={Secrets of RLHF in Large Language Models Part I: PPO},
author={Rui Zheng and Shihan Dou and Songyang Gao and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Qin Liu and Limao Xiong and Lu Chen and Zhiheng Xi and Yuhao Zhou and Nuo Xu and Wenbin Lai and Minghao Zhu and Rongxiang Weng and Wensen Cheng and Cheng Chang and Zhangyue Yin and Yuan Hua and Haoran Huang and Tianxiang Sun and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang},
year={2023},
eprint={2307.04964},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
vincenzodeleo/distilbert-base-uncased-finetuned-squad
|
vincenzodeleo
| 2023-07-13T09:53:55Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-12T16:59:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
thomas2112/dqn-SpaceInvadersNoFrameskip-v4
|
thomas2112
| 2023-07-13T09:39:48Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-15T12:26:01Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 622.50 +/- 94.51
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga thomas2112 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga thomas2112 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga thomas2112
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
vincenzodeleo/bert-base-cased-squad2-finetuned-squad
|
vincenzodeleo
| 2023-07-13T09:36:56Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-13T09:13:53Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-cased-squad2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-squad2-finetuned-squad
This model is a fine-tuned version of [deepset/bert-base-cased-squad2](https://huggingface.co/deepset/bert-base-cased-squad2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dada325/Taxi-v3-qLearning-test
|
dada325
| 2023-07-13T09:34:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T09:34:46Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-qLearning-test
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="dada325/Taxi-v3-qLearning-test", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ashokurlana/mBART-TeSum
|
ashokurlana
| 2023-07-13T09:33:38Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"generated_from_trainer",
"multilingual",
"te",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-12T18:19:29Z |
---
language:
- multilingual
- te
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mBART-TeSum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mBART-TeSum
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the [TeSum](https://ltrc.iiit.ac.in/showfile.php?filename=downloads/teSum/) dataset. More details about the training and analysis are given in the [paper](https://aclanthology.org/2022.lrec-1.614.pdf).
## Model description
mBART-50 is a multilingual Sequence-to-Sequence model. It was introduced to show that multilingual translation models can be created through multilingual fine-tuning.
Instead of fine-tuning on one direction, a pre-trained model is fine-tuned on many directions simultaneously.
mBART-50 was created by taking the original mBART model and extending it with an extra 25 languages to support multilingual machine translation across 50 languages.
The pre-training objective is explained below.
**Multilingual Denoising Pretraining**: The model incorporates N languages by concatenating data:
`D = {D1, ..., DN }` where each Di is a collection of monolingual documents in language `i`.
The source documents are noised using two schemes, first randomly shuffling the original sentences' order, and second a novel in-filling scheme,
where spans of text are replaced with a single mask token. The model is then tasked to reconstruct the original text.
35% of each instance's words are masked by randomly sampling span lengths according to a Poisson distribution `(λ = 3.5)`.
The decoder input is the original text with one position offset. A language id symbol `LID` is used as the initial token to predict
the sentence.
## Intended uses & limitations
mbart-large-50 is a pre-trained model, primarily aimed at being fine-tuned on translation tasks. It can also be fine-tuned on other multilingual sequence-to-sequence tasks. See the [model hub](https://huggingface.co/models?other=mbart-50) to look for fine-tuned versions.
## Training
As the model is multilingual, it expects the sequences in a different format. A special language id token is used as a prefix in both the source and target text.
The text format is `[lang_code] X [eos]`, with `X` being the source or target text, and `lang_code` being `source_lang_code` for source text
and `tgt_lang_code` for target text. `bos` is never used. Once the examples are prepared in this format, the model can be trained like any other sequence-to-sequence model.
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
model = MBartForConditionalGeneration.from_pretrained("ashokurlana/mBART-TeSum")
tokenizer = MBart50TokenizerFast.from_pretrained("ashokurlana/mBART-TeSum", src_lang="te_IN", tgt_lang="te_IN")
src_text = "తెలంగాణలో సచలనం సృష్టించిన టీఎస్పీఎస్సీ పేపర్ లీకేజీ వ్యవహారంపై ప్రభుత్వం తరపున మంత్రి కేటీఆర్ తొలిసారి స్పందించారు. ఇది వ్యవస్థ వైఫల్యం కాదని.., ఇద్దరు వ్యక్తులు చేసిన తప్పు అని కేటీఆర్ వెల్లడించారు. ఈ వ్యవహారం వెనుక ఏ పార్టీకి చెందిన వారున్నా.., ఎంతటి వారైనా కఠినంగా శిక్షిస్తామని చెప్పారు. నిరుద్యోగుల్లో ఆందోళనలు రేకెత్తించేలా ప్రతిపక్షాలు మాట్లాడటం సరికాదని హితవు పలికారు."
tgt_text = "తెలంగాణలో సచలనం సృష్టించిన టీఎస్ పీఎస్సీ పేపర్ లీకేజీ వ్యవహారంపై ప్రభుత్వం తరపున మంత్రి కేటీఆర్ స్పందించారు. ఇది వ్యవస్థ వైఫల్యం కాదని, ఇద్దరు వ్యక్తులు చేసిన తప్పు అని, ఈ వ్యవహారం వెనుక ఏ పార్టీకి చెందిన వారున్నా కఠినంగా శిక్షిస్తామని చెప్పారు."
model_inputs = tokenizer(src_text, return_tensors="pt")
with tokenizer.as_target_tokenizer():
labels = tokenizer(tgt_text, return_tensors="pt").input_ids
model(**model_inputs, labels=labels) # forward pass
```
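Generation for summarization follows the usual mBART-50 pattern; a short sketch continuing from the snippet above (the generation hyperparameters here are illustrative, not the values used for the reported results):
```python
# Generate a Telugu summary for the tokenized source text from the snippet above.
summary_ids = model.generate(
    **model_inputs,
    num_beams=4,
    max_length=256,
    forced_bos_token_id=tokenizer.lang_code_to_id["te_IN"],
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```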
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Evaluation results
It achieves the following results on the evaluation set:
- Loss: 1.4009
- Rouge1: 32.8603
- Rouge2: 12.2822
- Rougel: 31.7473
- Rougelsum: 32.505
- Gen Len: 117.6326
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
### BibTeX entry and citation info
```bibtex
@inproceedings{urlana-etal-2022-tesum,
title = "{T}e{S}um: Human-Generated Abstractive Summarization Corpus for {T}elugu",
author = "Urlana, Ashok and
Surange, Nirmal and
Baswani, Pavan and
Ravva, Priyanka and
Shrivastava, Manish",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.614",
pages = "5712--5722",
abstract = "Expert human annotation for summarization is definitely an expensive task, and can not be done on huge scales. But with this work, we show that even with a crowd sourced summary generation approach, quality can be controlled by aggressive expert informed filtering and sampling-based human evaluation. We propose a pipeline that crowd-sources summarization data and then aggressively filters the content via: automatic and partial expert evaluation. Using this pipeline we create a high-quality Telugu Abstractive Summarization dataset (TeSum) which we validate with sampling-based human evaluation. We also provide baseline numbers for various models commonly used for summarization. A number of recently released datasets for summarization, scraped the web-content relying on the assumption that summary is made available with the article by the publishers. While this assumption holds for multiple resources (or news-sites) in English, it should not be generalised across languages without thorough analysis and verification. Our analysis clearly shows that this assumption does not hold true for most Indian language news resources. We show that our proposed filtration pipeline can even be applied to these large-scale scraped datasets to extract better quality article-summary pairs.",
}
```
|
Fixedbot/ppo-Huggy
|
Fixedbot
| 2023-07-13T09:33:07Z | 23 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-13T09:32:52Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Fixedbot/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
predictia/cerra_tas_vqvae
|
predictia
| 2023-07-13T09:31:45Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"climate",
"transformers",
"image-to-image",
"es",
"en",
"license:apache-2.0",
"region:us"
] |
image-to-image
| 2023-06-28T11:28:11Z |
---
license: apache-2.0
language:
- es
- en
metrics:
- mse
pipeline_tag: image-to-image
tags:
- climate
- transformers
---
# Europe Reanalysis Super Resolution
The aim of the project is to create a Machine learning (ML) model that can generate high-resolution regional reanalysis data (similar to the one produced by CERRA) by downscaling global reanalysis data from ERA5.
This will be accomplished by using state-of-the-art Deep Learning (DL) techniques like U-Net, conditional GAN, and diffusion models (among others). Additionally, an ingestion module will be implemented to assess the possible benefit of using CERRA pseudo-observations as extra predictors. Once the model is designed and trained, a detailed validation framework is applied.
It combines classical deterministic error metrics with in-depth validations, including time series, maps, spatio-temporal correlations, and computer vision metrics, disaggregated by months, seasons, and geographical regions, to evaluate the effectiveness of the model in reducing errors and representing physical processes. This level of granularity allows for a more comprehensive and accurate assessment, which is critical for ensuring that the model is effective in practice.
Moreover, tools for interpretability of DL models can be used to understand the inner workings and decision-making processes of these complex structures by analyzing the activations of different neurons and the importance of different features in the input data.
This work is funded by [Code for Earth 2023](https://codeforearth.ecmwf.int/) initiative.
|
n0madic/elegantEntropy_v1.2
|
n0madic
| 2023-07-13T09:29:46Z | 3 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T08:25:55Z |
---
license: creativeml-openrail-m
---
|
KyriaAnnwyn/vit-large-artifacts
|
KyriaAnnwyn
| 2023-07-13T09:26:30Z | 55 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-07T12:11:49Z |
---
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-large-artifacts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-artifacts
This model is a fine-tuned version of [kakaobrain/vit-large-patch16-512](https://huggingface.co/kakaobrain/vit-large-patch16-512) on the KyriaAnnwyn/artifacts_ds dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5995
- Accuracy: 0.6705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7001 | 0.01 | 100 | 0.6414 | 0.6559 |
| 0.6288 | 0.01 | 200 | 0.6666 | 0.6559 |
| 0.7237 | 0.02 | 300 | 0.7087 | 0.6559 |
| 0.8741 | 0.03 | 400 | 0.6739 | 0.6257 |
| 0.6093 | 0.04 | 500 | 0.6462 | 0.6559 |
| 0.5801 | 0.04 | 600 | 0.6822 | 0.6559 |
| 0.594 | 0.05 | 700 | 1.9948 | 0.6395 |
| 0.7724 | 0.06 | 800 | 0.6566 | 0.6553 |
| 0.6976 | 0.07 | 900 | 0.6774 | 0.6325 |
| 0.6583 | 0.07 | 1000 | 0.7175 | 0.3517 |
| 0.6779 | 0.08 | 1100 | 0.7012 | 0.6559 |
| 0.6478 | 0.09 | 1200 | 0.6336 | 0.6559 |
| 0.7405 | 0.1 | 1300 | 0.6577 | 0.6559 |
| 0.7362 | 0.1 | 1400 | 0.6630 | 0.6142 |
| 0.535 | 0.11 | 1500 | 0.7445 | 0.6559 |
| 0.7338 | 0.12 | 1600 | 0.7046 | 0.4718 |
| 0.6519 | 0.13 | 1700 | 0.6601 | 0.6426 |
| 0.5969 | 0.13 | 1800 | 0.6518 | 0.6559 |
| 0.5992 | 0.14 | 1900 | 0.6544 | 0.6559 |
| 0.5762 | 0.15 | 2000 | 0.6608 | 0.6559 |
| 0.6483 | 0.16 | 2100 | 0.6436 | 0.6331 |
| 0.7594 | 0.16 | 2200 | 0.7562 | 0.5213 |
| 0.6423 | 0.17 | 2300 | 0.6326 | 0.6433 |
| 0.7006 | 0.18 | 2400 | 0.6669 | 0.6108 |
| 0.833 | 0.19 | 2500 | 0.7043 | 0.6559 |
| 0.6133 | 0.19 | 2600 | 0.6356 | 0.6532 |
| 0.5285 | 0.2 | 2700 | 0.6619 | 0.6606 |
| 0.7209 | 0.21 | 2800 | 0.7306 | 0.4196 |
| 0.682 | 0.22 | 2900 | 0.6400 | 0.6539 |
| 0.7148 | 0.22 | 3000 | 0.6421 | 0.6559 |
| 0.6288 | 0.23 | 3100 | 0.7416 | 0.6559 |
| 0.666 | 0.24 | 3200 | 0.6368 | 0.6293 |
| 0.772 | 0.25 | 3300 | 0.6973 | 0.4985 |
| 0.6778 | 0.25 | 3400 | 0.6288 | 0.6604 |
| 0.5939 | 0.26 | 3500 | 0.6566 | 0.6559 |
| 0.6246 | 0.27 | 3600 | 0.6347 | 0.6618 |
| 0.649 | 0.28 | 3700 | 0.6353 | 0.6277 |
| 0.7122 | 0.28 | 3800 | 0.6407 | 0.6559 |
| 0.6292 | 0.29 | 3900 | 0.6776 | 0.6560 |
| 0.6079 | 0.3 | 4000 | 0.6220 | 0.6609 |
| 0.6971 | 0.31 | 4100 | 0.6258 | 0.6394 |
| 0.7131 | 0.31 | 4200 | 0.7202 | 0.6556 |
| 0.5346 | 0.32 | 4300 | 0.6394 | 0.6571 |
| 0.5801 | 0.33 | 4400 | 0.6960 | 0.6664 |
| 0.6806 | 0.34 | 4500 | 0.6339 | 0.6348 |
| 0.6245 | 0.34 | 4600 | 0.6226 | 0.6477 |
| 0.6905 | 0.35 | 4700 | 0.6203 | 0.6533 |
| 0.741 | 0.36 | 4800 | 0.6464 | 0.6680 |
| 0.5712 | 0.37 | 4900 | 0.6162 | 0.6640 |
| 0.5566 | 0.37 | 5000 | 0.6182 | 0.6507 |
| 0.6443 | 0.38 | 5100 | 0.6457 | 0.6664 |
| 0.6107 | 0.39 | 5200 | 0.6092 | 0.6617 |
| 0.5824 | 0.4 | 5300 | 0.6383 | 0.6571 |
| 0.4775 | 0.4 | 5400 | 0.6606 | 0.6621 |
| 0.7114 | 0.41 | 5500 | 0.6179 | 0.6619 |
| 0.7701 | 0.42 | 5600 | 0.7982 | 0.4217 |
| 0.6974 | 0.42 | 5700 | 0.6223 | 0.6540 |
| 0.6669 | 0.43 | 5800 | 0.6249 | 0.6559 |
| 0.6982 | 0.44 | 5900 | 0.6287 | 0.6564 |
| 0.5811 | 0.45 | 6000 | 0.6104 | 0.6506 |
| 0.4347 | 0.45 | 6100 | 1.0475 | 0.6559 |
| 0.5885 | 0.46 | 6200 | 0.6125 | 0.6552 |
| 0.6867 | 0.47 | 6300 | 0.6435 | 0.6468 |
| 0.6088 | 0.48 | 6400 | 0.6047 | 0.6623 |
| 0.8194 | 0.48 | 6500 | 0.6972 | 0.6589 |
| 0.8182 | 0.49 | 6600 | 0.6053 | 0.6644 |
| 0.6104 | 0.5 | 6700 | 0.7375 | 0.6571 |
| 0.5552 | 0.51 | 6800 | 0.6231 | 0.6402 |
| 0.6451 | 0.51 | 6900 | 0.6452 | 0.6561 |
| 0.7849 | 0.52 | 7000 | 0.6177 | 0.6612 |
| 0.64 | 0.53 | 7100 | 0.6307 | 0.6234 |
| 0.6393 | 0.54 | 7200 | 0.6130 | 0.6554 |
| 0.8326 | 0.54 | 7300 | 0.7210 | 0.6421 |
| 0.6579 | 0.55 | 7400 | 0.6227 | 0.6544 |
| 0.5195 | 0.56 | 7500 | 0.6619 | 0.6557 |
| 0.6197 | 0.57 | 7600 | 0.6354 | 0.6498 |
| 0.8507 | 0.57 | 7700 | 0.6820 | 0.6550 |
| 0.7163 | 0.58 | 7800 | 0.6720 | 0.5328 |
| 0.6896 | 0.59 | 7900 | 0.6530 | 0.6386 |
| 0.62 | 0.6 | 8000 | 0.6296 | 0.6559 |
| 0.8254 | 0.6 | 8100 | 0.6752 | 0.6200 |
| 0.7653 | 0.61 | 8200 | 0.7118 | 0.6558 |
| 0.7742 | 0.62 | 8300 | 0.6262 | 0.6497 |
| 0.6861 | 0.63 | 8400 | 0.6799 | 0.5566 |
| 0.5652 | 0.63 | 8500 | 0.6708 | 0.6559 |
| 0.7486 | 0.64 | 8600 | 0.6319 | 0.6559 |
| 0.6204 | 0.65 | 8700 | 0.6407 | 0.6530 |
| 0.673 | 0.66 | 8800 | 0.7154 | 0.4672 |
| 0.7272 | 0.66 | 8900 | 0.6323 | 0.6528 |
| 0.7364 | 0.67 | 9000 | 0.6436 | 0.6188 |
| 0.71 | 0.68 | 9100 | 0.6507 | 0.5924 |
| 0.6767 | 0.69 | 9200 | 0.6347 | 0.6575 |
| 0.7046 | 0.69 | 9300 | 0.6723 | 0.6127 |
| 0.7486 | 0.7 | 9400 | 0.6328 | 0.6485 |
| 0.7646 | 0.71 | 9500 | 0.6244 | 0.6550 |
| 0.5971 | 0.72 | 9600 | 0.6610 | 0.6558 |
| 0.6195 | 0.72 | 9700 | 0.6219 | 0.6515 |
| 0.6891 | 0.73 | 9800 | 0.6300 | 0.6619 |
| 0.6829 | 0.74 | 9900 | 0.6312 | 0.6568 |
| 0.4786 | 0.75 | 10000 | 0.7160 | 0.6573 |
| 0.6093 | 0.75 | 10100 | 0.6245 | 0.6503 |
| 0.672 | 0.76 | 10200 | 0.6248 | 0.6577 |
| 0.6734 | 0.77 | 10300 | 0.6541 | 0.6600 |
| 0.7826 | 0.78 | 10400 | 0.6413 | 0.6559 |
| 0.6851 | 0.78 | 10500 | 0.6478 | 0.6006 |
| 0.6776 | 0.79 | 10600 | 0.6453 | 0.6175 |
| 0.7322 | 0.8 | 10700 | 0.6188 | 0.6353 |
| 0.5144 | 0.81 | 10800 | 0.6762 | 0.6571 |
| 0.6977 | 0.81 | 10900 | 0.6559 | 0.6544 |
| 0.5681 | 0.82 | 11000 | 0.7225 | 0.6559 |
| 0.6449 | 0.83 | 11100 | 0.6372 | 0.6576 |
| 0.6067 | 0.83 | 11200 | 0.6207 | 0.6391 |
| 0.5921 | 0.84 | 11300 | 0.6178 | 0.6538 |
| 0.5373 | 0.85 | 11400 | 0.7370 | 0.6559 |
| 0.6926 | 0.86 | 11500 | 0.6346 | 0.6372 |
| 0.6634 | 0.86 | 11600 | 0.6274 | 0.6489 |
| 0.61 | 0.87 | 11700 | 0.6309 | 0.6427 |
| 0.6214 | 0.88 | 11800 | 0.6273 | 0.6480 |
| 0.6202 | 0.89 | 11900 | 0.6255 | 0.6559 |
| 0.6153 | 0.89 | 12000 | 0.6348 | 0.6459 |
| 0.7062 | 0.9 | 12100 | 0.6283 | 0.6512 |
| 0.6977 | 0.91 | 12200 | 0.6159 | 0.6515 |
| 0.6041 | 0.92 | 12300 | 0.6251 | 0.6504 |
| 0.6609 | 0.92 | 12400 | 0.6633 | 0.5870 |
| 0.7565 | 0.93 | 12500 | 0.6200 | 0.6562 |
| 0.6133 | 0.94 | 12600 | 0.6193 | 0.6527 |
| 0.7066 | 0.95 | 12700 | 0.6279 | 0.6180 |
| 0.5706 | 0.95 | 12800 | 0.6128 | 0.6575 |
| 0.6992 | 0.96 | 12900 | 0.6334 | 0.6449 |
| 0.6834 | 0.97 | 13000 | 0.6258 | 0.6591 |
| 0.6069 | 0.98 | 13100 | 0.6290 | 0.6620 |
| 0.743 | 0.98 | 13200 | 0.6110 | 0.6562 |
| 0.5226 | 0.99 | 13300 | 0.6165 | 0.6557 |
| 0.7359 | 1.0 | 13400 | 0.6207 | 0.6376 |
| 0.5812 | 1.01 | 13500 | 0.6192 | 0.6559 |
| 0.666 | 1.01 | 13600 | 0.6347 | 0.6602 |
| 0.5489 | 1.02 | 13700 | 0.6107 | 0.6459 |
| 0.701 | 1.03 | 13800 | 0.6172 | 0.6518 |
| 0.4873 | 1.04 | 13900 | 0.6786 | 0.6559 |
| 0.5807 | 1.04 | 14000 | 0.6636 | 0.6433 |
| 0.6824 | 1.05 | 14100 | 0.6176 | 0.6315 |
| 0.6012 | 1.06 | 14200 | 0.6097 | 0.6617 |
| 0.4865 | 1.07 | 14300 | 0.6103 | 0.6623 |
| 0.5612 | 1.07 | 14400 | 0.6947 | 0.6559 |
| 0.5968 | 1.08 | 14500 | 0.6559 | 0.5981 |
| 0.5657 | 1.09 | 14600 | 0.6076 | 0.6509 |
| 0.4778 | 1.1 | 14700 | 0.6808 | 0.6535 |
| 0.6047 | 1.1 | 14800 | 0.6131 | 0.6480 |
| 0.5999 | 1.11 | 14900 | 0.6120 | 0.6559 |
| 0.5852 | 1.12 | 15000 | 0.6356 | 0.6553 |
| 0.7033 | 1.13 | 15100 | 0.6578 | 0.6647 |
| 0.5925 | 1.13 | 15200 | 0.6153 | 0.6633 |
| 0.5959 | 1.14 | 15300 | 0.6306 | 0.6211 |
| 0.5929 | 1.15 | 15400 | 0.6246 | 0.6655 |
| 0.5621 | 1.16 | 15500 | 0.6126 | 0.6424 |
| 0.5508 | 1.16 | 15600 | 0.6844 | 0.6559 |
| 0.6276 | 1.17 | 15700 | 0.6066 | 0.6531 |
| 1.0359 | 1.18 | 15800 | 0.6271 | 0.6617 |
| 0.6191 | 1.19 | 15900 | 0.6166 | 0.6480 |
| 0.7095 | 1.19 | 16000 | 0.6228 | 0.6462 |
| 0.6567 | 1.2 | 16100 | 0.6066 | 0.6653 |
| 0.5653 | 1.21 | 16200 | 0.6022 | 0.6605 |
| 0.6894 | 1.21 | 16300 | 0.6216 | 0.6568 |
| 0.608 | 1.22 | 16400 | 0.6041 | 0.6559 |
| 0.665 | 1.23 | 16500 | 0.6111 | 0.6564 |
| 0.6753 | 1.24 | 16600 | 0.6138 | 0.6581 |
| 0.6213 | 1.24 | 16700 | 0.6121 | 0.6380 |
| 0.6983 | 1.25 | 16800 | 0.6166 | 0.6661 |
| 0.8521 | 1.26 | 16900 | 0.6202 | 0.6461 |
| 0.4927 | 1.27 | 17000 | 0.6313 | 0.6547 |
| 0.6414 | 1.27 | 17100 | 0.6011 | 0.6667 |
| 0.539 | 1.28 | 17200 | 0.6451 | 0.6664 |
| 0.5118 | 1.29 | 17300 | 0.6243 | 0.6641 |
| 0.7512 | 1.3 | 17400 | 0.6257 | 0.6586 |
| 0.5943 | 1.3 | 17500 | 0.6186 | 0.6423 |
| 0.5861 | 1.31 | 17600 | 0.6435 | 0.6638 |
| 0.7065 | 1.32 | 17700 | 0.6197 | 0.6279 |
| 0.5973 | 1.33 | 17800 | 0.6081 | 0.6535 |
| 0.5997 | 1.33 | 17900 | 0.6053 | 0.6608 |
| 0.7091 | 1.34 | 18000 | 0.6013 | 0.6644 |
| 0.691 | 1.35 | 18100 | 0.6103 | 0.6654 |
| 0.5559 | 1.36 | 18200 | 0.6110 | 0.6658 |
| 0.6309 | 1.36 | 18300 | 0.6067 | 0.6664 |
| 0.6262 | 1.37 | 18400 | 0.6027 | 0.6616 |
| 0.5551 | 1.38 | 18500 | 0.6106 | 0.6671 |
| 0.6703 | 1.39 | 18600 | 0.6043 | 0.6576 |
| 0.6849 | 1.39 | 18700 | 0.6018 | 0.6616 |
| 0.6136 | 1.4 | 18800 | 0.6324 | 0.6629 |
| 0.7075 | 1.41 | 18900 | 0.6057 | 0.6561 |
| 0.6036 | 1.42 | 19000 | 0.6081 | 0.6559 |
| 0.6549 | 1.42 | 19100 | 0.6352 | 0.6655 |
| 0.5168 | 1.43 | 19200 | 0.6042 | 0.6632 |
| 0.5864 | 1.44 | 19300 | 0.6111 | 0.6639 |
| 0.5961 | 1.45 | 19400 | 0.6003 | 0.6644 |
| 0.6077 | 1.45 | 19500 | 0.6125 | 0.6566 |
| 0.6215 | 1.46 | 19600 | 0.6128 | 0.6582 |
| 0.4005 | 1.47 | 19700 | 0.6348 | 0.6642 |
| 0.5689 | 1.48 | 19800 | 0.6355 | 0.6647 |
| 0.6026 | 1.48 | 19900 | 0.6127 | 0.6444 |
| 0.4982 | 1.49 | 20000 | 0.6034 | 0.6654 |
| 0.6189 | 1.5 | 20100 | 0.6202 | 0.6609 |
| 0.5502 | 1.51 | 20200 | 0.6044 | 0.6621 |
| 0.5924 | 1.51 | 20300 | 0.6107 | 0.6445 |
| 0.744 | 1.52 | 20400 | 0.6164 | 0.6559 |
| 0.5582 | 1.53 | 20500 | 0.6166 | 0.6559 |
| 0.6994 | 1.54 | 20600 | 0.6109 | 0.6664 |
| 0.5396 | 1.54 | 20700 | 0.6189 | 0.6670 |
| 0.7232 | 1.55 | 20800 | 0.6104 | 0.6610 |
| 0.9802 | 1.56 | 20900 | 0.6232 | 0.6642 |
| 0.6487 | 1.57 | 21000 | 0.6056 | 0.6505 |
| 0.5932 | 1.57 | 21100 | 0.5980 | 0.6702 |
| 0.7897 | 1.58 | 21200 | 0.6012 | 0.6638 |
| 0.6006 | 1.59 | 21300 | 0.6232 | 0.6672 |
| 0.4481 | 1.6 | 21400 | 0.6124 | 0.6676 |
| 0.6078 | 1.6 | 21500 | 0.6495 | 0.6664 |
| 0.595 | 1.61 | 21600 | 0.7122 | 0.6675 |
| 0.6388 | 1.62 | 21700 | 0.6227 | 0.6671 |
| 0.5731 | 1.62 | 21800 | 0.6252 | 0.6682 |
| 0.8603 | 1.63 | 21900 | 0.6026 | 0.6653 |
| 0.6316 | 1.64 | 22000 | 0.6494 | 0.6669 |
| 0.6712 | 1.65 | 22100 | 0.6097 | 0.6676 |
| 0.6102 | 1.65 | 22200 | 0.6221 | 0.6585 |
| 0.7099 | 1.66 | 22300 | 0.6006 | 0.6658 |
| 0.621 | 1.67 | 22400 | 0.6026 | 0.6626 |
| 0.478 | 1.68 | 22500 | 0.6062 | 0.6624 |
| 0.6106 | 1.68 | 22600 | 0.5990 | 0.6669 |
| 0.5793 | 1.69 | 22700 | 0.5980 | 0.6681 |
| 0.5804 | 1.7 | 22800 | 0.6014 | 0.6626 |
| 0.6304 | 1.71 | 22900 | 0.6107 | 0.6380 |
| 0.7427 | 1.71 | 23000 | 0.6051 | 0.6682 |
| 0.5794 | 1.72 | 23100 | 0.6105 | 0.6611 |
| 0.5084 | 1.73 | 23200 | 0.6643 | 0.6673 |
| 0.6518 | 1.74 | 23300 | 0.6366 | 0.6687 |
| 0.5129 | 1.74 | 23400 | 0.6053 | 0.6682 |
| 0.7593 | 1.75 | 23500 | 0.5977 | 0.6662 |
| 0.6645 | 1.76 | 23600 | 0.5988 | 0.6683 |
| 0.6144 | 1.77 | 23700 | 0.6130 | 0.6673 |
| 0.6855 | 1.77 | 23800 | 0.6192 | 0.6596 |
| 0.559 | 1.78 | 23900 | 0.6208 | 0.6574 |
| 0.4202 | 1.79 | 24000 | 0.6125 | 0.6690 |
| 0.6604 | 1.8 | 24100 | 0.6052 | 0.6685 |
| 0.5487 | 1.8 | 24200 | 0.6086 | 0.6685 |
| 0.6816 | 1.81 | 24300 | 0.5997 | 0.6620 |
| 0.6057 | 1.82 | 24400 | 0.6128 | 0.6530 |
| 0.4335 | 1.83 | 24500 | 0.6121 | 0.6676 |
| 0.6147 | 1.83 | 24600 | 0.6225 | 0.6670 |
| 0.7414 | 1.84 | 24700 | 0.6248 | 0.6718 |
| 0.622 | 1.85 | 24800 | 0.6084 | 0.6722 |
| 0.5356 | 1.86 | 24900 | 0.6003 | 0.6611 |
| 0.7994 | 1.86 | 25000 | 0.6098 | 0.6657 |
| 0.5389 | 1.87 | 25100 | 0.6052 | 0.6633 |
| 0.6985 | 1.88 | 25200 | 0.6073 | 0.6694 |
| 0.652 | 1.89 | 25300 | 0.6040 | 0.6709 |
| 0.5409 | 1.89 | 25400 | 0.6065 | 0.6709 |
| 0.6356 | 1.9 | 25500 | 0.6062 | 0.6699 |
| 0.7588 | 1.91 | 25600 | 0.6025 | 0.6711 |
| 0.5109 | 1.92 | 25700 | 0.5992 | 0.6693 |
| 0.6766 | 1.92 | 25800 | 0.6004 | 0.6693 |
| 0.6517 | 1.93 | 25900 | 0.6020 | 0.6701 |
| 0.6561 | 1.94 | 26000 | 0.5995 | 0.6705 |
| 0.6224 | 1.95 | 26100 | 0.6008 | 0.6717 |
| 0.6054 | 1.95 | 26200 | 0.6005 | 0.6714 |
| 0.5152 | 1.96 | 26300 | 0.6023 | 0.6709 |
| 0.5503 | 1.97 | 26400 | 0.6032 | 0.6706 |
| 0.5101 | 1.98 | 26500 | 0.6067 | 0.6709 |
| 0.5229 | 1.98 | 26600 | 0.6079 | 0.6702 |
| 0.8387 | 1.99 | 26700 | 0.6079 | 0.6700 |
| 0.608 | 2.0 | 26800 | 0.6069 | 0.6699 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu116
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dada325/q-FrozenLake-v1-4x4-noSlippery
|
dada325
| 2023-07-13T09:26:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T09:26:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="dada325/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
skywalker7/Reinforce-PixelCopter
|
skywalker7
| 2023-07-13T09:17:26Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T09:17:17Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 45.10 +/- 55.81
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kfkas/LawBot-level1
|
kfkas
| 2023-07-13T09:13:25Z | 8 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T09:13:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
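The values above describe a 4-bit NF4 (QLoRA-style) setup. A minimal sketch of the equivalent `BitsAndBytesConfig` is shown below; the base model this adapter is applied to is not stated on this card:
```python
import torch
from transformers import BitsAndBytesConfig

# Sketch of the quantization config listed above; it would typically be passed
# to AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
)
```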
### Framework versions
- PEFT 0.4.0.dev0
|
youlun77/finetuning-sentiment-model-25000-samples-BERT
|
youlun77
| 2023-07-13T09:10:41Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T07:30:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-25000-samples-BERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-25000-samples-BERT
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2154
- eval_accuracy: 0.9422
- eval_f1: 0.9427
- eval_runtime: 823.1435
- eval_samples_per_second: 30.371
- eval_steps_per_second: 1.899
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
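For reference, here is a minimal sketch of how these hyperparameters map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder and anything not listed above is left at its default:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuning-sentiment-model-25000-samples-BERT",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```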
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dchaudhari/my_awesome_qa_model_1
|
dchaudhari
| 2023-07-13T08:59:58Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-13T08:06:42Z |
---
license: cc-by-4.0
tags:
- generated_from_keras_callback
model-index:
- name: dchaudhari/my_awesome_qa_model_1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dchaudhari/my_awesome_qa_model_1
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2535
- Validation Loss: 1.2733
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1298, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.6348 | 1.5188 | 0 |
| 1.4251 | 1.2733 | 1 |
| 1.2535 | 1.2733 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
soonmo/distilbert-base-uncased-finetuned-clinc
|
soonmo
| 2023-07-13T08:58:26Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-12T01:45:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9161290322580645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7754
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2893 | 1.0 | 318 | 3.2831 | 0.7397 |
| 2.6289 | 2.0 | 636 | 1.8731 | 0.8345 |
| 1.5481 | 3.0 | 954 | 1.1580 | 0.89 |
| 1.0137 | 4.0 | 1272 | 0.8584 | 0.9077 |
| 0.7969 | 5.0 | 1590 | 0.7754 | 0.9161 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aditii09/whisper_eng_asr
|
aditii09
| 2023-07-13T08:58:20Z | 76 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"arxiv:2212.04356",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-13T08:45:39Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-base
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 5.008769117619326
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 12.84936273212057
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args:
language: hi
metrics:
- name: Test WER
type: wer
value: 131
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction
Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
Which tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output language and task itself.
The context tokens can be set accordingly:
```python
model.config.forced_decoder_ids = WhisperProcessor.get_decoder_prompt_ids(language="english", task="transcribe")
```
Which forces the model to predict in English under the task of speech recognition.
## Transcription
### English to English
In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language
(English) and task (transcribe).
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-base")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
>>> model.config.forced_decoder_ids = None
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
### French to French
The following example demonstrates French to French transcription by setting the decoder ids appropriately.
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-base")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```
## Translation
Setting the task to "translate" forces the Whisper model to perform speech translation.
### French to English
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-base")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```
## Evaluation
This code snippet shows how to evaluate Whisper Base on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-base")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
5.082316555716899
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-base",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
digiplay/helloFlatAnime_v1
|
digiplay
| 2023-07-13T08:57:50Z | 1,606 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T07:42:03Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/102893/helloflatanime
Original Author's DEMO images: available on the Civitai model page linked above.
|
gabrielgme/falcon-7b-spider-with-schema
|
gabrielgme
| 2023-07-13T08:44:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-12T13:21:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
seongj/polyglot-ko-1.3b-quant
|
seongj
| 2023-07-13T08:24:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-12T10:52:42Z |
PyTorch Quantized Model
- Dynamic Quantization: INT8
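A minimal sketch of how dynamic INT8 quantization is typically produced in PyTorch is shown below; the base checkpoint name is an assumption, since this card does not state it:
```python
import torch
from transformers import AutoModelForCausalLM

# Assumed base checkpoint (not stated on this card).
model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-1.3b")

# Dynamic quantization: nn.Linear weights are stored as INT8 and
# dequantized on the fly at inference time.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```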
|
uripper/AVA
|
uripper
| 2023-07-13T08:15:52Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-22T20:54:37Z |
---
license: cc
widget:
- text: "Movie: Parasite Score:"
example_title: "Parasite"
- text: "Movie: Come and See Score:"
example_title: "Come and See"
- text: "Movie: Harakiri Score:"
example_title: "Harakiri"
---
# Review Training Bot
This model was trained to generate scores and reviews for any given movie. It is fine-tuned from distilgpt2 as a baseline and trained on a custom dataset created by scraping around 120k Letterboxd reviews. The current state of the model reliably produces the correct formatting but is often prone to gibberish. Further training will hopefully add coherency. It is currently at version 0.1.
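As a rough illustration of the prompt format used in the widget examples above, here is a minimal generation sketch; the sampling settings are arbitrary and not the ones used for the demo outputs:
```python
from transformers import pipeline

# Load the model from the Hub and generate a review in the "Movie: <title> Score:" format.
generator = pipeline("text-generation", model="uripper/AVA")
result = generator("Movie: Parasite Score:", max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```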
## Intended uses & limitations
This model is intended to be used for entertainment.
Limitations for this model will be much the same as those of distilgpt2, which can be viewed here: https://huggingface.co/distilgpt2. These may include persistent biases. Another issue may be language specific to Letterboxd that the model cannot interpret correctly. For example, an LGBT+ film on Letterboxd may have multiple reviews that use the word "gay" positively; this model has not learned this contextual usage and may use the word as a slur. As the current model also struggles to find a connection between movie titles and the reviews, this could happen with any entered movie.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 10
- eval_batch_size: 20
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Tokenizers 0.12.1
|
Ablustrund/moss-rlhf-reward-model-7B-zh
|
Ablustrund
| 2023-07-13T08:10:42Z | 3 | 23 | null |
[
"llm",
"reward model",
"moss",
"rlhf",
"zh",
"arxiv:2307.04964",
"license:agpl-3.0",
"region:us"
] | null | 2023-07-12T02:27:02Z |
---
license: agpl-3.0
language:
- zh
tags:
- llm
- reward model
- moss
- rlhf
---
# MOSS-RLHF
### *MOSS-RLHF & "Secrets of RLHF in Large Language Models Part I: PPO" <br>👉 <a href="https://arxiv.org/abs/2307.04964" target="_blank">[Technical report]</a> <a href="https://openlmlab.github.io/MOSS-RLHF/" target="_blank">[Home page]</a>*
## 🌟 News
### 👉 Wed, 12. July 2023. We have released a Chinese reward model based on OpenChineseLlama-7B!
[moss-rlhf-reward-model-7B-zh](https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main)
<br>
### 👉 Thu, 13. July 2023. We have released an English reward model and an SFT model based on Llama-7B!
[moss-rlhf-reward-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-reward-model-7B-en)
[moss-rlhf-sft-model-7B-en](https://huggingface.co/fnlp/moss-rlhf-sft-model-7B-en)
<br>
## 🧾 Open-source List
- [x] Open source code for RL training in large language models.
- [x] A 7B Chinese reward model based on openChineseLlama.
- [x] A 7B English reward model based on Llama-7B.
- [x] SFT model for English.
- [ ] Policy model for English after RLHF.
- ...
## 🌠 Introduction
Due to the challenges of reward design, environment interaction, and agent training, coupled with the huge trial-and-error cost of large language models, there is a significant barrier for AI researchers to develop technical alignment and achieve a safe landing for LLMs. The stable training of RLHF has remained a puzzle.
In this technical report, we intend to help researchers to train their models stably with human feedback.
Contributions are summarized as follows:
1) We release competitive Chinese and English reward models, respectively, which have good cross-model generalization ability, alleviating the cost of relabeling human preference data;
2) We conduct in-depth analysis on the inner workings of PPO algorithm and propose the PPO-max algorithm to ensure stable model training;
3) We release the complete PPO-max codes to ensure that the LLMs in the current SFT stage can be better aligned with humans.
## 🔩 Requirements & Setup
This repository works on Python 3.8 and PyTorch 1.13.1.
We recommend using the **conda** virtual environment to run the code.
#### Step 1: Create a new Python virtual environment
```bash
conda update conda -n base -c defaults
conda create -n rlhf python=3.8
conda activate rlhf
```
#### Step 2: Install PyTorch and TensorBoard
```bash
conda install pytorch==1.13.1 pytorch-cuda=11.7 tensorboard -c pytorch -c nvidia
```
#### Step 3: Install the remaining dependencies
```bash
conda install datasets accelerate safetensors chardet cchardet -c huggingface -c conda-forge
pip3 install transformers sentencepiece einops triton==1.0.0 rouge jionlp==1.4.14 nltk sacrebleu cpm_kernels
apt install libaio-dev
DS_BUILD_OPS=1 pip install deepspeed
```
## ✨ Start training your own model!
Run the code in a few steps.
### Step 1: Recover Reward model weights
We cannot directly release the full weights of the reward model because of protocol restrictions.
You can merge the diff weights with the original Llama-7B to recover the reward model we used.
We upload the diff models; thanks to tatsu-lab, you can recover the reward model by following these steps:
```bash
1) Download the weight diff into your local machine. The weight diff is located at:
# For English:
TODO
# For Chinese:
https://huggingface.co/Ablustrund/moss-rlhf-reward-model-7B-zh/tree/main
2) Merge the weight diff with the original Llama-7B:
# For English:
# Reward model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-en/diff --path_tuned ./models/moss-rlhf-reward-model-7B-en/recover --model_type reward
# SFT model
python merge_weight_en.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-sft-model-7B-en/diff --path_tuned ./models/moss-rlhf-sft-model-7B-en/recover --model_type sft
# Policy model
TODO
# For Chinese:
python merge_weight_zh.py recover --path_raw decapoda-research/llama-7b-hf --path_diff ./models/moss-rlhf-reward-model-7B-zh/diff --path_tuned ./models/moss-rlhf-reward-model-7B-zh/recover
```
### Step 2: Select your own SFT model.
Because of some limitations, we cannot release the **Chinese** SFT model currently.
You can use your own SFT model, or a strong base model, instead of our SFT model.
### Step 3: Start training
Run the command below.
```
# For Chinese:
# You need to use your own sft model currently.
bash run_zh.sh
# For English:
# We have loaded the sft model and reward model to huggingface.
bash run_en.sh
```
## Citation
```bibtex
@article{zheng2023secrets,
title={Secrets of RLHF in Large Language Models Part I: PPO},
author={Rui Zheng and Shihan Dou and Songyang Gao and Wei Shen and Binghai Wang and Yan Liu and Senjie Jin and Qin Liu and Limao Xiong and Lu Chen and Zhiheng Xi and Yuhao Zhou and Nuo Xu and Wenbin Lai and Minghao Zhu and Rongxiang Weng and Wensen Cheng and Cheng Chang and Zhangyue Yin and Yuan Hua and Haoran Huang and Tianxiang Sun and Hang Yan and Tao Gui and Qi Zhang and Xipeng Qiu and Xuanjing Huang},
year={2023},
eprint={2307.04964},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
yubuu/path-to-save-model
|
yubuu
| 2023-07-13T08:03:07Z | 30 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T07:51:30Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - yubuu/path-to-save-model
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
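A minimal loading sketch with diffusers is shown below, assuming the standard `StableDiffusionPipeline` API and a CUDA device; the prompt simply reuses the instance prompt from this card:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth weights from this repository (fp16 assumed for GPU inference).
pipe = StableDiffusionPipeline.from_pretrained(
    "yubuu/path-to-save-model", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The instance prompt used during training; variations can be appended to it.
image = pipe("a photo of sks dog", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```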
|
haxett333/RL-Reinforce-100TrainEpisodesInsteadof1000
|
haxett333
| 2023-07-13T08:00:13Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T08:00:09Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: RL-Reinforce-100TrainEpisodesInsteadof1000
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 98.70 +/- 36.77
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nicotaroni/fine_tuned_classification
|
nicotaroni
| 2023-07-13T07:58:46Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-12T09:04:12Z |
---
pipeline_tag: text2text-generation
---
|
saeedehj/led-base-finetune-cnn
|
saeedehj
| 2023-07-13T07:50:12Z | 34 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-12T22:27:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: led-base-16384-finetune-cnn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-base-16384-finetune-cnn
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2020
- Rouge1: 24.2258
- Rouge2: 9.0151
- Rougel: 19.0336
- Rougelsum: 22.2604
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8988 | 1.0 | 2000 | 2.0031 | 25.1709 | 10.0426 | 20.1311 | 23.1639 | 20.0 |
| 1.6038 | 2.0 | 4000 | 2.0314 | 25.0213 | 9.8701 | 19.8987 | 23.0129 | 20.0 |
| 1.3352 | 3.0 | 6000 | 2.1124 | 24.99 | 9.905 | 19.9566 | 23.0973 | 20.0 |
| 1.1173 | 4.0 | 8000 | 2.2055 | 25.0568 | 10.0949 | 19.9602 | 23.18 | 20.0 |
| 0.9566 | 5.0 | 10000 | 2.3262 | 24.941 | 9.5856 | 19.6285 | 23.042 | 20.0 |
| 0.7986 | 6.0 | 12000 | 2.4489 | 24.4114 | 9.2808 | 19.3296 | 22.5481 | 20.0 |
| 0.6685 | 7.0 | 14000 | 2.5211 | 24.467 | 9.5124 | 19.2685 | 22.5624 | 20.0 |
| 0.5601 | 8.0 | 16000 | 2.6299 | 24.6939 | 9.6533 | 19.4627 | 22.8048 | 20.0 |
| 0.4757 | 9.0 | 18000 | 2.7185 | 24.2098 | 9.1232 | 19.0181 | 22.4085 | 20.0 |
| 0.3926 | 10.0 | 20000 | 2.7947 | 24.5092 | 9.3964 | 19.2593 | 22.5592 | 20.0 |
| 0.3391 | 11.0 | 22000 | 2.8626 | 24.4731 | 9.3634 | 19.2966 | 22.5688 | 20.0 |
| 0.2872 | 12.0 | 24000 | 2.9175 | 24.5587 | 9.3888 | 19.3335 | 22.6443 | 20.0 |
| 0.2479 | 13.0 | 26000 | 2.9658 | 24.2983 | 9.1038 | 19.019 | 22.3675 | 20.0 |
| 0.213 | 14.0 | 28000 | 3.0273 | 24.4196 | 9.1481 | 19.0458 | 22.5135 | 20.0 |
| 0.1828 | 15.0 | 30000 | 3.0751 | 24.3283 | 9.2334 | 18.9771 | 22.3322 | 20.0 |
| 0.1608 | 16.0 | 32000 | 3.1185 | 24.3965 | 9.2047 | 19.0899 | 22.4666 | 20.0 |
| 0.1442 | 17.0 | 34000 | 3.1494 | 24.3832 | 9.1915 | 19.077 | 22.4366 | 20.0 |
| 0.1293 | 18.0 | 36000 | 3.1738 | 24.3796 | 9.1132 | 19.1015 | 22.3862 | 20.0 |
| 0.1165 | 19.0 | 38000 | 3.2073 | 24.2804 | 9.1018 | 19.0692 | 22.3023 | 20.0 |
| 0.1118 | 20.0 | 40000 | 3.2020 | 24.2258 | 9.0151 | 19.0336 | 22.2604 | 20.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jslin09/LegalChatbot-bloom-3b
|
jslin09
| 2023-07-13T07:45:16Z | 19 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-06T02:44:57Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
JeffreyHuang/llm-selector
|
JeffreyHuang
| 2023-07-13T07:30:31Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T04:16:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: llm-selector
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llm-selector
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7315
- Accuracy: 0.5048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 118 | 1.8920 | 0.3714 |
| No log | 2.0 | 236 | 1.7753 | 0.5143 |
| No log | 3.0 | 354 | 1.7671 | 0.4952 |
| No log | 4.0 | 472 | 1.7441 | 0.5048 |
| 1.8665 | 5.0 | 590 | 1.7315 | 0.5048 |
| 1.8665 | 6.0 | 708 | 1.7413 | 0.5048 |
| 1.8665 | 7.0 | 826 | 1.7378 | 0.4667 |
| 1.8665 | 8.0 | 944 | 1.7426 | 0.4667 |
| 1.7254 | 9.0 | 1062 | 1.7513 | 0.4476 |
| 1.7254 | 10.0 | 1180 | 1.7513 | 0.4476 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
K024/chatglm2-6b-int4g32
|
K024
| 2023-07-13T07:25:25Z | 53 | 3 |
transformers
|
[
"transformers",
"ChatGLM2Model",
"glm",
"chatglm",
"thudm",
"zh",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-07-13T07:09:00Z |
---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM2 6b int4 g32 量化模型
详情参考 [K024/chatglm-q](https://github.com/K024/chatglm-q)。
See [K024/chatglm-q](https://github.com/K024/chatglm-q) for more details.
```python
import torch
from chatglm_q.decoder import ChatGLMDecoder, chat_template
device = torch.device("cuda")
decoder = ChatGLMDecoder.from_pretrained("K024/chatglm2-6b-int4g32", device=device)
prompt = chat_template([], "我是谁?")
for text in decoder.generate(prompt):
print(text)
```
模型权重按 ChatGLM2-6b 许可发布,见 [MODEL LICENSE](https://huggingface.co/THUDM/chatglm2-6b/blob/main/MODEL_LICENSE)。
Model weights are released under the same license as ChatGLM2-6b, see [MODEL LICENSE](https://huggingface.co/THUDM/chatglm2-6b/blob/main/MODEL_LICENSE).
|
K024/chatglm2-6b-int8
|
K024
| 2023-07-13T07:18:11Z | 49 | 1 |
transformers
|
[
"transformers",
"ChatGLM2Model",
"glm",
"chatglm",
"thudm",
"zh",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-07-13T07:13:41Z |
---
language:
- zh
- en
tags:
- glm
- chatglm
- thudm
---
# ChatGLM2 6b int8 量化模型
详情参考 [K024/chatglm-q](https://github.com/K024/chatglm-q)。
See [K024/chatglm-q](https://github.com/K024/chatglm-q) for more details.
```python
import torch
from chatglm_q.decoder import ChatGLMDecoder, chat_template
device = torch.device("cuda")
decoder = ChatGLMDecoder.from_pretrained("K024/chatglm2-6b-int8", device=device)
prompt = chat_template([], "我是谁?")
for text in decoder.generate(prompt):
print(text)
```
模型权重按 ChatGLM2-6b 许可发布,见 [MODEL LICENSE](https://huggingface.co/THUDM/chatglm2-6b/blob/main/MODEL_LICENSE)。
Model weights are released under the same license as ChatGLM2-6b, see [MODEL LICENSE](https://huggingface.co/THUDM/chatglm2-6b/blob/main/MODEL_LICENSE).
|
vineetsharma/dqn-SpaceInvadersNoFrameskip-v4
|
vineetsharma
| 2023-07-13T07:13:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T07:12:43Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 560.00 +/- 101.24
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vineetsharma -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vineetsharma -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga vineetsharma
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
smithlai/qtable-taxi-v3
|
smithlai
| 2023-07-13T07:12:19Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T07:10:33Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: qtable-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="smithlai/qtable-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
preetham/rpanda1
|
preetham
| 2023-07-13T07:10:56Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T06:22:15Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks panda
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - preetham/rpanda1
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks panda using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
KevinHemsig/my_awesome_qa_model
|
KevinHemsig
| 2023-07-13T07:05:56Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-11T04:30:18Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: KevinHemsig/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# KevinHemsig/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5159
- Validation Loss: 1.6940
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.3612 | 2.0301 | 0 |
| 1.7557 | 1.6940 | 1 |
| 1.5159 | 1.6940 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ajaydvrj/dataset2
|
ajaydvrj
| 2023-07-13T06:48:15Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-12T12:07:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: dataset2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dataset2
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 5.9615 |
| No log | 2.0 | 2 | 5.8187 |
| No log | 3.0 | 3 | 5.7431 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dchaudhari/my_awesome_qa_model
|
dchaudhari
| 2023-07-13T06:44:21Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-11T05:20:51Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: dchaudhari/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dchaudhari/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.0913
- Validation Loss: 1.1726
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1298, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.2288 | 1.3339 | 0 |
| 1.2300 | 1.1726 | 1 |
| 1.0913 | 1.1726 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
markcberman/distilbert-base-uncased-finetuned-emotion
|
markcberman
| 2023-07-13T06:39:20Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T06:04:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9275012469136824
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2201
- Accuracy: 0.9275
- F1: 0.9275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8326 | 1.0 | 250 | 0.3185 | 0.902 | 0.8983 |
| 0.2499 | 2.0 | 500 | 0.2201 | 0.9275 | 0.9275 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
aniketr/mrl-resnet50
|
aniketr
| 2023-07-13T06:24:25Z | 0 | 3 | null |
[
"code",
"image-classification",
"en",
"dataset:imagenet-1k",
"arxiv:2205.13147",
"license:mit",
"region:us"
] |
image-classification
| 2023-07-13T03:45:55Z |
---
license: mit
datasets:
- imagenet-1k
language:
- en
metrics:
- accuracy
pipeline_tag: image-classification
tags:
- code
---
# Matryoshka Representation Learning🪆
_Aditya Kusupati*, Gantavya Bhatt*, Aniket Rege*, Matthew Wallingford, Aditya Sinha, Vivek Ramanujan, William Howard-Snyder, Kaifeng Chen, Sham Kakade, Prateek Jain, Ali Farhadi_
GitHub: https://github.com/RAIVNLab/MRL
Arxiv: https://arxiv.org/abs/2205.13147
We provide pretrained models trained with [FFCV](https://github.com/libffcv/ffcv) on ImageNet-1K:
1. `mrl` : ResNet50 __mrl__ models trained with Matryoshka loss (vanilla and efficient) with nesting starting from _d=8_ (default) and _d=16_
2. `fixed-feature` : independently trained ResNet50 baselines at _log(d)_ granularities
3. `resnet-family` : __mrl__ and __ff__ models trained on ResNet18/34/101
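As a quick way to exercise the Matryoshka property, the backbone embedding can be truncated to a prefix of dimensions and re-normalized. The following is a minimal sketch, not the official MRL loading code: the checkpoint filename and the assumption that the state dict maps onto a torchvision ResNet-50 are hypothetical; see the GitHub repo for the exact inference scripts.
```python
import torch
import torchvision.models as models

# Hypothetical checkpoint path; the released MRL head may not map 1:1 onto torchvision's fc layer.
backbone = models.resnet50()
state = torch.load("mrl_resnet50.pt", map_location="cpu")
backbone.load_state_dict(state, strict=False)

# Keep everything up to (and including) global average pooling to get the 2048-d embedding.
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

with torch.no_grad():
    feats = feature_extractor(torch.randn(1, 3, 224, 224)).flatten(1)  # shape (1, 2048)

# Matryoshka property: the first d coordinates already form a usable embedding.
for d in (8, 16, 64, 256, 2048):
    emb = torch.nn.functional.normalize(feats[:, :d], dim=-1)
    print(d, tuple(emb.shape))
```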
## Citation
If you find this project useful in your research, please consider citing:
```
@inproceedings{kusupati2022matryoshka,
title = {Matryoshka Representation Learning},
author = {Kusupati, Aditya and Bhatt, Gantavya and Rege, Aniket and Wallingford, Matthew and Sinha, Aditya and Ramanujan, Vivek and Howard-Snyder, William and Chen, Kaifeng and Kakade, Sham and Jain, Prateek and others},
booktitle = {Advances in Neural Information Processing Systems},
month = {December},
year = {2022},
}
```
|
NasimB/gpt2-concat-all-text-processign-rarity-all-iorder-est-5p5k
|
NasimB
| 2023-07-13T05:53:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T04:18:13Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-all-text-processign-rarity-all-iorder-est-5p5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-all-text-processign-rarity-all-iorder-est-5p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7435 | 0.32 | 500 | 5.6693 |
| 5.3983 | 0.63 | 1000 | 5.2259 |
| 5.0552 | 0.95 | 1500 | 4.9848 |
| 4.7718 | 1.27 | 2000 | 4.8394 |
| 4.6329 | 1.58 | 2500 | 4.7145 |
| 4.5322 | 1.9 | 3000 | 4.6174 |
| 4.3187 | 2.22 | 3500 | 4.5659 |
| 4.2361 | 2.53 | 4000 | 4.4971 |
| 4.1996 | 2.85 | 4500 | 4.4287 |
| 4.0309 | 3.17 | 5000 | 4.4140 |
| 3.9128 | 3.48 | 5500 | 4.3761 |
| 3.8993 | 3.8 | 6000 | 4.3344 |
| 3.7784 | 4.12 | 6500 | 4.3363 |
| 3.619 | 4.43 | 7000 | 4.3222 |
| 3.6107 | 4.75 | 7500 | 4.3063 |
| 3.5596 | 5.07 | 8000 | 4.3030 |
| 3.4209 | 5.38 | 8500 | 4.3070 |
| 3.4095 | 5.7 | 9000 | 4.3053 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
YanJiangJerry/SA-tweet-roberta-large-e4-w1-1.5-b16-m4
|
YanJiangJerry
| 2023-07-13T05:42:53Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T05:19:19Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA-tweet-roberta-large-e4-w1-1.5-b16-m4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA-tweet-roberta-large-e4-w1-1.5-b16-m4
This model is a fine-tuned version of [Amalq/autotrain-smm4h_large_roberta_clean-874027878](https://huggingface.co/Amalq/autotrain-smm4h_large_roberta_clean-874027878) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3545
- Accuracy: 0.945
- F1: 0.9511
- Precision: 0.9537
- Recall: 0.9486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 285 | 0.1933 | 0.92 | 0.9290 | 0.9306 | 0.9273 |
| 0.2508 | 2.0 | 570 | 0.2097 | 0.933 | 0.9411 | 0.9337 | 0.9486 |
| 0.2508 | 3.0 | 855 | 0.2958 | 0.937 | 0.9450 | 0.9312 | 0.9592 |
| 0.0947 | 4.0 | 1140 | 0.3545 | 0.945 | 0.9511 | 0.9537 | 0.9486 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ajaydvrj/my-qa-model
|
ajaydvrj
| 2023-07-13T05:31:47Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T05:25:01Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my-qa-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-qa-model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4373
- Train Accuracy: 0.1538
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 2.6280 | 0.0769 | 0 |
| 2.5014 | 0.1538 | 1 |
| 2.5604 | 0.2308 | 2 |
| 2.5289 | 0.0769 | 3 |
| 2.4373 | 0.1538 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.13.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
localmodels/Guanaco-65B-GPTQ
|
localmodels
| 2023-07-13T05:21:10Z | 7 | 4 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"arxiv:2305.14314",
"arxiv:2302.13971",
"arxiv:2304.07327",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-28T21:51:04Z |
# Guanaco 65B GPTQ
From: https://huggingface.co/timdettmers/guanaco-65b
---
## Model
* guanaco-65b-4bit.safetensors
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ
* Parameters: Groupsize = None. act-order
---
# Guanaco Models Based on LLaMA
| [Paper](https://arxiv.org/abs/2305.14314) | [Code](https://github.com/artidoro/qlora) | [Demo](https://huggingface.co/spaces/uwnlp/guanaco-playground-tgi) |
**The Guanaco models are open-source finetuned chatbots obtained through 4-bit QLoRA tuning of LLaMA base models on the OASST1 dataset. They are available in 7B, 13B, 33B, and 65B parameter sizes.**
⚠️Guanaco is a model purely intended for research purposes and could produce problematic outputs.
## Why use Guanaco?
- **Competitive with commercial chatbot systems on the Vicuna and OpenAssistant benchmarks** (ChatGPT and BARD) according to human and GPT-4 raters. We note that the relative performance on tasks not covered in these benchmarks could be very different. In addition, commercial systems evolve over time (we used outputs from the March 2023 version of the models).
- **Available open-source for research purposes**. Guanaco models allow *cheap* and *local* experimentation with high-quality chatbot systems.
- **Replicable and efficient training procedure** that can be extended to new use cases. Guanaco training scripts are available in the [QLoRA repo](https://github.com/artidoro/qlora).
- **Rigorous comparison to 16-bit methods** (both 16-bit full-finetuning and LoRA) in [our paper](https://arxiv.org/abs/2305.14314) demonstrates the effectiveness of 4-bit QLoRA finetuning.
- **Lightweight** checkpoints which only contain adapter weights.
## License and Intended Use
Guanaco adapter weights are available under the Apache 2 license. Note that use of the Guanaco adapter weights requires access to the LLaMA model weights.
Guanaco is based on LLaMA and therefore should be used according to the LLaMA license.
## Usage
Here is an example of how you would load Guanaco 7B in 4-bits:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
load_in_4bit=True,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type='nf4'
),
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
Inference can then be performed as usual with HF models as follows:
```python
prompt = "Introduce yourself"
formatted_prompt = (
f"A chat between a curious human and an artificial intelligence assistant."
f"The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
f"### Human: {prompt} ### Assistant:"
)
inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Expected output similar to the following:
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Human: Introduce yourself ### Assistant: I am an artificial intelligence assistant. I am here to help you with any questions you may have.
```
## Current Inference Limitations
Currently, 4-bit inference is slow. We recommend loading in 16 bits if inference speed is a concern. We are actively working on releasing efficient 4-bit inference kernels.
Below is how you would load the model in 16 bits:
```python
model_name = "huggyllama/llama-7b"
adapters_name = 'timdettmers/guanaco-7b'
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
max_memory= {i: '24000MB' for i in range(torch.cuda.device_count())},
)
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Model Card
**Architecture**: The Guanaco models are LoRA adapters to be used on top of LLaMA models. They are added to all layers. For all model sizes, we use $r=64$.
**Base Model**: Guanaco uses LLaMA as base model with sizes 7B, 13B, 33B, 65B. LLaMA is a causal language model pretrained on a large corpus of text. See [LLaMA paper](https://arxiv.org/abs/2302.13971) for more details. Note that Guanaco can inherit biases and limitations of the base model.
**Finetuning Data**: Guanaco is finetuned on OASST1. The exact dataset is available at [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
**Languages**: The OASST1 dataset is multilingual (see [the paper](https://arxiv.org/abs/2304.07327) for details) and as such Guanaco responds to user queries in different languages. We note, however, that OASST1 is heavy in high-resource languages. In addition, human evaluation of Guanaco was only performed in English and based on qualitative analysis we observed degradation in performance in other languages.
Next, we describe Training and Evaluation details.
### Training
Guanaco models are the result of 4-bit QLoRA supervised finetuning on the OASST1 dataset.
All models use NormalFloat4 datatype for the base model and LoRA adapters on all linear layers with BFloat16 as computation datatype. We set LoRA $r=64$, $\alpha=16$. We also use Adam beta2 of 0.999, max grad norm of 0.3 and LoRA dropout of 0.1 for models up to 13B and 0.05 for 33B and 65B models.
For the finetuning process, we use constant learning rate schedule and paged AdamW optimizer.
### Training hyperparameters
Size| Dataset | Batch Size | Learning Rate | Max Steps | Sequence length
---|---|---|---|---|---
7B | OASST1 | 16 | 2e-4 | 1875 | 512
13B | OASST1 | 16 | 2e-4 | 1875 | 512
33B | OASST1 | 16 | 1e-4 | 1875 | 512
65B | OASST1 | 16 | 1e-4 | 1875 | 512
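Expressed with `peft`, `bitsandbytes`, and `transformers`, the setup above corresponds roughly to the following sketch. The module list and exact trainer flags are assumptions for illustration, not the authors' training script (which lives in the QLoRA repo):
```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 base weights
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # BFloat16 compute
)

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,                       # 0.05 for the 33B/65B models
    bias="none",
    task_type="CAUSAL_LM",
    # Assumed list of LLaMA linear layers ("LoRA adapters on all linear layers").
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)

training_args = TrainingArguments(
    output_dir="guanaco-qlora",             # hypothetical output path
    per_device_train_batch_size=16,
    learning_rate=2e-4,                     # 1e-4 for the 33B/65B models
    max_steps=1875,
    lr_scheduler_type="constant",
    optim="paged_adamw_32bit",
    max_grad_norm=0.3,
    adam_beta2=0.999,
    bf16=True,
)
```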
### Evaluation
We test generative language capabilities through both automated and human evaluations. This second set of evaluations relies on queries curated by humans and aims at measuring the quality of model responses. We use the Vicuna and OpenAssistant datasets with 80 and 953 prompts respectively.
In both human and automated evaluations, for each prompt, raters compare all pairs of responses across the models considered. For human raters we randomize the order of the systems, for GPT-4 we evaluate with both orders.
Benchmark | Vicuna | | Vicuna | | OpenAssistant | | -
-----------|----|-----|--------|---|---------------|---|---
Prompts | 80 | | 80 | | 953 | |
Judge | Human | | GPT-4 | | GPT-4 | |
Model | Elo | Rank | Elo | Rank | Elo | Rank | **Median Rank**
GPT-4 | 1176 | 1 | 1348 | 1 | 1294 | 1 | 1
Guanaco-65B | 1023 | 2 | 1022 | 2 | 1008 | 3 | 2
Guanaco-33B | 1009 | 4 | 992 | 3 | 1002 | 4 | 4
ChatGPT-3.5 Turbo | 916 | 7 | 966 | 5 | 1015 | 2 | 5
Vicuna-13B | 984 | 5 | 974 | 4 | 936 | 5 | 5
Guanaco-13B | 975 | 6 | 913 | 6 | 885 | 6 | 6
Guanaco-7B | 1010 | 3 | 879 | 8 | 860 | 7 | 7
Bard | 909 | 8 | 902 | 7 | - | - | 8
We also use the MMLU benchmark to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy.
Dataset | 7B | 13B | 33B | 65B
---|---|---|---|---
LLaMA no tuning | 35.1 | 46.9 | 57.8 | 63.4
Self-Instruct | 36.4 | 33.3 | 53.0 | 56.7
Longform | 32.1 | 43.2 | 56.6 | 59.7
Chip2 | 34.5 | 41.6 | 53.6 | 59.8
HH-RLHF | 34.9 | 44.6 | 55.8 | 60.1
Unnatural Instruct | 41.9 | 48.1 | 57.3 | 61.3
OASST1 (Guanaco) | 36.6 | 46.4 | 57.0 | 62.2
Alpaca | 38.8 | 47.8 | 57.3 | 62.5
FLAN v2 | 44.5 | 51.4 | 59.2 | 63.9
## Risks and Biases
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. The model was trained on various public datasets; it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
However, we note that finetuning on OASST1 seems to reduce biases as measured on the CrowS dataset. We report here the performance of Guanaco-65B compared to other baseline models on the CrowS dataset.
| | LLaMA-65B | GPT-3 | OPT-175B | Guanaco-65B |
|----------------------|-----------|-------|----------|---------------|
| Gender | 70.6 | 62.6 | 65.7 | **47.5** |
| Religion             | 79.0      | 73.3  | 68.6     | **38.7**      |
| Race/Color | 57.0 | 64.7 | 68.6 | **45.3** |
| Sexual orientation   | 81.0      | 76.2  | 78.6     | **59.1**      |
| Age | 70.1 | 64.4 | 67.8 | **36.3** |
| Nationality | 64.2 | 61.6 | 62.9 | **32.4** |
| Disability | 66.7 | 76.7 | 76.7 | **33.9** |
| Physical appearance | 77.8 | 74.6 | 76.2 | **43.1** |
| Socioeconomic status | 71.5 | 73.8 | 76.2 | **55.3** |
| Average | 66.6 | 67.2 | 69.5 | **43.5** |
## Citation
```bibtex
@article{dettmers2023qlora,
title={QLoRA: Efficient Finetuning of Quantized LLMs},
author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
journal={arXiv preprint arXiv:2305.14314},
year={2023}
}
```
|
HoaAn2003/ppo-LunarLander-v2
|
HoaAn2003
| 2023-07-13T05:06:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T05:06:16Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.13 +/- 20.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename below is assumed.
checkpoint = load_from_hub(repo_id="HoaAn2003/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
vivek22/vivek
|
vivek22
| 2023-07-13T05:05:38Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"en",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-12T12:30:41Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
language:
- en
pipeline_tag: text-to-image
---
# LoRA text2image fine-tuning - vivek22/vivek
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the vivek22/randm dataset.
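A minimal inference sketch (assumes the LoRA weights load onto the v1-5 pipeline via `load_attn_procs`; the prompt is only an example):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("vivek22/vivek")  # LoRA attention weights from this repo
image = pipe("a photo in the style of the fine-tuned dataset", num_inference_steps=30).images[0]
image.save("sample.png")
```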
|
Treeizard/tactile_generator
|
Treeizard
| 2023-07-13T04:57:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-13T04:57:07Z |
---
license: creativeml-openrail-m
---
|
FelixChao/falcon-7b-instruct-ft-adapters-ESG-chatting
|
FelixChao
| 2023-07-13T04:55:48Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-13T04:55:35Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
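A loading sketch that mirrors this configuration at inference time. The base model id is inferred from the repo name and is an assumption, as is `trust_remote_code=True` for the Falcon architecture:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct",   # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "FelixChao/falcon-7b-instruct-ft-adapters-ESG-chatting")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")
```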
### Framework versions
- PEFT 0.4.0.dev0
|
YanJiangJerry/SA-tweet-roberta-large-e4-w1-1.5-b16
|
YanJiangJerry
| 2023-07-13T04:53:22Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T04:17:05Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: SA-tweet-roberta-large-e4-w1-1.5-b16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA-tweet-roberta-large-e4-w1-1.5-b16
This model is a fine-tuned version of [Amalq/autotrain-smm4h_large_roberta_clean-874027878](https://huggingface.co/Amalq/autotrain-smm4h_large_roberta_clean-874027878) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6396
- Accuracy: 0.9166
- F1: 0.8872
- Precision: 0.8939
- Recall: 0.8806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2895 | 1.0 | 581 | 0.4026 | 0.9110 | 0.8806 | 0.8806 | 0.8806 |
| 0.1182 | 2.0 | 1162 | 0.6190 | 0.9110 | 0.8754 | 0.9153 | 0.8388 |
| 0.0589 | 3.0 | 1743 | 0.6167 | 0.9155 | 0.8838 | 0.9060 | 0.8627 |
| 0.0211 | 4.0 | 2324 | 0.6396 | 0.9166 | 0.8872 | 0.8939 | 0.8806 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ui-chope/distilbert-base-uncased
|
ui-chope
| 2023-07-13T04:52:42Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-05T01:45:44Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1298
- Precision: 0.9739
- Recall: 0.9617
- F1: 0.9678
- Accuracy: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0218 | 1.0 | 5296 | 0.0828 | 0.9609 | 0.9609 | 0.9609 | 0.9842 |
| 0.0159 | 2.0 | 10592 | 0.1135 | 0.9677 | 0.9602 | 0.9639 | 0.9820 |
| 0.0137 | 3.0 | 15888 | 0.0846 | 0.9631 | 0.9570 | 0.9600 | 0.9831 |
| 0.0074 | 4.0 | 21184 | 0.1179 | 0.9621 | 0.9523 | 0.9572 | 0.9804 |
| 0.0058 | 5.0 | 26480 | 0.1080 | 0.9763 | 0.9664 | 0.9713 | 0.9857 |
| 0.0056 | 6.0 | 31776 | 0.1273 | 0.9685 | 0.9594 | 0.9639 | 0.9828 |
| 0.0055 | 7.0 | 37072 | 0.1451 | 0.9637 | 0.9531 | 0.9584 | 0.9800 |
| 0.0035 | 8.0 | 42368 | 0.1345 | 0.9707 | 0.9563 | 0.9634 | 0.9805 |
| 0.0027 | 9.0 | 47664 | 0.1242 | 0.9739 | 0.9633 | 0.9686 | 0.9852 |
| 0.0018 | 10.0 | 52960 | 0.1232 | 0.9739 | 0.9633 | 0.9686 | 0.9844 |
| 0.0017 | 11.0 | 58256 | 0.1298 | 0.9739 | 0.9617 | 0.9678 | 0.9837 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
localmodels/Vicuna-7B-v1.3-GPTQ
|
localmodels
| 2023-07-13T04:47:45Z | 15 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"arxiv:2302.13971",
"arxiv:2306.05685",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T04:47:41Z |
---
duplicated_from: localmodels/LLM
---
# Vicuna 7B v1.3 GPTQ
From LMSYS: https://huggingface.co/lmsys/vicuna-7b-v1.3
---
| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| vicuna-7b-v1.3-GPTQ-4bit-128g.no-act.order | 4 | 128 | False | 4.00 GB | True | GPTQ-for-LLaMa | Most compatible. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. |
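A minimal AutoGPTQ loading sketch; flags may need adjusting for your AutoGPTQ version, and the prompt format is only an example:
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# model_basename matches the file listed in the table above.
model = AutoGPTQForCausalLM.from_quantized(
    "localmodels/Vicuna-7B-v1.3-GPTQ",
    model_basename="vicuna-7b-v1.3-GPTQ-4bit-128g.no-act.order",
    use_safetensors=True,
    device="cuda:0",
)
tokenizer = AutoTokenizer.from_pretrained("localmodels/Vicuna-7B-v1.3-GPTQ", use_fast=True)

prompt = "A chat between a curious user and an artificial intelligence assistant. USER: Hello! ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```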
---
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
## Training Details
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 140K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
|
kazuhidet/norurun
|
kazuhidet
| 2023-07-13T04:23:39Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T04:06:49Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of mascot norurun
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - kazuhidet/norurun
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of mascot norurun using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
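A minimal inference sketch for this pipeline (the instance prompt comes from the card metadata; sampling parameters are only examples):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("kazuhidet/norurun", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of mascot norurun", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("norurun.png")
```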
|
AnirbanRC/flan_t5_small_finetuned_anirbanrc
|
AnirbanRC
| 2023-07-13T04:12:54Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-13T04:03:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan_t5_small_finetuned_anirbanrc
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: train[:50]
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 43.2639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan_t5_small_finetuned_anirbanrc
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5172
- Rouge1: 43.2639
- Rouge2: 20.726
- Rougel: 37.0774
- Rougelsum: 39.6232
- Gen Len: 16.92
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 7 | 1.6379 | 42.0058 | 18.6227 | 35.3019 | 38.6413 | 17.36 |
| No log | 2.0 | 14 | 1.5869 | 43.938 | 20.3595 | 36.876 | 40.0421 | 17.14 |
| No log | 3.0 | 21 | 1.5483 | 43.3723 | 20.3935 | 36.9286 | 39.6476 | 17.0 |
| No log | 4.0 | 28 | 1.5255 | 43.9774 | 21.5464 | 37.8954 | 40.5009 | 16.9 |
| No log | 5.0 | 35 | 1.5172 | 43.2639 | 20.726 | 37.0774 | 39.6232 | 16.92 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
abbiezz/tomuntitled
|
abbiezz
| 2023-07-13T04:12:40Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-13T04:06:35Z |
---
license: openrail
---
https://drive.google.com/file/d/1qilU9BEfX7RY8q9Uohesz9qQa0R_B5PW/view?usp=drive_link
|
quyc/picture
|
quyc
| 2023-07-13T03:51:01Z | 54 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"image-to-text",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-07-12T07:05:17Z |
---
pipeline_tag: image-to-text
---
|
nic1122/roberta-base-finetuned-cluener2020-chinese
|
nic1122
| 2023-07-13T03:50:31Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-13T03:09:03Z |
Derived from: https://huggingface.co/uer/roberta-base-finetuned-cluener2020-chinese
Adapted to IOB tagging so the model can be used with Elasticsearch ML.
|
rdyzakya/IndoLEGO-ABSA
|
rdyzakya
| 2023-07-13T03:43:17Z | 113 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"id",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-12T13:28:26Z |
---
language:
- id
metrics:
- f1
pipeline_tag: text2text-generation
---
|
NasimB/gpt2-concat-all-base-rarity-all-iorder-est-5p5k
|
NasimB
| 2023-07-13T03:30:43Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T01:51:53Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-all-base-rarity-all-iorder-est-5p5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-all-base-rarity-all-iorder-est-5p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3322
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7625 | 0.31 | 500 | 5.6584 |
| 5.4053 | 0.63 | 1000 | 5.2182 |
| 5.0653 | 0.94 | 1500 | 4.9736 |
| 4.7706 | 1.25 | 2000 | 4.8109 |
| 4.6273 | 1.56 | 2500 | 4.6831 |
| 4.5134 | 1.88 | 3000 | 4.5789 |
| 4.3042 | 2.19 | 3500 | 4.5166 |
| 4.2107 | 2.5 | 4000 | 4.4533 |
| 4.1747 | 2.82 | 4500 | 4.3963 |
| 4.0257 | 3.13 | 5000 | 4.3718 |
| 3.8934 | 3.44 | 5500 | 4.3419 |
| 3.8694 | 3.75 | 6000 | 4.3086 |
| 3.7894 | 4.07 | 6500 | 4.2941 |
| 3.5908 | 4.38 | 7000 | 4.2908 |
| 3.586 | 4.69 | 7500 | 4.2727 |
| 3.5713 | 5.01 | 8000 | 4.2605 |
| 3.3959 | 5.32 | 8500 | 4.2717 |
| 3.3922 | 5.63 | 9000 | 4.2700 |
| 3.3874 | 5.94 | 9500 | 4.2690 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
keehun/textual_inversion_los_char
|
keehun
| 2023-07-13T03:23:01Z | 5 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-13T02:55:30Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - keehun/textual_inversion_los_char
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
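A minimal inference sketch; the placeholder token below is hypothetical, so check the repo's learned embedding for the actual token name before use:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("keehun/textual_inversion_los_char")
# "<los-char>" is a hypothetical placeholder token; use the token stored in the repo.
image = pipe("a portrait of <los-char>", num_inference_steps=50).images[0]
image.save("los_char.png")
```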
|
anirbankgec/flan_t5_small_finetuned
|
anirbankgec
| 2023-07-13T03:17:06Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-13T02:26:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan_t5_small_finetuned
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: train[:50]
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 44.9038
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan_t5_small_finetuned
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3412
- Rouge1: 44.9038
- Rouge2: 22.7667
- Rougel: 38.8789
- Rougelsum: 41.7196
- Gen Len: 16.92
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 7 | 1.4537 | 43.8958 | 21.4426 | 37.7876 | 41.0621 | 17.4 |
| No log | 2.0 | 14 | 1.4025 | 44.291 | 21.7752 | 37.6394 | 41.0092 | 17.1 |
| No log | 3.0 | 21 | 1.3687 | 44.5803 | 22.5489 | 38.5959 | 41.6202 | 17.0 |
| No log | 4.0 | 28 | 1.3487 | 44.6139 | 22.5884 | 38.6786 | 41.5311 | 16.94 |
| No log | 5.0 | 35 | 1.3412 | 44.9038 | 22.7667 | 38.8789 | 41.7196 | 16.92 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sd-dreambooth-library/HairDye
|
sd-dreambooth-library
| 2023-07-13T03:05:23Z | 31 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-02-03T01:16:40Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: daizky1
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
---
### Hair Dye Dreambooth model
[Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb).
### TERMS OF SERVICE:
# No Selling Models
# Merge with CREDIT
VAE is not required but is fun.
I am not responsible for what you make.
If this model bites you call the CIA.
### Codeword:
daizky1 (use that on your prompt)
|
Hedayat-Abrishami/rl_course_vizdoom_health_gathering_supreme
|
Hedayat-Abrishami
| 2023-07-13T02:54:34Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T01:42:33Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.83 +/- 5.92
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Hedayat-Abrishami/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
yuean/my_resnet50_model
|
yuean
| 2023-07-13T02:41:43Z | 249 | 0 |
transformers
|
[
"transformers",
"pytorch",
"resnet",
"image-classification",
"dataset:yuean/EuroSAT-2750",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-12T05:55:17Z |
---
metrics:
- accuracy
pipeline_tag: image-classification
datasets:
- yuean/EuroSAT-2750
---
|
duyvt6663/RoBERTa_Subj
|
duyvt6663
| 2023-07-13T02:39:04Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-13T02:04:23Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: RoBERTa_Subj
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# RoBERTa_Subj
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0234
- Validation Loss: 0.1242
- Train Accuracy: 0.965
- Epoch: 3
## Model description
This model sucks; please don't use it.
## Intended uses & limitations
More information needed
## Training and evaluation data
The model was trained on the SetFit/Subj dataset.
The dataset, I believe, contains plot lines from movies and reviews. The former are all labeled "objective", while the latter are labeled "subjective".
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.1849 | 0.1106 | 0.9595 | 0 |
| 0.0652 | 0.1081 | 0.9625 | 1 |
| 0.0234 | 0.1242 | 0.965 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
chungkaley/sd-class-butterflies-32
|
chungkaley
| 2023-07-13T02:16:58Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-07-13T02:16:30Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('chungkaley/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
VitCon/poca-SoccerTwos
|
VitCon
| 2023-07-13T02:16:00Z | 39 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-13T02:15:11Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: VitCon/poca-SoccerTwos
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
ketong3906/opus-mt-zh-en-finetuned-chn-to-eng
|
ketong3906
| 2023-07-13T02:04:29Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-07T03:06:50Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-zh-en-finetuned-chn-to-eng
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-zh-en-finetuned-chn-to-eng
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-zh-en](https://huggingface.co/Helsinki-NLP/opus-mt-zh-en) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:-------:|
| No log | 1.0 | 8 | 5.2454 | 0.346 | 66.3438 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
digitalmax1/max
|
digitalmax1
| 2023-07-13T01:59:47Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"art",
"code",
"legal",
"conversational",
"ar",
"dataset:Open-Orca/OpenOrca",
"license:bigscience-bloom-rail-1.0",
"region:us"
] |
text-generation
| 2023-07-13T01:54:08Z |
---
license: bigscience-bloom-rail-1.0
datasets:
- Open-Orca/OpenOrca
language:
- ar
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: conversational
tags:
- art
- code
- legal
---
|
NasimB/gpt2-concat-all-ind-txt-processing-indv-rarity-all
|
NasimB
| 2023-07-13T01:10:16Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-12T23:19:57Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-all-ind-txt-processing-indv-rarity-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-all-ind-txt-processing-indv-rarity-all
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8601
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7251 | 0.32 | 500 | 5.7246 |
| 5.3996 | 0.63 | 1000 | 5.3546 |
| 5.0679 | 0.95 | 1500 | 5.1410 |
| 4.7966 | 1.26 | 2000 | 5.0632 |
| 4.6756 | 1.58 | 2500 | 4.9923 |
| 4.5763 | 1.89 | 3000 | 4.9037 |
| 4.3886 | 2.21 | 3500 | 4.8967 |
| 4.3114 | 2.52 | 4000 | 4.8354 |
| 4.271 | 2.84 | 4500 | 4.8159 |
| 4.1222 | 3.15 | 5000 | 4.8393 |
| 4.0013 | 3.47 | 5500 | 4.8138 |
| 3.9867 | 3.78 | 6000 | 4.7791 |
| 3.8843 | 4.1 | 6500 | 4.7971 |
| 3.7123 | 4.41 | 7000 | 4.7975 |
| 3.7049 | 4.73 | 7500 | 4.7909 |
| 3.6663 | 5.04 | 8000 | 4.8015 |
| 3.5174 | 5.36 | 8500 | 4.8131 |
| 3.5098 | 5.67 | 9000 | 4.8154 |
| 3.5064 | 5.99 | 9500 | 4.8154 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
tbooy/Taxi-v3
|
tbooy
| 2023-07-13T00:58:52Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T00:58:41Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="tbooy/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NasimB/gpt2-concat-all-indv-rarity-all-no-cut
|
NasimB
| 2023-07-13T00:56:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-12T22:49:52Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-all-indv-rarity-all-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-all-indv-rarity-all-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7423 | 0.31 | 500 | 5.7056 |
| 5.4151 | 0.62 | 1000 | 5.3509 |
| 5.0793 | 0.93 | 1500 | 5.1405 |
| 4.8005 | 1.25 | 2000 | 5.0365 |
| 4.6822 | 1.56 | 2500 | 4.9729 |
| 4.5889 | 1.87 | 3000 | 4.8875 |
| 4.4127 | 2.18 | 3500 | 4.8860 |
| 4.321 | 2.49 | 4000 | 4.8474 |
| 4.291 | 2.8 | 4500 | 4.8098 |
| 4.1671 | 3.12 | 5000 | 4.8175 |
| 4.03 | 3.43 | 5500 | 4.7896 |
| 4.0205 | 3.74 | 6000 | 4.7708 |
| 3.9611 | 4.05 | 6500 | 4.7766 |
| 3.7474 | 4.36 | 7000 | 4.7885 |
| 3.7534 | 4.67 | 7500 | 4.7768 |
| 3.7348 | 4.98 | 8000 | 4.7623 |
| 3.505 | 5.3 | 8500 | 4.8038 |
| 3.4914 | 5.61 | 9000 | 4.8014 |
| 3.4892 | 5.92 | 9500 | 4.8054 |
| 3.3732 | 6.23 | 10000 | 4.8209 |
| 3.3408 | 6.54 | 10500 | 4.8276 |
| 3.3367 | 6.85 | 11000 | 4.8293 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
tbooy/q-FrozenLake-v1-4x4-noSlippery
|
tbooy
| 2023-07-13T00:56:16Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T00:56:09Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="tbooy/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Hedayat-Abrishami/LunarLander-v2-chap8
|
Hedayat-Abrishami
| 2023-07-13T00:03:40Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-13T00:03:34Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -159.79 +/- 104.91
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'Name'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Hedayat-Abrishami/LunarLander-v2-chap8'
'batch_size': 512
'minibatch_size': 128}
```
|
Hedayat-Abrishami/ppo-CartPole-v1
|
Hedayat-Abrishami
| 2023-07-12T23:58:20Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T23:51:42Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 223.00 +/- 113.45
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
# Hyperparameters
```python
{'exp_name': 'Name',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'CartPole-v1',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Hedayat-Abrishami/ppo-CartPole-v1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
rohn132/ppo-LunarLander-v2
|
rohn132
| 2023-07-12T23:58:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T23:57:59Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.99 +/- 28.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed from the usual export naming; check the repo's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumption, verify it in the repo
checkpoint = load_from_hub(repo_id="rohn132/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
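
A possible follow-up for re-checking the reported score, assuming a Gymnasium install with the Box2D extras (`model` is the agent loaded above):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```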
|
Blu72/Falcon
|
Blu72
| 2023-07-12T23:48:51Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-07-12T23:48:30Z |
---
license: openrail
---
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b")
```
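
A minimal generation sketch; it assumes hardware able to host the 40B checkpoint and repeats the loading lines so it runs standalone (the prompt is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-40b")

inputs = tokenizer("The Falcon 40B model is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```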
|
hatemilkins/Kanamori-Sayaka
|
hatemilkins
| 2023-07-12T23:41:58Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2023-07-12T23:37:43Z |
---
license: cc-by-nc-nd-4.0
---
|
SHENMU007/neunit_BASE_V12.5
|
SHENMU007
| 2023-07-12T23:39:19Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-12T20:39:15Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
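
A minimal inference sketch, assuming the fine-tuned weights load with the standard SpeechT5 classes and can be paired with the public HiFi-GAN vocoder; the all-zeros speaker embedding is a placeholder (a real 512-dim x-vector gives a more natural voice):

```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "SHENMU007/neunit_BASE_V12.5"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="你好", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector; replace with a real speaker embedding
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
# `speech` is a 1-D waveform tensor sampled at 16 kHz
```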
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ayanban011/vit-base_tobacco_bs_16_lr_1e-5_e_200_wr_0.05_wd_0.4_split
|
ayanban011
| 2023-07-12T23:33:16Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-12T18:58:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_tobacco_bs_16_lr_1e-5_e_200_wr_0.05_wd_0.4_split
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_tobacco_bs_16_lr_1e-5_e_200_wr_0.05_wd_0.4_split
This model is a fine-tuned version of [jordyvl/vit-base_tobacco](https://huggingface.co/jordyvl/vit-base_tobacco) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0411
- Accuracy: 0.8333
- Brier Loss: 0.3084
- Nll: 1.3568
- F1 Micro: 0.8333
- F1 Macro: 0.8183
- Ece: 0.1563
- Aurc: 0.0847
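
A minimal classification sketch, assuming the image processor config was pushed alongside the weights; the input path is a hypothetical document image:

```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "ayanban011/vit-base_tobacco_bs_16_lr_1e-5_e_200_wr_0.05_wd_0.4_split"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```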
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 0.99 | 43 | 0.7544 | 0.7960 | 0.3088 | 1.3391 | 0.7960 | 0.7715 | 0.1991 | 0.0817 |
| No log | 2.0 | 87 | 0.7158 | 0.8218 | 0.2920 | 1.1888 | 0.8218 | 0.7941 | 0.1863 | 0.0741 |
| No log | 2.98 | 130 | 0.7144 | 0.7989 | 0.2932 | 1.2958 | 0.7989 | 0.7701 | 0.1628 | 0.0749 |
| No log | 3.99 | 174 | 0.6762 | 0.8305 | 0.2749 | 1.1916 | 0.8305 | 0.8076 | 0.1844 | 0.0678 |
| No log | 4.98 | 217 | 0.6710 | 0.8362 | 0.2745 | 1.0739 | 0.8362 | 0.8076 | 0.1696 | 0.0664 |
| No log | 5.99 | 261 | 0.6532 | 0.8362 | 0.2675 | 1.0011 | 0.8362 | 0.8115 | 0.1750 | 0.0602 |
| No log | 6.98 | 304 | 0.6404 | 0.8362 | 0.2635 | 1.0072 | 0.8362 | 0.8106 | 0.1714 | 0.0633 |
| No log | 7.99 | 348 | 0.6635 | 0.8218 | 0.2707 | 1.0903 | 0.8218 | 0.8030 | 0.1513 | 0.0770 |
| No log | 9.0 | 392 | 0.6167 | 0.8420 | 0.2534 | 1.0176 | 0.8420 | 0.8259 | 0.1613 | 0.0796 |
| No log | 9.99 | 435 | 0.6496 | 0.8276 | 0.2703 | 0.9646 | 0.8276 | 0.8085 | 0.1643 | 0.0588 |
| No log | 11.0 | 479 | 0.6091 | 0.8506 | 0.2467 | 1.1036 | 0.8506 | 0.8308 | 0.1483 | 0.0650 |
| 0.4309 | 11.98 | 522 | 0.6075 | 0.8420 | 0.2483 | 0.9144 | 0.8420 | 0.8246 | 0.1391 | 0.0519 |
| 0.4309 | 12.99 | 566 | 0.6164 | 0.8276 | 0.2576 | 0.9703 | 0.8276 | 0.8092 | 0.1467 | 0.0645 |
| 0.4309 | 13.98 | 609 | 0.5893 | 0.8592 | 0.2347 | 1.1493 | 0.8592 | 0.8483 | 0.1347 | 0.0715 |
| 0.4309 | 14.99 | 653 | 0.6123 | 0.8477 | 0.2485 | 1.1889 | 0.8477 | 0.8232 | 0.1587 | 0.0764 |
| 0.4309 | 16.0 | 697 | 0.6352 | 0.8420 | 0.2615 | 1.1999 | 0.8420 | 0.8403 | 0.1368 | 0.0668 |
| 0.4309 | 16.99 | 740 | 0.6329 | 0.8333 | 0.2625 | 1.1748 | 0.8333 | 0.8249 | 0.1267 | 0.0744 |
| 0.4309 | 18.0 | 784 | 0.6350 | 0.8448 | 0.2590 | 1.2154 | 0.8448 | 0.8386 | 0.1423 | 0.0688 |
| 0.4309 | 18.98 | 827 | 0.5892 | 0.8592 | 0.2383 | 1.1001 | 0.8592 | 0.8515 | 0.1293 | 0.0630 |
| 0.4309 | 19.99 | 871 | 0.5981 | 0.8477 | 0.2476 | 1.0104 | 0.8477 | 0.8375 | 0.1345 | 0.0630 |
| 0.4309 | 20.98 | 914 | 0.6484 | 0.8420 | 0.2642 | 1.3553 | 0.8420 | 0.8292 | 0.1490 | 0.0770 |
| 0.4309 | 21.99 | 958 | 0.6298 | 0.8305 | 0.2657 | 1.1220 | 0.8305 | 0.8208 | 0.1292 | 0.0670 |
| 0.1285 | 22.98 | 1001 | 0.6325 | 0.8391 | 0.2633 | 1.2549 | 0.8391 | 0.8362 | 0.1328 | 0.0708 |
| 0.1285 | 23.99 | 1045 | 0.6032 | 0.8534 | 0.2486 | 1.1258 | 0.8534 | 0.8444 | 0.1229 | 0.0706 |
| 0.1285 | 25.0 | 1089 | 0.6080 | 0.8534 | 0.2460 | 1.2033 | 0.8534 | 0.8414 | 0.1257 | 0.0755 |
| 0.1285 | 25.99 | 1132 | 0.6321 | 0.8391 | 0.2667 | 1.2242 | 0.8391 | 0.8355 | 0.1349 | 0.0697 |
| 0.1285 | 27.0 | 1176 | 0.6325 | 0.8592 | 0.2522 | 1.2029 | 0.8592 | 0.8493 | 0.1278 | 0.0778 |
| 0.1285 | 27.98 | 1219 | 0.6585 | 0.8534 | 0.2546 | 1.3669 | 0.8534 | 0.8378 | 0.1368 | 0.0890 |
| 0.1285 | 28.99 | 1263 | 0.6302 | 0.8563 | 0.2517 | 1.2419 | 0.8563 | 0.8508 | 0.1294 | 0.0751 |
| 0.1285 | 29.98 | 1306 | 0.6663 | 0.8477 | 0.2637 | 1.4132 | 0.8477 | 0.8339 | 0.1399 | 0.0828 |
| 0.1285 | 30.99 | 1350 | 0.7063 | 0.8362 | 0.2799 | 1.4323 | 0.8362 | 0.8330 | 0.1441 | 0.0863 |
| 0.1285 | 32.0 | 1394 | 0.6564 | 0.8506 | 0.2570 | 1.1583 | 0.8506 | 0.8417 | 0.1358 | 0.0847 |
| 0.1285 | 32.99 | 1437 | 0.6738 | 0.8477 | 0.2647 | 1.3855 | 0.8477 | 0.8398 | 0.1305 | 0.0775 |
| 0.1285 | 34.0 | 1481 | 0.6528 | 0.8563 | 0.2559 | 1.2601 | 0.8563 | 0.8462 | 0.1310 | 0.0789 |
| 0.0385 | 34.98 | 1524 | 0.6534 | 0.8563 | 0.2537 | 1.2931 | 0.8563 | 0.8461 | 0.1241 | 0.0773 |
| 0.0385 | 35.99 | 1568 | 0.6541 | 0.8534 | 0.2525 | 1.2589 | 0.8534 | 0.8449 | 0.1315 | 0.0833 |
| 0.0385 | 36.98 | 1611 | 0.6769 | 0.8592 | 0.2545 | 1.4351 | 0.8592 | 0.8492 | 0.1242 | 0.0792 |
| 0.0385 | 37.99 | 1655 | 0.6824 | 0.8592 | 0.2576 | 1.2241 | 0.8592 | 0.8472 | 0.1327 | 0.0810 |
| 0.0385 | 38.98 | 1698 | 0.6843 | 0.8563 | 0.2589 | 1.3394 | 0.8563 | 0.8450 | 0.1311 | 0.0802 |
| 0.0385 | 39.99 | 1742 | 0.6964 | 0.8506 | 0.2630 | 1.2625 | 0.8506 | 0.8405 | 0.1310 | 0.0789 |
| 0.0385 | 41.0 | 1786 | 0.7051 | 0.8534 | 0.2671 | 1.3296 | 0.8534 | 0.8434 | 0.1353 | 0.0794 |
| 0.0385 | 41.99 | 1829 | 0.7006 | 0.8506 | 0.2645 | 1.2965 | 0.8506 | 0.8400 | 0.1373 | 0.0796 |
| 0.0385 | 43.0 | 1873 | 0.7054 | 0.8563 | 0.2646 | 1.2973 | 0.8563 | 0.8450 | 0.1313 | 0.0790 |
| 0.0385 | 43.98 | 1916 | 0.7143 | 0.8506 | 0.2673 | 1.2640 | 0.8506 | 0.8399 | 0.1359 | 0.0803 |
| 0.0385 | 44.99 | 1960 | 0.7168 | 0.8534 | 0.2665 | 1.3058 | 0.8534 | 0.8429 | 0.1389 | 0.0820 |
| 0.0206 | 45.98 | 2003 | 0.7204 | 0.8506 | 0.2669 | 1.3009 | 0.8506 | 0.8384 | 0.1336 | 0.0805 |
| 0.0206 | 46.99 | 2047 | 0.7265 | 0.8534 | 0.2683 | 1.2633 | 0.8534 | 0.8415 | 0.1319 | 0.0806 |
| 0.0206 | 48.0 | 2091 | 0.7311 | 0.8506 | 0.2695 | 1.2725 | 0.8506 | 0.8396 | 0.1372 | 0.0811 |
| 0.0206 | 48.99 | 2134 | 0.7384 | 0.8477 | 0.2729 | 1.3385 | 0.8477 | 0.8364 | 0.1387 | 0.0807 |
| 0.0206 | 50.0 | 2178 | 0.7383 | 0.8534 | 0.2695 | 1.1951 | 0.8534 | 0.8406 | 0.1344 | 0.0827 |
| 0.0206 | 50.98 | 2221 | 0.7440 | 0.8506 | 0.2740 | 1.3360 | 0.8506 | 0.8394 | 0.1418 | 0.0812 |
| 0.0206 | 51.99 | 2265 | 0.7455 | 0.8506 | 0.2727 | 1.2704 | 0.8506 | 0.8388 | 0.1351 | 0.0816 |
| 0.0206 | 52.98 | 2308 | 0.7474 | 0.8506 | 0.2708 | 1.2622 | 0.8506 | 0.8384 | 0.1334 | 0.0823 |
| 0.0206 | 53.99 | 2352 | 0.7581 | 0.8477 | 0.2750 | 1.3446 | 0.8477 | 0.8374 | 0.1406 | 0.0826 |
| 0.0206 | 54.98 | 2395 | 0.7571 | 0.8477 | 0.2751 | 1.3703 | 0.8477 | 0.8363 | 0.1378 | 0.0814 |
| 0.0206 | 55.99 | 2439 | 0.7618 | 0.8477 | 0.2752 | 1.3702 | 0.8477 | 0.8363 | 0.1363 | 0.0827 |
| 0.0206 | 57.0 | 2483 | 0.7638 | 0.8477 | 0.2749 | 1.3774 | 0.8477 | 0.8363 | 0.1394 | 0.0819 |
| 0.0135 | 57.99 | 2526 | 0.7693 | 0.8477 | 0.2760 | 1.3370 | 0.8477 | 0.8363 | 0.1378 | 0.0824 |
| 0.0135 | 59.0 | 2570 | 0.7724 | 0.8448 | 0.2779 | 1.3710 | 0.8448 | 0.8344 | 0.1431 | 0.0823 |
| 0.0135 | 59.98 | 2613 | 0.7780 | 0.8477 | 0.2784 | 1.3328 | 0.8477 | 0.8363 | 0.1463 | 0.0828 |
| 0.0135 | 60.99 | 2657 | 0.7818 | 0.8477 | 0.2795 | 1.3289 | 0.8477 | 0.8363 | 0.1466 | 0.0828 |
| 0.0135 | 61.98 | 2700 | 0.7847 | 0.8420 | 0.2805 | 1.3308 | 0.8420 | 0.8308 | 0.1418 | 0.0830 |
| 0.0135 | 62.99 | 2744 | 0.7851 | 0.8448 | 0.2782 | 1.3650 | 0.8448 | 0.8344 | 0.1411 | 0.0834 |
| 0.0135 | 64.0 | 2788 | 0.7925 | 0.8420 | 0.2829 | 1.4383 | 0.8420 | 0.8319 | 0.1425 | 0.0821 |
| 0.0135 | 64.99 | 2831 | 0.7959 | 0.8448 | 0.2826 | 1.4130 | 0.8448 | 0.8353 | 0.1431 | 0.0826 |
| 0.0135 | 66.0 | 2875 | 0.7989 | 0.8420 | 0.2821 | 1.4040 | 0.8420 | 0.8285 | 0.1446 | 0.0833 |
| 0.0135 | 66.98 | 2918 | 0.7996 | 0.8477 | 0.2807 | 1.3296 | 0.8477 | 0.8363 | 0.1464 | 0.0837 |
| 0.0135 | 67.99 | 2962 | 0.8042 | 0.8448 | 0.2824 | 1.3637 | 0.8448 | 0.8344 | 0.1434 | 0.0837 |
| 0.0097 | 68.98 | 3005 | 0.8095 | 0.8391 | 0.2845 | 1.3635 | 0.8391 | 0.8275 | 0.1468 | 0.0835 |
| 0.0097 | 69.99 | 3049 | 0.8073 | 0.8448 | 0.2824 | 1.3640 | 0.8448 | 0.8344 | 0.1413 | 0.0833 |
| 0.0097 | 70.98 | 3092 | 0.8140 | 0.8477 | 0.2834 | 1.3617 | 0.8477 | 0.8363 | 0.1444 | 0.0837 |
| 0.0097 | 71.99 | 3136 | 0.8152 | 0.8420 | 0.2842 | 1.4009 | 0.8420 | 0.8277 | 0.1439 | 0.0840 |
| 0.0097 | 73.0 | 3180 | 0.8163 | 0.8391 | 0.2858 | 1.4029 | 0.8391 | 0.8246 | 0.1482 | 0.0836 |
| 0.0097 | 73.99 | 3223 | 0.8192 | 0.8391 | 0.2844 | 1.3644 | 0.8391 | 0.8240 | 0.1475 | 0.0843 |
| 0.0097 | 75.0 | 3267 | 0.8225 | 0.8448 | 0.2836 | 1.3593 | 0.8448 | 0.8344 | 0.1473 | 0.0847 |
| 0.0097 | 75.98 | 3310 | 0.8267 | 0.8362 | 0.2859 | 1.3642 | 0.8362 | 0.8207 | 0.1473 | 0.0840 |
| 0.0097 | 76.99 | 3354 | 0.8275 | 0.8391 | 0.2847 | 1.3618 | 0.8391 | 0.8240 | 0.1450 | 0.0849 |
| 0.0097 | 77.98 | 3397 | 0.8325 | 0.8362 | 0.2879 | 1.3686 | 0.8362 | 0.8207 | 0.1491 | 0.0843 |
| 0.0097 | 78.99 | 3441 | 0.8389 | 0.8448 | 0.2885 | 1.3629 | 0.8448 | 0.8329 | 0.1504 | 0.0833 |
| 0.0097 | 80.0 | 3485 | 0.8420 | 0.8420 | 0.2887 | 1.3610 | 0.8420 | 0.8261 | 0.1458 | 0.0837 |
| 0.0073 | 80.99 | 3528 | 0.8452 | 0.8362 | 0.2900 | 1.4064 | 0.8362 | 0.8221 | 0.1488 | 0.0833 |
| 0.0073 | 82.0 | 3572 | 0.8492 | 0.8362 | 0.2898 | 1.4076 | 0.8362 | 0.8221 | 0.1500 | 0.0837 |
| 0.0073 | 82.98 | 3615 | 0.8478 | 0.8362 | 0.2895 | 1.3609 | 0.8362 | 0.8207 | 0.1485 | 0.0847 |
| 0.0073 | 83.99 | 3659 | 0.8483 | 0.8391 | 0.2880 | 1.3622 | 0.8391 | 0.8243 | 0.1480 | 0.0842 |
| 0.0073 | 84.98 | 3702 | 0.8534 | 0.8420 | 0.2892 | 1.3609 | 0.8420 | 0.8261 | 0.1468 | 0.0843 |
| 0.0073 | 85.99 | 3746 | 0.8547 | 0.8333 | 0.2898 | 1.4028 | 0.8333 | 0.8186 | 0.1513 | 0.0846 |
| 0.0073 | 86.98 | 3789 | 0.8618 | 0.8391 | 0.2906 | 1.3597 | 0.8391 | 0.8243 | 0.1445 | 0.0846 |
| 0.0073 | 87.99 | 3833 | 0.8594 | 0.8420 | 0.2885 | 1.3265 | 0.8420 | 0.8311 | 0.1462 | 0.0848 |
| 0.0073 | 89.0 | 3877 | 0.8669 | 0.8391 | 0.2911 | 1.3592 | 0.8391 | 0.8243 | 0.1471 | 0.0843 |
| 0.0073 | 89.99 | 3920 | 0.8664 | 0.8391 | 0.2901 | 1.3597 | 0.8391 | 0.8243 | 0.1468 | 0.0852 |
| 0.0073 | 91.0 | 3964 | 0.8678 | 0.8420 | 0.2905 | 1.3253 | 0.8420 | 0.8296 | 0.1462 | 0.0854 |
| 0.0057 | 91.98 | 4007 | 0.8719 | 0.8391 | 0.2909 | 1.3585 | 0.8391 | 0.8243 | 0.1475 | 0.0853 |
| 0.0057 | 92.99 | 4051 | 0.8768 | 0.8391 | 0.2930 | 1.3595 | 0.8391 | 0.8243 | 0.1493 | 0.0852 |
| 0.0057 | 93.98 | 4094 | 0.8785 | 0.8333 | 0.2928 | 1.4034 | 0.8333 | 0.8203 | 0.1529 | 0.0849 |
| 0.0057 | 94.99 | 4138 | 0.8859 | 0.8333 | 0.2942 | 1.3684 | 0.8333 | 0.8183 | 0.1543 | 0.0844 |
| 0.0057 | 96.0 | 4182 | 0.8839 | 0.8362 | 0.2937 | 1.3597 | 0.8362 | 0.8221 | 0.1497 | 0.0852 |
| 0.0057 | 96.99 | 4225 | 0.8864 | 0.8333 | 0.2940 | 1.4012 | 0.8333 | 0.8203 | 0.1532 | 0.0850 |
| 0.0057 | 98.0 | 4269 | 0.8879 | 0.8362 | 0.2941 | 1.3607 | 0.8362 | 0.8221 | 0.1504 | 0.0849 |
| 0.0057 | 98.98 | 4312 | 0.8921 | 0.8333 | 0.2954 | 1.3609 | 0.8333 | 0.8183 | 0.1521 | 0.0851 |
| 0.0057 | 99.99 | 4356 | 0.8949 | 0.8391 | 0.2945 | 1.3575 | 0.8391 | 0.8243 | 0.1491 | 0.0854 |
| 0.0057 | 100.98 | 4399 | 0.8945 | 0.8362 | 0.2945 | 1.3591 | 0.8362 | 0.8221 | 0.1500 | 0.0856 |
| 0.0057 | 101.99 | 4443 | 0.8985 | 0.8333 | 0.2944 | 1.3599 | 0.8333 | 0.8183 | 0.1530 | 0.0854 |
| 0.0057 | 102.98 | 4486 | 0.8987 | 0.8391 | 0.2951 | 1.3586 | 0.8391 | 0.8246 | 0.1499 | 0.0850 |
| 0.0045 | 103.99 | 4530 | 0.9025 | 0.8362 | 0.2957 | 1.3592 | 0.8362 | 0.8221 | 0.1510 | 0.0857 |
| 0.0045 | 105.0 | 4574 | 0.9082 | 0.8305 | 0.2972 | 1.3625 | 0.8305 | 0.8165 | 0.1568 | 0.0852 |
| 0.0045 | 105.99 | 4617 | 0.9087 | 0.8362 | 0.2958 | 1.3579 | 0.8362 | 0.8221 | 0.1505 | 0.0858 |
| 0.0045 | 107.0 | 4661 | 0.9105 | 0.8305 | 0.2977 | 1.3619 | 0.8305 | 0.8165 | 0.1561 | 0.0844 |
| 0.0045 | 107.98 | 4704 | 0.9136 | 0.8305 | 0.2978 | 1.3994 | 0.8305 | 0.8165 | 0.1559 | 0.0851 |
| 0.0045 | 108.99 | 4748 | 0.9148 | 0.8391 | 0.2968 | 1.3573 | 0.8391 | 0.8243 | 0.1504 | 0.0856 |
| 0.0045 | 109.98 | 4791 | 0.9188 | 0.8333 | 0.2974 | 1.3569 | 0.8333 | 0.8183 | 0.1532 | 0.0850 |
| 0.0045 | 110.99 | 4835 | 0.9164 | 0.8362 | 0.2959 | 1.3595 | 0.8362 | 0.8221 | 0.1507 | 0.0857 |
| 0.0045 | 112.0 | 4879 | 0.9221 | 0.8333 | 0.2977 | 1.3573 | 0.8333 | 0.8183 | 0.1550 | 0.0857 |
| 0.0045 | 112.99 | 4922 | 0.9256 | 0.8305 | 0.2990 | 1.3599 | 0.8305 | 0.8165 | 0.1574 | 0.0852 |
| 0.0045 | 114.0 | 4966 | 0.9284 | 0.8305 | 0.2994 | 1.3610 | 0.8305 | 0.8165 | 0.1572 | 0.0848 |
| 0.0037 | 114.98 | 5009 | 0.9312 | 0.8333 | 0.2998 | 1.3565 | 0.8333 | 0.8183 | 0.1537 | 0.0857 |
| 0.0037 | 115.99 | 5053 | 0.9322 | 0.8333 | 0.2995 | 1.3583 | 0.8333 | 0.8183 | 0.1543 | 0.0852 |
| 0.0037 | 116.98 | 5096 | 0.9385 | 0.8305 | 0.3007 | 1.3593 | 0.8305 | 0.8165 | 0.1577 | 0.0852 |
| 0.0037 | 117.99 | 5140 | 0.9386 | 0.8305 | 0.3009 | 1.4329 | 0.8305 | 0.8165 | 0.1582 | 0.0851 |
| 0.0037 | 118.98 | 5183 | 0.9386 | 0.8333 | 0.2996 | 1.3570 | 0.8333 | 0.8183 | 0.1542 | 0.0855 |
| 0.0037 | 119.99 | 5227 | 0.9406 | 0.8333 | 0.2995 | 1.3554 | 0.8333 | 0.8183 | 0.1540 | 0.0848 |
| 0.0037 | 121.0 | 5271 | 0.9442 | 0.8305 | 0.3006 | 1.3589 | 0.8305 | 0.8165 | 0.1570 | 0.0849 |
| 0.0037 | 121.99 | 5314 | 0.9435 | 0.8333 | 0.3000 | 1.3551 | 0.8333 | 0.8183 | 0.1546 | 0.0855 |
| 0.0037 | 123.0 | 5358 | 0.9456 | 0.8333 | 0.2996 | 1.3550 | 0.8333 | 0.8183 | 0.1544 | 0.0848 |
| 0.0037 | 123.98 | 5401 | 0.9490 | 0.8333 | 0.3008 | 1.3561 | 0.8333 | 0.8183 | 0.1547 | 0.0850 |
| 0.0037 | 124.99 | 5445 | 0.9500 | 0.8333 | 0.3011 | 1.3592 | 0.8333 | 0.8183 | 0.1551 | 0.0846 |
| 0.0037 | 125.98 | 5488 | 0.9513 | 0.8333 | 0.3003 | 1.3549 | 0.8333 | 0.8183 | 0.1544 | 0.0845 |
| 0.0031 | 126.99 | 5532 | 0.9575 | 0.8305 | 0.3024 | 1.3580 | 0.8305 | 0.8165 | 0.1581 | 0.0849 |
| 0.0031 | 128.0 | 5576 | 0.9593 | 0.8305 | 0.3025 | 1.4028 | 0.8305 | 0.8165 | 0.1591 | 0.0851 |
| 0.0031 | 128.99 | 5619 | 0.9594 | 0.8305 | 0.3021 | 1.3619 | 0.8305 | 0.8165 | 0.1579 | 0.0849 |
| 0.0031 | 130.0 | 5663 | 0.9628 | 0.8305 | 0.3025 | 1.3589 | 0.8305 | 0.8165 | 0.1587 | 0.0847 |
| 0.0031 | 130.98 | 5706 | 0.9652 | 0.8305 | 0.3031 | 1.3599 | 0.8305 | 0.8165 | 0.1593 | 0.0844 |
| 0.0031 | 131.99 | 5750 | 0.9646 | 0.8362 | 0.3005 | 1.3353 | 0.8362 | 0.8205 | 0.1520 | 0.0851 |
| 0.0031 | 132.98 | 5793 | 0.9658 | 0.8333 | 0.3021 | 1.3562 | 0.8333 | 0.8183 | 0.1555 | 0.0849 |
| 0.0031 | 133.99 | 5837 | 0.9698 | 0.8333 | 0.3023 | 1.3545 | 0.8333 | 0.8183 | 0.1554 | 0.0845 |
| 0.0031 | 134.98 | 5880 | 0.9716 | 0.8333 | 0.3032 | 1.3559 | 0.8333 | 0.8183 | 0.1555 | 0.0852 |
| 0.0031 | 135.99 | 5924 | 0.9736 | 0.8305 | 0.3037 | 1.3624 | 0.8305 | 0.8165 | 0.1584 | 0.0849 |
| 0.0031 | 137.0 | 5968 | 0.9760 | 0.8333 | 0.3039 | 1.3575 | 0.8333 | 0.8183 | 0.1551 | 0.0845 |
| 0.0026 | 137.99 | 6011 | 0.9789 | 0.8305 | 0.3041 | 1.3569 | 0.8305 | 0.8165 | 0.1592 | 0.0848 |
| 0.0026 | 139.0 | 6055 | 0.9801 | 0.8305 | 0.3040 | 1.3574 | 0.8305 | 0.8165 | 0.1598 | 0.0854 |
| 0.0026 | 139.98 | 6098 | 0.9806 | 0.8333 | 0.3035 | 1.3552 | 0.8333 | 0.8183 | 0.1557 | 0.0852 |
| 0.0026 | 140.99 | 6142 | 0.9835 | 0.8333 | 0.3041 | 1.3574 | 0.8333 | 0.8183 | 0.1564 | 0.0846 |
| 0.0026 | 141.98 | 6185 | 0.9838 | 0.8333 | 0.3037 | 1.3549 | 0.8333 | 0.8183 | 0.1557 | 0.0849 |
| 0.0026 | 142.99 | 6229 | 0.9872 | 0.8333 | 0.3044 | 1.3544 | 0.8333 | 0.8183 | 0.1557 | 0.0851 |
| 0.0026 | 144.0 | 6273 | 0.9900 | 0.8305 | 0.3056 | 1.3654 | 0.8305 | 0.8165 | 0.1597 | 0.0847 |
| 0.0026 | 144.99 | 6316 | 0.9907 | 0.8333 | 0.3049 | 1.3551 | 0.8333 | 0.8183 | 0.1565 | 0.0854 |
| 0.0026 | 146.0 | 6360 | 0.9896 | 0.8333 | 0.3044 | 1.3569 | 0.8333 | 0.8183 | 0.1563 | 0.0843 |
| 0.0026 | 146.98 | 6403 | 0.9938 | 0.8333 | 0.3053 | 1.3550 | 0.8333 | 0.8183 | 0.1562 | 0.0844 |
| 0.0026 | 147.99 | 6447 | 0.9962 | 0.8305 | 0.3056 | 1.3615 | 0.8305 | 0.8165 | 0.1594 | 0.0844 |
| 0.0026 | 148.98 | 6490 | 0.9954 | 0.8305 | 0.3051 | 1.3601 | 0.8305 | 0.8165 | 0.1590 | 0.0847 |
| 0.0022 | 149.99 | 6534 | 0.9961 | 0.8333 | 0.3043 | 1.3550 | 0.8333 | 0.8183 | 0.1554 | 0.0847 |
| 0.0022 | 150.98 | 6577 | 1.0026 | 0.8333 | 0.3059 | 1.3555 | 0.8333 | 0.8183 | 0.1563 | 0.0853 |
| 0.0022 | 151.99 | 6621 | 1.0004 | 0.8333 | 0.3049 | 1.3544 | 0.8333 | 0.8183 | 0.1566 | 0.0847 |
| 0.0022 | 153.0 | 6665 | 1.0024 | 0.8305 | 0.3058 | 1.3606 | 0.8305 | 0.8165 | 0.1595 | 0.0846 |
| 0.0022 | 153.99 | 6708 | 1.0054 | 0.8305 | 0.3064 | 1.3598 | 0.8305 | 0.8165 | 0.1591 | 0.0848 |
| 0.0022 | 155.0 | 6752 | 1.0053 | 0.8333 | 0.3054 | 1.3548 | 0.8333 | 0.8183 | 0.1562 | 0.0845 |
| 0.0022 | 155.98 | 6795 | 1.0068 | 0.8333 | 0.3053 | 1.3548 | 0.8333 | 0.8183 | 0.1562 | 0.0846 |
| 0.0022 | 156.99 | 6839 | 1.0076 | 0.8333 | 0.3055 | 1.3551 | 0.8333 | 0.8183 | 0.1561 | 0.0844 |
| 0.0022 | 157.98 | 6882 | 1.0105 | 0.8333 | 0.3059 | 1.3546 | 0.8333 | 0.8183 | 0.1563 | 0.0845 |
| 0.0022 | 158.99 | 6926 | 1.0114 | 0.8333 | 0.3061 | 1.3555 | 0.8333 | 0.8183 | 0.1559 | 0.0851 |
| 0.0022 | 160.0 | 6970 | 1.0108 | 0.8333 | 0.3061 | 1.3586 | 0.8333 | 0.8183 | 0.1561 | 0.0848 |
| 0.002 | 160.99 | 7013 | 1.0129 | 0.8333 | 0.3064 | 1.3577 | 0.8333 | 0.8183 | 0.1560 | 0.0845 |
| 0.002 | 162.0 | 7057 | 1.0141 | 0.8333 | 0.3060 | 1.3542 | 0.8333 | 0.8183 | 0.1562 | 0.0845 |
| 0.002 | 162.98 | 7100 | 1.0150 | 0.8333 | 0.3063 | 1.3555 | 0.8333 | 0.8183 | 0.1563 | 0.0847 |
| 0.002 | 163.99 | 7144 | 1.0181 | 0.8305 | 0.3071 | 1.3616 | 0.8305 | 0.8165 | 0.1587 | 0.0847 |
| 0.002 | 164.98 | 7187 | 1.0197 | 0.8305 | 0.3073 | 1.3610 | 0.8305 | 0.8165 | 0.1585 | 0.0847 |
| 0.002 | 165.99 | 7231 | 1.0203 | 0.8333 | 0.3071 | 1.3566 | 0.8333 | 0.8183 | 0.1565 | 0.0846 |
| 0.002 | 166.98 | 7274 | 1.0214 | 0.8333 | 0.3070 | 1.3561 | 0.8333 | 0.8183 | 0.1564 | 0.0845 |
| 0.002 | 167.99 | 7318 | 1.0211 | 0.8333 | 0.3067 | 1.3558 | 0.8333 | 0.8183 | 0.1562 | 0.0846 |
| 0.002 | 169.0 | 7362 | 1.0255 | 0.8305 | 0.3077 | 1.3564 | 0.8305 | 0.8165 | 0.1592 | 0.0846 |
| 0.002 | 169.99 | 7405 | 1.0238 | 0.8333 | 0.3066 | 1.3535 | 0.8333 | 0.8183 | 0.1567 | 0.0844 |
| 0.002 | 171.0 | 7449 | 1.0258 | 0.8333 | 0.3075 | 1.3580 | 0.8333 | 0.8183 | 0.1562 | 0.0847 |
| 0.002 | 171.98 | 7492 | 1.0260 | 0.8333 | 0.3073 | 1.3594 | 0.8333 | 0.8183 | 0.1559 | 0.0846 |
| 0.0018 | 172.99 | 7536 | 1.0281 | 0.8305 | 0.3077 | 1.3584 | 0.8305 | 0.8165 | 0.1586 | 0.0847 |
| 0.0018 | 173.98 | 7579 | 1.0274 | 0.8333 | 0.3073 | 1.3577 | 0.8333 | 0.8183 | 0.1560 | 0.0851 |
| 0.0018 | 174.99 | 7623 | 1.0323 | 0.8305 | 0.3082 | 1.3577 | 0.8305 | 0.8165 | 0.1596 | 0.0848 |
| 0.0018 | 176.0 | 7667 | 1.0303 | 0.8333 | 0.3076 | 1.3579 | 0.8333 | 0.8183 | 0.1561 | 0.0846 |
| 0.0018 | 176.99 | 7710 | 1.0325 | 0.8333 | 0.3081 | 1.3567 | 0.8333 | 0.8183 | 0.1565 | 0.0845 |
| 0.0018 | 178.0 | 7754 | 1.0319 | 0.8333 | 0.3077 | 1.3569 | 0.8333 | 0.8183 | 0.1560 | 0.0847 |
| 0.0018 | 178.98 | 7797 | 1.0340 | 0.8333 | 0.3081 | 1.3568 | 0.8333 | 0.8183 | 0.1562 | 0.0847 |
| 0.0018 | 179.99 | 7841 | 1.0331 | 0.8333 | 0.3072 | 1.3550 | 0.8333 | 0.8183 | 0.1564 | 0.0847 |
| 0.0018 | 180.98 | 7884 | 1.0346 | 0.8333 | 0.3079 | 1.3563 | 0.8333 | 0.8183 | 0.1561 | 0.0847 |
| 0.0018 | 181.99 | 7928 | 1.0344 | 0.8333 | 0.3079 | 1.3577 | 0.8333 | 0.8183 | 0.1565 | 0.0847 |
| 0.0018 | 182.98 | 7971 | 1.0363 | 0.8333 | 0.3080 | 1.3556 | 0.8333 | 0.8183 | 0.1566 | 0.0850 |
| 0.0016 | 183.99 | 8015 | 1.0368 | 0.8333 | 0.3080 | 1.3569 | 0.8333 | 0.8183 | 0.1561 | 0.0847 |
| 0.0016 | 185.0 | 8059 | 1.0369 | 0.8333 | 0.3080 | 1.3563 | 0.8333 | 0.8183 | 0.1562 | 0.0847 |
| 0.0016 | 185.99 | 8102 | 1.0373 | 0.8333 | 0.3080 | 1.3565 | 0.8333 | 0.8183 | 0.1561 | 0.0850 |
| 0.0016 | 187.0 | 8146 | 1.0377 | 0.8333 | 0.3080 | 1.3568 | 0.8333 | 0.8183 | 0.1561 | 0.0846 |
| 0.0016 | 187.98 | 8189 | 1.0392 | 0.8333 | 0.3084 | 1.3577 | 0.8333 | 0.8183 | 0.1565 | 0.0846 |
| 0.0016 | 188.99 | 8233 | 1.0391 | 0.8333 | 0.3082 | 1.3564 | 0.8333 | 0.8183 | 0.1564 | 0.0848 |
| 0.0016 | 189.98 | 8276 | 1.0393 | 0.8333 | 0.3081 | 1.3561 | 0.8333 | 0.8183 | 0.1562 | 0.0847 |
| 0.0016 | 190.99 | 8320 | 1.0398 | 0.8333 | 0.3084 | 1.3582 | 0.8333 | 0.8183 | 0.1562 | 0.0846 |
| 0.0016 | 192.0 | 8364 | 1.0405 | 0.8333 | 0.3083 | 1.3558 | 0.8333 | 0.8183 | 0.1564 | 0.0847 |
| 0.0016 | 192.99 | 8407 | 1.0401 | 0.8333 | 0.3082 | 1.3558 | 0.8333 | 0.8183 | 0.1564 | 0.0847 |
| 0.0016 | 194.0 | 8451 | 1.0407 | 0.8333 | 0.3083 | 1.3564 | 0.8333 | 0.8183 | 0.1564 | 0.0847 |
| 0.0016 | 194.98 | 8494 | 1.0414 | 0.8333 | 0.3086 | 1.3573 | 0.8333 | 0.8183 | 0.1564 | 0.0847 |
| 0.0015 | 195.99 | 8538 | 1.0410 | 0.8333 | 0.3084 | 1.3567 | 0.8333 | 0.8183 | 0.1564 | 0.0848 |
| 0.0015 | 196.98 | 8581 | 1.0411 | 0.8333 | 0.3084 | 1.3568 | 0.8333 | 0.8183 | 0.1563 | 0.0846 |
| 0.0015 | 197.42 | 8600 | 1.0411 | 0.8333 | 0.3084 | 1.3568 | 0.8333 | 0.8183 | 0.1563 | 0.0847 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
NasimB/gpt2-concat-aochildes-mod-sub-rarity-all-no-cut-rev
|
NasimB
| 2023-07-12T22:56:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-12T20:57:12Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-aochildes-mod-sub-rarity-all-no-cut-rev
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-aochildes-mod-sub-rarity-all-no-cut-rev
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3218
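
Only the cross-entropy loss is reported; it corresponds to a perplexity of roughly exp(4.3218) ≈ 75. A minimal generation sketch (the prompt is illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "NasimB/gpt2-concat-aochildes-mod-sub-rarity-all-no-cut-rev"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

input_ids = tokenizer("Once upon a time", return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```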
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6977 | 0.29 | 500 | 5.6354 |
| 5.3398 | 0.59 | 1000 | 5.1922 |
| 4.9904 | 0.88 | 1500 | 4.9412 |
| 4.7171 | 1.17 | 2000 | 4.7994 |
| 4.5533 | 1.47 | 2500 | 4.6748 |
| 4.4487 | 1.76 | 3000 | 4.5745 |
| 4.3168 | 2.05 | 3500 | 4.4945 |
| 4.132 | 2.35 | 4000 | 4.4485 |
| 4.096 | 2.64 | 4500 | 4.3926 |
| 4.0654 | 2.93 | 5000 | 4.3391 |
| 3.8471 | 3.23 | 5500 | 4.3342 |
| 3.799 | 3.52 | 6000 | 4.3047 |
| 3.7884 | 3.81 | 6500 | 4.2746 |
| 3.665 | 4.11 | 7000 | 4.2788 |
| 3.5131 | 4.4 | 7500 | 4.2703 |
| 3.5095 | 4.69 | 8000 | 4.2576 |
| 3.4965 | 4.99 | 8500 | 4.2442 |
| 3.3238 | 5.28 | 9000 | 4.2613 |
| 3.3171 | 5.58 | 9500 | 4.2598 |
| 3.3159 | 5.87 | 10000 | 4.2586 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Peebranco/teste-pedro-branco
|
Peebranco
| 2023-07-12T22:53:38Z | 0 | 0 | null |
[
"pt",
"en",
"dataset:Open-Orca/OpenOrca",
"region:us"
] | null | 2023-07-12T22:52:52Z |
---
datasets:
- Open-Orca/OpenOrca
language:
- pt
- en
metrics:
- character
---
|
KingShmeeky/KingshmeekyRVC
|
KingShmeeky
| 2023-07-12T22:43:21Z | 0 | 0 | null |
[
"music",
"en",
"license:openrail",
"region:us"
] | null | 2023-07-12T22:30:27Z |
---
license: openrail
language:
- en
tags:
- music
---
|
whywynn/q-Taxi-v3
|
whywynn
| 2023-07-12T22:34:05Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-12T21:52:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="whywynn/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
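
A short evaluation sketch that recomputes a mean reward in the same format as the reported metric, under the assumption that the pickled checkpoint is a course-style dict with a `qtable` array:

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="whywynn/q-Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
episode_rewards = []
for _ in range(100):  # the number of evaluation episodes is arbitrary here
    state, _ = env.reset()
    done, total = False, 0.0
    while not done:
        action = int(np.argmax(model["qtable"][state]))  # greedy policy (assumed key name)
        state, reward, terminated, truncated, _ = env.step(action)
        total += reward
        done = terminated or truncated
    episode_rewards.append(total)
print(f"mean_reward={np.mean(episode_rewards):.2f} +/- {np.std(episode_rewards):.2f}")
```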
|
komo-dono/collei_jp
|
komo-dono
| 2023-07-12T22:28:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-12T22:27:51Z |
---
license: openrail
language:
- ja
tags:
- music
---

Collei (Japanese voice), trained for 500 epochs.
|
KingShmeeky/Kingshmeeky
|
KingShmeeky
| 2023-07-12T22:28:42Z | 0 | 0 | null |
[
"music",
"en",
"license:openrail",
"region:us"
] | null | 2023-07-12T22:26:30Z |
---
license: openrail
language:
- en
tags:
- music
---
|