| modelId<br>(string, 5–139 chars) | author<br>(string, 2–42 chars) | last_modified<br>(timestamp[us, UTC]: 2020-02-15 11:33:14 – 2025-08-28 06:27:35) | downloads<br>(int64: 0 – 223M) | likes<br>(int64: 0 – 11.7k) | library_name<br>(string, 523 distinct values) | tags<br>(list, 1 – 4.05k items) | pipeline_tag<br>(string, 55 distinct values) | createdAt<br>(timestamp[us, UTC]: 2022-03-02 23:29:04 – 2025-08-28 06:27:22) | card<br>(string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
UniversityOfGalway/ppo-LunarLander-v2
|
UniversityOfGalway
| 2023-08-03T07:46:09Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T07:45:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.44 +/- 19.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual `<algo>-<env>.zip` naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub("UniversityOfGalway/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Josrf/q-FrozenLake-v1-4x4-noSlippery
|
Josrf
| 2023-08-03T07:45:19Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T07:45:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL course notebook
# (it downloads and unpickles the model dict).
model = load_from_hub(repo_id="Josrf/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
EXrRor3/ppo-Pyramids
|
EXrRor3
| 2023-08-03T07:43:05Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-03T07:42:58Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: EXrRor3/ppo-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
hw2942/Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-market-overview-open-000001SH-v1
|
hw2942
| 2023-08-03T07:41:47Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"longformer",
"text-classification",
"generated_from_trainer",
"finance",
"zh",
"base_model:IDEA-CCNL/Erlangshen-Longformer-110M",
"base_model:finetune:IDEA-CCNL/Erlangshen-Longformer-110M",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-03T06:38:45Z |
---
license: apache-2.0
base_model: IDEA-CCNL/Erlangshen-Longformer-110M
tags:
- generated_from_trainer
- finance
metrics:
- accuracy
model-index:
- name: >-
Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-market-overview-open-000001SH-v1
results: []
language:
- zh
widget:
- text: 惠誉下调美国3A主权信用评级次日,经济学家看轻评级下调影响,美国7月ADP新增就业超预期爆表。风险情绪被重创,标普、道指、小盘股齐跌约1%,纳指跌超2%创2月以来最差。 美国超导跌近29%。美债发行海啸即将来袭,10年期美债收益率一度创九个月新高,两年期美债收益率跌幅显著收窄。美元转涨刷新三周半高位。 商品普跌。油价跌超2%,美油跌穿80美元整数位。黄金失守1940美元至三周新低。 中国市场方面,美股时段,中概股指跌4%,理想汽车则再创历史新高,离岸人民币一度跌穿7.21元,最深跌270点至一周低位。沪指收跌近1%,医药、银行疲软,超导概念、地产、券商强势。恒指收跌2.47%,南向资金净流入4.02亿港元。
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Erlangshen-Longformer-110M-finetuning-wallstreetcn-morning-news-market-overview-open-000001SH-v1
This model is a fine-tuned version of [IDEA-CCNL/Erlangshen-Longformer-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-Longformer-110M) on the dataset of Wallstreetcn Morning News Market Overview with overnight index (000001.SH) movement labels.
It achieves the following results on the evaluation set (best checkpoint):
- Loss: 0.7186
- Accuracy: 0.6552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 38 | 0.6779 | 0.5862 |
| No log | 2.0 | 76 | 0.6771 | 0.5862 |
| No log | 3.0 | 114 | 0.6752 | 0.5862 |
| No log | 4.0 | 152 | 0.7186 | 0.6552 |
| No log | 5.0 | 190 | 0.7296 | 0.5862 |
| No log | 6.0 | 228 | 0.7948 | 0.5862 |
| No log | 7.0 | 266 | 0.9698 | 0.6207 |
| No log | 8.0 | 304 | 1.0275 | 0.5862 |
| No log | 9.0 | 342 | 1.0434 | 0.6207 |
| No log | 10.0 | 380 | 1.0603 | 0.5862 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
vignesh-trustt/llama-v2-7B
|
vignesh-trustt
| 2023-08-03T07:39:23Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-08-03T07:39:23Z |
---
license: bigscience-openrail-m
---
|
sgugger/finetuned-bert
|
sgugger
| 2023-08-03T07:24:14Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model_index:
- name: finetuned-bert
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metric:
name: F1
type: f1
value: 0.9125214408233276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3916
- Accuracy: 0.875
- F1: 0.9125
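For reference, a minimal inference sketch (the example sentence pair is illustrative, and the tokenizer is assumed to be bundled with the repository; otherwise use `bert-base-cased`):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("sgugger/finetuned-bert")
model = AutoModelForSequenceClassification.from_pretrained("sgugger/finetuned-bert")

# MRPC is a sentence-pair task, so both sentences are encoded together.
inputs = tokenizer("He said yes.", "He agreed to the proposal.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities over the two MRPC classes (typically not_equivalent / equivalent)
```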
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.581 | 1.0 | 230 | 0.4086 | 0.8260 | 0.8711 |
| 0.366 | 2.0 | 460 | 0.3758 | 0.8480 | 0.8963 |
| 0.2328 | 3.0 | 690 | 0.3916 | 0.875 | 0.9125 |
### Framework versions
- Transformers 4.9.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 1.8.1.dev0
- Tokenizers 0.10.1
|
google/ddpm-cifar10-32
|
google
| 2023-08-03T07:24:08Z | 44,317 | 63 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"arxiv:2006.11239",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2022-06-16T12:53:22Z |
---
license: apache-2.0
tags:
- pytorch
- diffusers
- unconditional-image-generation
---
# Denoising Diffusion Probabilistic Models (DDPM)
**Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
**Authors**: Jonathan Ho, Ajay Jain, Pieter Abbeel
**Abstract**:
*We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.*
## Inference
**DDPM** models can use *discrete noise schedulers* such as:
- [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py)
- [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py)
- [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py)
for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest.
For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead.
See the following code:
```python
# !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = "google/ddpm-cifar10-32"
# load model and scheduler
ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference
# run pipeline in inference (sample random noise and denoise)
image = ddpm().images[0]
# save image
image.save("ddpm_generated_image.png")
```
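As a concrete example of the faster schedulers mentioned above, here is a minimal sketch using the DDIM pipeline (the step count of 50 is illustrative):
```python
from diffusers import DDIMPipeline

model_id = "google/ddpm-cifar10-32"

# Same checkpoint, but sampled with the DDIM scheduler in far fewer steps.
ddim = DDIMPipeline.from_pretrained(model_id)
image = ddim(num_inference_steps=50).images[0]
image.save("ddim_generated_image.png")
```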
For more in-detail information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb)
## Training
If you want to train your own model, please have a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
## Samples
1. 
2. 
3. 
4. 
|
Ellbendls/ppo-CartPole-v1
|
Ellbendls
| 2023-08-03T07:11:14Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T07:03:47Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -182.56 +/- 122.22
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Ellbendls/ppo-CartPole-v1',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
zohaib99k/Nous-Hermes-Llama2-8bit-GPTQ
|
zohaib99k
| 2023-08-03T07:07:37Z | 5 | 1 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"llama-2",
"self-instruct",
"distillation",
"synthetic instruction",
"en",
"license:other",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-08-03T06:06:58Z |
---
inference: false
language:
- en
license: other
model_type: llama
tags:
- llama-2
- self-instruct
- distillation
- synthetic instruction
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Nous Research's Nous Hermes Llama 2 13B GPTQ
These files are GPTQ model files for [Nous Research's Nous Hermes Llama 2 13B](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML)
* [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:
```
## Provided files
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | 128 | False | 7.26 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-8bit-128g-actorder_True | 8 | 128 | True | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-8bit-64g-actorder_True | 8 | 64 | True | 13.95 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |
| gptq-8bit-128g-actorder_False | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
| gptq-8bit--1g-actorder_True | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
## How to download from branches
- In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Nous-Hermes-Llama2-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Nous-Hermes-Llama2-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Nous-Hermes-Llama2-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Nous-Hermes-Llama2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
## How to use this GPTQ model from Python code
First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
`GITHUB_ACTIONS=true pip install auto-gptq`
Then try the following example code:
```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name_or_path = "TheBloke/Nous-Hermes-Llama2-GPTQ"
model_basename = "gptq_model-4bit-128g"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)
"""
To download from a specific branch, use the revision parameter, as in this example:
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
revision="gptq-4bit-32g-actorder_True",
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=False,
device="cuda:0",
quantize_config=None)
"""
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Compatibility
The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Nous Research's Nous Hermes Llama 2 13B
# Model Card: Nous-Hermes-Llama2-13b
Compute provided by our project sponsor Redmond AI, thank you! Follow RedmondAI on Twitter @RedmondAI.
## Model Description
Nous-Hermes-Llama2-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Emozilla leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.
This Hermes model uses the exact same dataset as Hermes on Llama-1. This is to ensure consistency between the old Hermes and new, for anyone who wanted to keep Hermes as similar to the old one, just more capable.
This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 4096 sequence length on an 8x a100 80GB DGX machine.
## Example Outputs:




## Model Training
The model was trained almost entirely on synthetic GPT-4 outputs. Curating high quality GPT-4 datasets enables incredibly high quality in knowledge, task completion, and style.
This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), and several others, detailed further below
## Collaborators
The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Emozilla, Huemin Art, and Redmond AI.
Special mention goes to @winglian for assisting in some of the training issues.
Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly.
Among the contributors of datasets:
- GPTeacher was made available by Teknium
- Wizard LM by nlpxucan
- Nous Research Instruct Dataset was provided by Karan4D and HueminArt.
- GPT4-LLM and Unnatural Instructions were provided by Microsoft
- Airoboros dataset by jondurbin
- Camel-AI's domain expert datasets are from Camel-AI
- CodeAlpaca dataset by Sahil 2801.
If anyone was left out, please open a thread in the community tab.
## Prompt Format
The model follows the Alpaca prompt format:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
or
```
### Instruction:
<prompt>
### Input:
<additional context>
### Response:
<leave a newline blank for model to respond>
```
## Benchmark Results
AGI-Eval
```
| Task |Version| Metric |Value | |Stderr|
|agieval_aqua_rat | 0|acc |0.2362|± |0.0267|
| | |acc_norm|0.2480|± |0.0272|
|agieval_logiqa_en | 0|acc |0.3425|± |0.0186|
| | |acc_norm|0.3472|± |0.0187|
|agieval_lsat_ar | 0|acc |0.2522|± |0.0287|
| | |acc_norm|0.2087|± |0.0269|
|agieval_lsat_lr | 0|acc |0.3510|± |0.0212|
| | |acc_norm|0.3627|± |0.0213|
|agieval_lsat_rc | 0|acc |0.4647|± |0.0305|
| | |acc_norm|0.4424|± |0.0303|
|agieval_sat_en | 0|acc |0.6602|± |0.0331|
| | |acc_norm|0.6165|± |0.0340|
|agieval_sat_en_without_passage| 0|acc |0.4320|± |0.0346|
| | |acc_norm|0.4272|± |0.0345|
|agieval_sat_math | 0|acc |0.2909|± |0.0307|
| | |acc_norm|0.2727|± |0.0301|
```
GPT-4All Benchmark Set
```
| Task |Version| Metric |Value | |Stderr|
|arc_challenge| 0|acc |0.5102|± |0.0146|
| | |acc_norm|0.5213|± |0.0146|
|arc_easy | 0|acc |0.7959|± |0.0083|
| | |acc_norm|0.7567|± |0.0088|
|boolq | 1|acc |0.8394|± |0.0064|
|hellaswag | 0|acc |0.6164|± |0.0049|
| | |acc_norm|0.8009|± |0.0040|
|openbookqa | 0|acc |0.3580|± |0.0215|
| | |acc_norm|0.4620|± |0.0223|
|piqa | 0|acc |0.7992|± |0.0093|
| | |acc_norm|0.8069|± |0.0092|
|winogrande | 0|acc |0.7127|± |0.0127|
```
BigBench Reasoning Test
```
| Task |Version| Metric |Value | |Stderr|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5526|± |0.0362|
|bigbench_date_understanding | 0|multiple_choice_grade|0.7344|± |0.0230|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.2636|± |0.0275|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.0195|± |0.0073|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2760|± |0.0200|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2100|± |0.0154|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4400|± |0.0287|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.2440|± |0.0192|
|bigbench_navigate | 0|multiple_choice_grade|0.4950|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.5570|± |0.0111|
|bigbench_ruin_names | 0|multiple_choice_grade|0.3728|± |0.0229|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1854|± |0.0123|
|bigbench_snarks | 0|multiple_choice_grade|0.6298|± |0.0360|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6156|± |0.0155|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3140|± |0.0147|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2032|± |0.0114|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1406|± |0.0083|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4400|± |0.0287|
```
These are the highest benchmarks Hermes has seen on every metric, achieving the following average scores:
- GPT4All benchmark average is now 70.0 - from 68.8 in Hermes-Llama1
- 0.3657 on BigBench, up from 0.328 on hermes-llama1
- 0.372 on AGIEval, up from 0.354 on Hermes-llama1
These benchmarks currently have us at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and 2nd place on Winogrande, comparing to GPT4all's benchmarking list, supplanting Hermes 1 for the new top position.
## Resources for Applied Use Cases:
For an example of a back and forth chatbot using huggingface transformers and discord, check out: https://github.com/teknium1/alpaca-discord
For an example of a roleplaying discord chatbot, check out this: https://github.com/teknium1/alpaca-roleplay-discordbot
## Future Plans
We plan to continue to iterate on both more high quality data, and new data filtering techniques to eliminate lower quality data going forward.
## Model Usage
The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions.
|
polejowska/detr-r50-cd45rb-8ah-6l
|
polejowska
| 2023-08-03T07:07:17Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cd45rb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-06-11T14:58:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cd45rb
model-index:
- name: detr-r50-cd45rb-8ah-6l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-r50-cd45rb-8ah-6l
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the cd45rb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5794
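A minimal inference sketch (the input image path is illustrative, and the image processor is assumed to be bundled with the repository; if it is not, the facebook/detr-resnet-50 processor can be used instead):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

processor = AutoImageProcessor.from_pretrained("polejowska/detr-r50-cd45rb-8ah-6l")
model = DetrForObjectDetection.from_pretrained("polejowska/detr-r50-cd45rb-8ah-6l")

image = Image.open("example_slide.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into thresholded detections in (x0, y0, x1, y1) pixel coordinates.
results = processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.5
)[0]
print(results["scores"], results["labels"], results["boxes"])
```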
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.4794 | 1.0 | 4606 | 1.8245 |
| 2.2469 | 2.0 | 9212 | 1.7593 |
| 2.1817 | 3.0 | 13818 | 1.7109 |
| 2.1418 | 4.0 | 18424 | 1.6976 |
| 2.118 | 5.0 | 23030 | 1.6695 |
| 2.1237 | 6.0 | 27636 | 1.6781 |
| 2.1025 | 7.0 | 32242 | 1.6574 |
| 2.0796 | 8.0 | 36848 | 1.6418 |
| 2.0672 | 9.0 | 41454 | 1.6333 |
| 2.0597 | 10.0 | 46060 | 1.6313 |
| 2.0948 | 11.0 | 50666 | 1.6546 |
| 2.0943 | 12.0 | 55272 | 1.6905 |
| 2.0819 | 13.0 | 59878 | 1.6430 |
| 2.0795 | 14.0 | 64484 | 1.6439 |
| 2.0566 | 15.0 | 69090 | 1.6449 |
| 2.0435 | 16.0 | 73696 | 1.6204 |
| 2.0375 | 17.0 | 78302 | 1.6195 |
| 2.032 | 18.0 | 82908 | 1.6128 |
| 2.0079 | 19.0 | 87514 | 1.6082 |
| 1.9985 | 20.0 | 92120 | 1.6037 |
| 1.9976 | 21.0 | 96726 | 1.6005 |
| 1.9887 | 22.0 | 101332 | 1.5969 |
| 1.9841 | 23.0 | 105938 | 1.5841 |
| 1.9763 | 24.0 | 110544 | 1.5826 |
| 1.9686 | 25.0 | 115150 | 1.5794 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Aspik101/vicuna-13b-v1.5-PL-lora_GGML
|
Aspik101
| 2023-08-03T06:52:00Z | 0 | 0 | null |
[
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"region:us"
] |
text-generation
| 2023-08-03T06:39:00Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
jordyvl/vit-base_rvl_cdip_symce
|
jordyvl
| 2023-08-03T06:43:14Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-01T15:31:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_rvl_cdip_symce
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl_cdip_symce
This model is a fine-tuned version of [jordyvl/vit-base_rvl-cdip](https://huggingface.co/jordyvl/vit-base_rvl-cdip) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6253
- Accuracy: 0.8982
- Brier Loss: 0.1796
- Nll: 1.1468
- F1 Micro: 0.8982
- F1 Macro: 0.8984
- Ece: 0.0846
- Aurc: 0.0197
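A minimal classification sketch (the image path is illustrative; the checkpoint is assumed to ship its image processor):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jordyvl/vit-base_rvl_cdip_symce")
# Returns the top predicted document classes with scores.
print(classifier("scanned_document.png", top_k=3))
```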
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 0.1665 | 1.0 | 2500 | 0.3898 | 0.8939 | 0.1621 | 1.1704 | 0.8939 | 0.8938 | 0.0463 | 0.0167 |
| 0.1439 | 2.0 | 5000 | 0.3927 | 0.8949 | 0.1602 | 1.1860 | 0.8949 | 0.8954 | 0.0506 | 0.0165 |
| 0.0889 | 3.0 | 7500 | 0.4389 | 0.8941 | 0.1684 | 1.1449 | 0.8941 | 0.8946 | 0.0637 | 0.0172 |
| 0.0574 | 4.0 | 10000 | 0.4870 | 0.8953 | 0.1741 | 1.1605 | 0.8953 | 0.8952 | 0.0719 | 0.0179 |
| 0.0372 | 5.0 | 12500 | 0.5259 | 0.8929 | 0.1792 | 1.1860 | 0.8929 | 0.8935 | 0.0775 | 0.0185 |
| 0.0225 | 6.0 | 15000 | 0.5579 | 0.8959 | 0.1784 | 1.1504 | 0.8959 | 0.8963 | 0.0799 | 0.0196 |
| 0.0126 | 7.0 | 17500 | 0.5905 | 0.8949 | 0.1811 | 1.1714 | 0.8949 | 0.8950 | 0.0836 | 0.0197 |
| 0.0081 | 8.0 | 20000 | 0.6011 | 0.8973 | 0.1791 | 1.1720 | 0.8973 | 0.8975 | 0.0828 | 0.0198 |
| 0.0048 | 9.0 | 22500 | 0.6198 | 0.8975 | 0.1800 | 1.1518 | 0.8975 | 0.8977 | 0.0847 | 0.0198 |
| 0.0038 | 10.0 | 25000 | 0.6253 | 0.8982 | 0.1796 | 1.1468 | 0.8982 | 0.8984 | 0.0846 | 0.0197 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Aspik101/vicuna-13b-v1.5-PL-lora_adapter_model
|
Aspik101
| 2023-08-03T06:39:00Z | 0 | 0 | null |
[
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"region:us"
] |
text-generation
| 2023-08-03T06:38:50Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
bh8648/distilbert-base-uncased-finetuned-clinc
|
bh8648
| 2023-08-03T06:37:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-01T08:24:47Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7665
- Accuracy: 0.9174
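A minimal intent-classification sketch (the example utterance is illustrative):
```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification", model="bh8648/distilbert-base-uncased-finetuned-clinc"
)
# clinc_oos ("plus" config) covers 150 in-scope intents plus an out-of-scope class.
print(intent_classifier("Please transfer $100 from checking to savings"))
```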
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2879 | 1.0 | 318 | 3.2752 | 0.7239 |
| 2.6117 | 2.0 | 636 | 1.8616 | 0.8368 |
| 1.5335 | 3.0 | 954 | 1.1454 | 0.8987 |
| 0.9993 | 4.0 | 1272 | 0.8479 | 0.9126 |
| 0.7853 | 5.0 | 1590 | 0.7665 | 0.9174 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
Karn07/engilsh_to_hindi_translation
|
Karn07
| 2023-08-03T06:29:10Z | 62 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-31T12:34:28Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: engilsh_to_hindi_translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# engilsh_to_hindi_translation
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
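A hedged usage sketch (the task prefix is not documented in this card, so "translate English to Hindi:" is an assumption, as is the example sentence):
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="Karn07/engilsh_to_hindi_translation")
# The training prefix is undocumented; this one mirrors the usual T5 convention.
print(translator("translate English to Hindi: How are you?", max_new_tokens=64))
```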
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
WeightWatcher/albert-large-v2-stsb
|
WeightWatcher
| 2023-08-03T06:16:09Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"en",
"dataset:glue",
"arxiv:1909.11942",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-02T23:30:23Z |
---
language:
- "en"
license: mit
datasets:
- glue
metrics:
- Classification accuracy
---
# Model Card for WeightWatcher/albert-large-v2-stsb
This model was finetuned on the GLUE/stsb task, based on the pretrained
albert-large-v2 model. Hyperparameters were (largely) taken from the following
publication, with some minor exceptions.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
## Model Details
### Model Description
- **Developed by:** https://huggingface.co/cdhinrichs
- **Model type:** Text Sequence Classification
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** https://huggingface.co/albert-large-v2
## Uses
Text classification, research and development.
### Out-of-Scope Use
Not intended for production use.
See https://huggingface.co/albert-large-v2
## Bias, Risks, and Limitations
See https://huggingface.co/albert-large-v2
### Recommendations
See https://huggingface.co/albert-large-v2
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AlbertForSequenceClassification
model = AlbertForSequenceClassification.from_pretrained("WeightWatcher/albert-large-v2-stsb")
```
## Training Details
### Training Data
See https://huggingface.co/datasets/glue#stsb
STSB is a classification task, and a part of the GLUE benchmark.
### Training Procedure
Adam optimization was used on the pretrained ALBERT model at
https://huggingface.co/albert-large-v2.
A checkpoint from MNLI was NOT used, differing from footnote 4 in,
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
#### Training Hyperparameters
Training hyperparameters, (Learning Rate, Batch Size, ALBERT dropout rate,
Classifier Dropout Rate, Warmup Steps, Training Steps,) were taken from Table
A.4 in,
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
Max sequence length (MSL) was set to 128, differing from the above.
## Evaluation
Classification accuracy is used to evaluate model performance.
### Testing Data, Factors & Metrics
#### Testing Data
See https://huggingface.co/datasets/glue#stsb
#### Metrics
Classification accuracy
### Results
Training Classification accuracy: 0.9971887550200803
Evaluation Classification accuracy: 0.8014440433212996
## Environmental Impact
The model was finetuned on a single user workstation with a single GPU. CO2
impact is expected to be minimal.
|
WeightWatcher/albert-large-v2-sst2
|
WeightWatcher
| 2023-08-03T06:15:47Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"en",
"dataset:glue",
"arxiv:1909.11942",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-02T23:29:41Z |
---
language:
- "en"
license: mit
datasets:
- glue
metrics:
- Classification accuracy
---
# Model Card for WeightWatcher/albert-large-v2-sst2
This model was finetuned on the GLUE/sst2 task, based on the pretrained
albert-large-v2 model. Hyperparameters were (largely) taken from the following
publication, with some minor exceptions.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
## Model Details
### Model Description
- **Developed by:** https://huggingface.co/cdhinrichs
- **Model type:** Text Sequence Classification
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** https://huggingface.co/albert-large-v2
## Uses
Text classification, research and development.
### Out-of-Scope Use
Not intended for production use.
See https://huggingface.co/albert-large-v2
## Bias, Risks, and Limitations
See https://huggingface.co/albert-large-v2
### Recommendations
See https://huggingface.co/albert-large-v2
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AlbertForSequenceClassification
model = AlbertForSequenceClassification.from_pretrained("WeightWatcher/albert-large-v2-sst2")
```
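To run an actual prediction, a minimal sketch follows (loading the tokenizer from the base albert-large-v2 checkpoint is an assumption; the example sentence is illustrative):
```python
import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification

tokenizer = AlbertTokenizer.from_pretrained("albert-large-v2")
model = AlbertForSequenceClassification.from_pretrained("WeightWatcher/albert-large-v2-sst2")

inputs = tokenizer("a charming and often affecting journey", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred)  # SST-2 convention: 0 = negative, 1 = positive
```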
## Training Details
### Training Data
See https://huggingface.co/datasets/glue#sst2
SST2 is a classification task, and a part of the GLUE benchmark.
### Training Procedure
Adam optimization was used on the pretrained ALBERT model at
https://huggingface.co/albert-large-v2.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
#### Training Hyperparameters
Training hyperparameters, (Learning Rate, Batch Size, ALBERT dropout rate,
Classifier Dropout Rate, Warmup Steps, Training Steps,) were taken from Table
A.4 in,
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
Max sequence length (MSL) was set to 128, differing from the above.
## Evaluation
Classification accuracy is used to evaluate model performance.
### Testing Data, Factors & Metrics
#### Testing Data
See https://huggingface.co/datasets/glue#sst2
#### Metrics
Classification accuracy
### Results
Training Classification accuracy: 0.9990794221146565
Evaluation Classification accuracy: 0.9461009174311926
## Environmental Impact
The model was finetuned on a single user workstation with a single GPU. CO2
impact is expected to be minimal.
|
WeightWatcher/albert-large-v2-rte
|
WeightWatcher
| 2023-08-03T06:15:13Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"en",
"dataset:glue",
"arxiv:1909.11942",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-02T23:00:55Z |
---
language:
- "en"
license: mit
datasets:
- glue
metrics:
- Classification accuracy
---
# Model Card for WeightWatcher/albert-large-v2-rte
This model was finetuned on the GLUE/rte task, based on the pretrained
albert-large-v2 model. Hyperparameters were (largely) taken from the following
publication, with some minor exceptions.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
## Model Details
### Model Description
- **Developed by:** https://huggingface.co/cdhinrichs
- **Model type:** Text Sequence Classification
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** https://huggingface.co/albert-large-v2
## Uses
Text classification, research and development.
### Out-of-Scope Use
Not intended for production use.
See https://huggingface.co/albert-large-v2
## Bias, Risks, and Limitations
See https://huggingface.co/albert-large-v2
### Recommendations
See https://huggingface.co/albert-large-v2
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AlbertForSequenceClassification
model = AlbertForSequenceClassification.from_pretrained("WeightWatcher/albert-large-v2-rte")
```
## Training Details
### Training Data
See https://huggingface.co/datasets/glue#rte
RTE is a classification task, and a part of the GLUE benchmark.
### Training Procedure
Adam optimization was used on the pretrained ALBERT model at
https://huggingface.co/albert-large-v2.
A checkpoint from MNLI was NOT used, differing from footnote 4 in,
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
#### Training Hyperparameters
Training hyperparameters, (Learning Rate, Batch Size, ALBERT dropout rate,
Classifier Dropout Rate, Warmup Steps, Training Steps,) were taken from Table
A.4 in,
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
Max sequence length (MSL) was set to 128, differing from the above.
## Evaluation
Classification accuracy is used to evaluate model performance.
### Testing Data, Factors & Metrics
#### Testing Data
See https://huggingface.co/datasets/glue#rte
#### Metrics
Classification accuracy
### Results
Training Classification accuracy: 0.9971887550200803
Evaluation Classification accuracy: 0.8014440433212996
## Environmental Impact
The model was finetuned on a single user workstation with a single GPU. CO2
impact is expected to be minimal.
|
WeightWatcher/albert-large-v2-qnli
|
WeightWatcher
| 2023-08-03T06:14:15Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"en",
"dataset:glue",
"arxiv:1909.11942",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-02T21:00:34Z |
---
language:
- "en"
license: mit
datasets:
- glue
metrics:
- Classification accuracy
---
# Model Card for WeightWatcher/albert-large-v2-qnli
This model was finetuned on the GLUE/qnli task, based on the pretrained
albert-large-v2 model. Hyperparameters were (largely) taken from the following
publication, with some minor exceptions.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
## Model Details
### Model Description
- **Developed by:** https://huggingface.co/cdhinrichs
- **Model type:** Text Sequence Classification
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** https://huggingface.co/albert-large-v2
## Uses
Text classification, research and development.
### Out-of-Scope Use
Not intended for production use.
See https://huggingface.co/albert-large-v2
## Bias, Risks, and Limitations
See https://huggingface.co/albert-large-v2
### Recommendations
See https://huggingface.co/albert-large-v2
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AlbertForSequenceClassification
model = AlbertForSequenceClassification.from_pretrained("WeightWatcher/albert-large-v2-qnli")
```
## Training Details
### Training Data
See https://huggingface.co/datasets/glue#qnli
QNLI is a classification task, and a part of the GLUE benchmark.
### Training Procedure
Adam optimization was used on the pretrained ALBERT model at
https://huggingface.co/albert-large-v2.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
#### Training Hyperparameters
Training hyperparameters, (Learning Rate, Batch Size, ALBERT dropout rate,
Classifier Dropout Rate, Warmup Steps, Training Steps,) were taken from Table
A.4 in,
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://arxiv.org/abs/1909.11942
Max sequence length (MSL) was set to 128, differing from the above.
## Evaluation
Classification accuracy is used to evaluate model performance.
### Testing Data, Factors & Metrics
#### Testing Data
See https://huggingface.co/datasets/glue#qnli
#### Metrics
Classification accuracy
### Results
Training Classification accuracy: 0.9997613205655748
Evaluation Classification accuracy: 0.9194581731649277
## Environmental Impact
The model was finetuned on a single user workstation with a single GPU. CO2
impact is expected to be minimal.
|
NasimB/cbt-gutenberg_fixed-notm-log-rarity-seed
|
NasimB
| 2023-08-03T06:04:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-03T03:58:20Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-gutenberg_fixed-notm-log-rarity-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-gutenberg_fixed-notm-log-rarity-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1166
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3486 | 0.29 | 500 | 5.3406 |
| 5.0309 | 0.58 | 1000 | 4.9285 |
| 4.7101 | 0.87 | 1500 | 4.6879 |
| 4.4621 | 1.17 | 2000 | 4.5500 |
| 4.2913 | 1.46 | 2500 | 4.4298 |
| 4.2026 | 1.75 | 3000 | 4.3310 |
| 4.0829 | 2.04 | 3500 | 4.2546 |
| 3.8956 | 2.33 | 4000 | 4.2130 |
| 3.8692 | 2.62 | 4500 | 4.1583 |
| 3.8292 | 2.91 | 5000 | 4.1132 |
| 3.6507 | 3.21 | 5500 | 4.1047 |
| 3.5891 | 3.5 | 6000 | 4.0753 |
| 3.5712 | 3.79 | 6500 | 4.0432 |
| 3.4932 | 4.08 | 7000 | 4.0421 |
| 3.3212 | 4.37 | 7500 | 4.0385 |
| 3.3167 | 4.66 | 8000 | 4.0261 |
| 3.3035 | 4.95 | 8500 | 4.0122 |
| 3.1681 | 5.24 | 9000 | 4.0240 |
| 3.1387 | 5.54 | 9500 | 4.0244 |
| 3.1401 | 5.83 | 10000 | 4.0231 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
LeoMoyu/llama2-qlora-finetunined-french
|
LeoMoyu
| 2023-08-03T05:57:27Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-03T05:57:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
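For reference, a minimal sketch of the equivalent `transformers` quantization config (the base model this adapter was trained on is not recorded in the card, so it is omitted here):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```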
### Framework versions
- PEFT 0.5.0.dev0
|
SearchUnify-ML/xgen-7b-8k-open-instruct-gptq
|
SearchUnify-ML
| 2023-08-03T05:43:18Z | 6 | 4 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"en",
"dataset:VMware/open-instruct-v1-oasst-dolly-hhrlhf",
"license:cc",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-07-04T11:54:09Z |
---
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
pipeline_tag: text-generation
inference: false
---
# SearchUnify/xgen-7b-8k-open-instruct-gptq
With its industry-first robust LLM Integrations across its suite of products ([Cognitive Search](https://www.searchunify.com/products/cognitive-search/?utm_source=link&utm_medium=ml-model&utm_campaign=hugging-face), [SUVA](https://www.searchunify.com/products/suva/), [Knowbler](https://www.searchunify.com/products/knowbler/?utm_source=link&utm_medium=ml-model&utm_campaign=hugging-face), [Escalation Predictor](https://applications.searchunify.com/escalation-predictor?utm_source=link&utm_medium=ml-model&utm_campaign=hugging-face), [Agent Helper](https://applications.searchunify.com/agent-helper?utm_source=link&utm_medium=ml-model&utm_campaign=hugging-face) and [Community Helper](https://applications.searchunify.com/community-helper?utm_source=link&utm_medium=ml-model&utm_campaign=hugging-face)) coupled with the federated retrieval augmented generation (FRAG) architecture, [SearchUnify's unified cognitive platform](https://www.searchunify.com/?utm_source=link&utm_medium=ml-model&utm_campaign=hugging-face) fetches relevant information or responses to deliver more accurate and contextually appropriate support and self-service experiences.
Leveraging the state-of-the-art GPTQ quantization method, SearchUnify optimized the XGen-7B Model for low memory footprint and rapid response generation.
These are GPTQ 4bit model files for [VMWare's XGEN 7B 8K Open Instruct](https://huggingface.co/VMware/xgen-7b-8k-open-instruct). It is the result of quantizing to 4bit using GPTQ-for-LLaMa.
# How to use this GPTQ model from Python code
First, make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
```
pip install auto-gptq
```
Second, install tiktoken in order to use the tokenizer
```
pip install tiktoken
```
```
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM
model_name_or_path = "SearchUnify-ML/xgen-7b-8k-open-instruct-gptq"
model_basename = "gptq_model-4bit-128g"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path,
use_fast=False,
trust_remote_code=True)
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
model_basename=model_basename,
use_safetensors=False,
trust_remote_code=True,
device="cuda:0",
use_triton=use_triton)
# Note: check the prompt template is correct for this model.
prompt = "Explain the rules of field hockey to a novice."
prompt_template = f'''### Instruction: {prompt}
### Response:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.3, max_new_tokens=512)
print(f"\n\n {tokenizer.decode(output[0]).split('### Response:')[1]}")
```
|
Ravnoor1/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
|
Ravnoor1
| 2023-08-03T05:33:15Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-03T05:33:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
bharadwajkg/finetune-sd2-1-planogram-lora-nocrop-data7
|
bharadwajkg
| 2023-08-03T05:30:53Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-02T16:59:15Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - bharadwajkg/finetune-sd2-1-planogram-lora-nocrop-data7
These are LoRA adaption weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on the bharadwajkg/planogram-sd-data7 dataset. You can find some example images in the following.




|
Mgollen/q-Taxi-v3
|
Mgollen
| 2023-08-03T05:15:34Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T05:15:33Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.67
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Mgollen/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
noavm/layoutlmv3-final-v6-BI-FINAL
|
noavm
| 2023-08-03T04:53:24Z | 73 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-31T13:17:34Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-final-v6-BI-FINAL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-final-v6-BI-FINAL
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4010
- Precision: 0.944
- Recall: 0.9402
- F1: 0.9421
- Accuracy: 0.8895
## Model description
More information needed
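In the absence of further details, here is a minimal, hedged inference sketch. Assumptions: the processor is taken from the base `microsoft/layoutlmv3-base` checkpoint with built-in OCR (requires `pytesseract` and the Tesseract binary), and `document.png` is a placeholder path.
```python
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

# Processor from the base checkpoint (assumption); apply_ocr=True runs Tesseract on the image.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained("noavm/layoutlmv3-final-v6-BI-FINAL")

image = Image.open("document.png").convert("RGB")  # placeholder document image
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)
predicted_ids = outputs.logits.argmax(-1).squeeze().tolist()
labels = [model.config.id2label[i] for i in predicted_ids]
print(labels[:20])
```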
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 2.56 | 100 | 2.1657 | 0.5961 | 0.5232 | 0.5573 | 0.5357 |
| No log | 5.13 | 200 | 1.0442 | 0.8437 | 0.8353 | 0.8395 | 0.7492 |
| No log | 7.69 | 300 | 0.6743 | 0.9039 | 0.8997 | 0.9018 | 0.8339 |
| No log | 10.26 | 400 | 0.5301 | 0.9219 | 0.9177 | 0.9198 | 0.8553 |
| 1.3127 | 12.82 | 500 | 0.4766 | 0.9340 | 0.9296 | 0.9318 | 0.8673 |
| 1.3127 | 15.38 | 600 | 0.4388 | 0.9414 | 0.9382 | 0.9398 | 0.8762 |
| 1.3127 | 17.95 | 700 | 0.4329 | 0.9311 | 0.9250 | 0.9280 | 0.8705 |
| 1.3127 | 20.51 | 800 | 0.4109 | 0.9385 | 0.9323 | 0.9354 | 0.8850 |
| 1.3127 | 23.08 | 900 | 0.4024 | 0.9434 | 0.9402 | 0.9418 | 0.8901 |
| 0.3056 | 25.64 | 1000 | 0.4010 | 0.944 | 0.9402 | 0.9421 | 0.8895 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DunnBC22/mit-b0-Image_segmentation-Carvana_Image_Masking
|
DunnBC22
| 2023-08-03T04:50:05Z | 0 | 1 | null |
[
"pytorch",
"tensorboard",
"generated_from_trainer",
"Image_Masking",
"image-segmentation",
"en",
"license:other",
"region:us"
] |
image-segmentation
| 2023-08-01T03:37:29Z |
---
license: other
tags:
- generated_from_trainer
- Image_Masking
model-index:
- name: mit-b0-Image_segmentation-Carvana_Image_Masking
results: []
language:
- en
metrics:
- mean_iou
pipeline_tag: image-segmentation
---
# mit-b0-Image_segmentation-Carvana_Image_Masking
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0).
It achieves the following results on the evaluation set:
- Loss: 0.0070
- Mean Iou: 0.9917
- Mean Accuracy: 0.9962
- Overall Accuracy: 0.9972
- Per Category Iou
- Segment 0: 0.9964996655500316
- Segment 1: 0.9868763925617403
- Per Category Accuracy
- Segment 0: 0.9980006976075766
- Segment 1: 0.994318466698934
## Model description
For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Segmentation/Carvana%20Image%20Masking/Carvana%20Image%20Masking%20-%20Image%20Segmentation%20with%20LoRA.ipynb
## Intended uses & limitations
I used this to improve my skillset. I thank all of the authors of the different technologies and dataset(s) for their contributions that have made this possible.
Please make sure to properly cite the authors of the different technologies and dataset(s) as they absolutely deserve credit for their contributions.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/ipythonx/carvana-image-masking-png
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Segment 0 Per Category Iou | Segment 1 Per Category Iou | Segment 0 Per Category Accuracy | Segment 1 Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------:|:--------------------:|:-----------------:|:--------------------:|
| 0.0137 | 1.0 | 509 | 0.0113 | 0.9873 | 0.9942 | 0.9957 | 0.9946 | 0.9799 | 0.9969 | 0.9915 |
| 0.011 | 2.0 | 1018 | 0.0096 | 0.9889 | 0.9948 | 0.9963 | 0.9953 | 0.9826 | 0.9974 | 0.9922 |
| 0.0096 | 3.0 | 1527 | 0.0087 | 0.9899 | 0.9950 | 0.9966 | 0.9958 | 0.9841 | 0.9978 | 0.9922 |
| 0.0089 | 4.0 | 2036 | 0.0082 | 0.9904 | 0.9958 | 0.9968 | 0.9959 | 0.9848 | 0.9975 | 0.9941 |
| 0.0086 | 5.0 | 2545 | 0.0078 | 0.9907 | 0.9962 | 0.9969 | 0.9961 | 0.9853 | 0.9974 | 0.9951 |
| 0.0082 | 6.0 | 3054 | 0.0077 | 0.9908 | 0.9964 | 0.9969 | 0.9961 | 0.9855 | 0.9973 | 0.9956 |
| 0.0081 | 7.0 | 3563 | 0.0072 | 0.9914 | 0.9961 | 0.9971 | 0.9964 | 0.9864 | 0.9979 | 0.9944 |
| 0.0081 | 8.0 | 4072 | 0.0071 | 0.9915 | 0.9961 | 0.9972 | 0.9964 | 0.9866 | 0.9980 | 0.9942 |
| 0.0089 | 9.0 | 4581 | 0.0070 | 0.9916 | 0.9961 | 0.9972 | 0.9965 | 0.9868 | 0.9980 | 0.9941 |
| 0.0076 | 10.0 | 5090 | 0.0070 | 0.9917 | 0.9962 | 0.9972 | 0.9965 | 0.9869 | 0.9980 | 0.9943 |
* All values in the chart above are rounded to the nearest ten-thousandth.
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
AkitoP/picelcopter
|
AkitoP
| 2023-08-03T04:35:01Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T04:34:12Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: picelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 40.80 +/- 24.12
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nguyenhuuanh/Test
|
nguyenhuuanh
| 2023-08-03T04:34:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-03T04:34:07Z |
---
license: creativeml-openrail-m
---
|
DUOMO-Lab/TransGPT-v0
|
DUOMO-Lab
| 2023-08-03T04:21:07Z | 67 | 11 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"chatglm",
"zh",
"Text2Text-Generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-19T01:27:19Z |
---
title: TransGPT-7b
emoji: 📚
colorFrom: gray
colorTo: red
language:
- zh
tags:
- chatglm
- pytorch
- zh
- Text2Text-Generation
license: "other"
widget:
- text: "我想了解如何申请和更新驾驶证?"
---
# TransGPT
**Releasing the Chinese TransGPT (7B) model**
Test case:
|input_text|predict|
|:-- |:--- |
|我想了解如何申请和更新驾驶证?|你可以到当地的交通管理部门或者公安局办理相关手续。具体流程可以在官方网站上查询。|
# File checksums
```
md5sum ./*
```
```
e618653f90f163928316858e95bd54d1 ./config.json
b1eb3650cbc84466fed263a9f0dff5e2 ./generation_config.json
570159d90b39554713e9702b9107928a ./pytorch_model-00001-of-00002.bin
8788671a726d25b192134909fb825e0b ./pytorch_model-00002-of-00002.bin
604e0ba32b2cb7df8d8a3d13bddc93fe ./pytorch_model.bin.index.json
413c7f9a8a6517c52c937eed27f18847 ./special_tokens_map.json
2ba2be903e87d7471bbc413e041e70e8 ./tokenizer_config.json
39afcc4541e7931ef0d561ac6e216586 ./tokenizer.model
```
## Usage
Pass your input through the transformer model to get the generated sentence.
Install package:
```
pip install sentencepiece
pip install "transformers>=4.28.0"
```
```python
import torch
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM
def generate_prompt(text):
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{text}
### Response:"""
checkpoint="DUOMO-Lab/TransGPT-v0"
tokenizer = LlamaTokenizer.from_pretrained(checkpoint)
model = LlamaForCausalLM.from_pretrained(checkpoint).half().cuda()
model.eval()
text = '我想了解如何申请和更新驾驶证?'
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')
with torch.no_grad():
output_ids = model.generate(
input_ids=input_ids,
max_new_tokens=1024,
temperature=1,
top_k=20,
top_p=0.9,
repetition_penalty=1.15
).cuda()
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(text, '').strip())
```
output:
```shell
我想了解如何申请和更新驾驶证?
```
## Model source
Released merged model weights.
The HuggingFace-format weights (.bin files) can be used for:
- Training and inference with Transformers
- Building a UI with text-generation-webui
The PyTorch-format weights (.pth files) can be used for:
- Quantization and deployment with llama.cpp
Model files:
```
TransGPT
config.json
generation_config.json
pytorch_model-00001-of-00002.bin
pytorch_model-00002-of-00002.bin
pytorch_model.bin.index.json
special_tokens_map.json
tokenizer.json
tokenizer.model
tokenizer_config.json
```
Hardware requirement: 14 GB of GPU memory
### Fine-tuning datasets
1. ~346K text records (for in-domain pre-training): [DUOMO-Lab/TransGPT-pt](https://huggingface.co/datasets/DUOMO-Lab/TransGPT-pt)
2. ~56K dialogue records (for fine-tuning): [finetune_data](https://huggingface.co/data/finetune)
To train the LLaMA model yourself, see [https://github.com/DUOMO/TransGPT](https://github.com/DUOMO/TransGPT).
## Citation
```latex
@software{TransGPT,
author = {Wang Peng},
title = {DUOMO/TransGPT},
year = {2023},
url = {https://github.com/DUOMO/TransGPT},
}
```
## Reference
- https://github.com/shibing624/textgen
|
wx44wx/sd-class-butterflies-32
|
wx44wx
| 2023-08-03T04:03:59Z | 40 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-08-03T04:03:46Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('wx44wx/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Holmodi/a2c-AntBulletEnv-v0
|
Holmodi
| 2023-08-03T04:01:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T04:00:37Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1108.89 +/- 245.89
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
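Since the snippet above is only a placeholder, here is a hedged loading sketch. The checkpoint filename follows the usual SB3 naming convention and is an assumption (check the repository's file list); it also requires `pybullet` and a classic `gym` install.
```python
import gym
import pybullet_envs  # registers AntBulletEnv-v0
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; adjust it to match the files actually stored in the repo.
checkpoint = load_from_hub(repo_id="Holmodi/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

# Note: if training used VecNormalize statistics, load them as well for a faithful evaluation.
env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```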
|
tomoohive/PyramidTraining
|
tomoohive
| 2023-08-03T03:54:13Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-03T03:52:37Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tomoohive/PyramidTraining
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
wilson-wei/speecht5_tts-finetuned-voxpopuli
|
wilson-wei
| 2023-08-03T03:47:31Z | 81 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"nl",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-08-02T09:04:32Z |
---
language:
- nl
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_tts-finetuned-voxpopuli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_tts-finetuned-voxpopuli
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4575
## Model description
More information needed
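As a minimal inference sketch (not from the original card): it assumes the fine-tuned repo bundles the processor and that a generic x-vector speaker embedding from `Matthijs/cmu-arctic-xvectors` is acceptable; the Dutch sentence is a placeholder.
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("wilson-wei/speecht5_tts-finetuned-voxpopuli")
model = SpeechT5ForTextToSpeech.from_pretrained("wilson-wei/speecht5_tts-finetuned-voxpopuli")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dim x-vector works as a speaker embedding; index 7306 is an arbitrary choice.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Goedemorgen, dit is een test.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```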
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5204 | 4.3 | 1000 | 0.4779 |
| 0.4952 | 8.61 | 2000 | 0.4651 |
| 0.4942 | 12.91 | 3000 | 0.4614 |
| 0.4928 | 17.21 | 4000 | 0.4575 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.0
- Tokenizers 0.13.3
|
thanhnew2001/vn-bloom-7b1
|
thanhnew2001
| 2023-08-03T03:43:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-03T03:42:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
zacdennis/PyramidsRND
|
zacdennis
| 2023-08-03T03:43:13Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-03T03:43:10Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: zacdennis/PyramidsRND
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
NasimB/bnc_spoken_gutenberg_fixed_log_rarity-mixed-seed
|
NasimB
| 2023-08-03T03:35:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-03T01:29:08Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bnc_spoken_gutenberg_fixed_log_rarity-mixed-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bnc_spoken_gutenberg_fixed_log_rarity-mixed-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3573 | 0.29 | 500 | 5.3408 |
| 5.0516 | 0.59 | 1000 | 4.9319 |
| 4.7229 | 0.88 | 1500 | 4.7012 |
| 4.4572 | 1.17 | 2000 | 4.5696 |
| 4.3138 | 1.46 | 2500 | 4.4428 |
| 4.2149 | 1.76 | 3000 | 4.3505 |
| 4.0893 | 2.05 | 3500 | 4.2888 |
| 3.8993 | 2.34 | 4000 | 4.2377 |
| 3.8964 | 2.63 | 4500 | 4.1837 |
| 3.8463 | 2.93 | 5000 | 4.1298 |
| 3.6523 | 3.22 | 5500 | 4.1291 |
| 3.6059 | 3.51 | 6000 | 4.1041 |
| 3.584 | 3.8 | 6500 | 4.0725 |
| 3.4873 | 4.1 | 7000 | 4.0728 |
| 3.334 | 4.39 | 7500 | 4.0694 |
| 3.3297 | 4.68 | 8000 | 4.0589 |
| 3.3185 | 4.97 | 8500 | 4.0454 |
| 3.1661 | 5.27 | 9000 | 4.0614 |
| 3.1518 | 5.56 | 9500 | 4.0607 |
| 3.1478 | 5.85 | 10000 | 4.0586 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
accuracy-maker/q-taxi
|
accuracy-maker
| 2023-08-03T03:15:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T03:15:12Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="chrisght/q-taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
zwellington/clupubhealth-mini-test-3
|
zwellington
| 2023-08-03T03:12:26Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:clupubhealth",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-03T02:50:08Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
datasets:
- clupubhealth
metrics:
- rouge
model-index:
- name: clupubhealth-mini-test-3
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: clupubhealth
type: clupubhealth
config: mini
split: test
args: mini
metrics:
- name: Rouge1
type: rouge
value: 30.0438
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clupubhealth-mini-test-3
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the clupubhealth dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4191
- Rouge1: 30.0438
- Rouge2: 10.2364
- Rougel: 20.066
- Rougelsum: 20.1703
- Gen Len: 19.6
## Model description
More information needed
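A hedged usage sketch for summarization is shown below; the input text and generation lengths are placeholders, not values from the original card.
```python
from transformers import pipeline

# BART-based text2text model, so the summarization pipeline applies directly.
summarizer = pipeline("summarization", model="zwellington/clupubhealth-mini-test-3")
text = "Public health claim and supporting article text go here."  # placeholder input
print(summarizer(text, max_length=64, min_length=10, do_sample=False)[0]["summary_text"])
```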
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 0.5 | 5 | 2.5258 | 25.7171 | 6.7541 | 16.6375 | 17.2026 | 20.0 |
| 3.0394 | 1.0 | 10 | 2.4191 | 30.0438 | 10.2364 | 20.066 | 20.1703 | 19.6 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
accuracy-maker/q-FrozenLake-v1-4x4-noSlippery
|
accuracy-maker
| 2023-08-03T03:12:02Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T03:12:00Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="chrisght/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
obong/xlm-roberta-base-finetuned-panx-en
|
obong
| 2023-08-03T03:10:51Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-03T02:51:09Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: validation
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.683008356545961
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4019
- F1: 0.6830
## Model description
More information needed
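A minimal, hedged inference sketch for named-entity tagging; the example sentence is a placeholder.
```python
from transformers import pipeline

# aggregation_strategy="simple" groups word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="obong/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean works at Google in Mountain View."))
```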
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1318 | 1.0 | 50 | 0.6082 | 0.5386 |
| 0.5111 | 2.0 | 100 | 0.4409 | 0.6474 |
| 0.3597 | 3.0 | 150 | 0.4019 | 0.6830 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
wuru330/378A1_results_coord
|
wuru330
| 2023-08-03T03:01:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-03T02:15:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 378A1_results_coord
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 378A1_results_coord
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4924
- Accuracy: 0.8946
## Model description
More information needed
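A hedged inference sketch (the image path is a placeholder, and the label names depend on the unnamed fine-tuning dataset):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="wuru330/378A1_results_coord")
print(classifier("sample.jpg"))  # placeholder image path
```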
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2529 | 1.0 | 37 | 1.0701 | 0.6207 |
| 0.6771 | 2.0 | 74 | 0.6678 | 0.7687 |
| 0.4363 | 3.0 | 111 | 0.5622 | 0.8010 |
| 0.2884 | 4.0 | 148 | 0.3808 | 0.8690 |
| 0.2382 | 5.0 | 185 | 0.3492 | 0.8810 |
| 0.1213 | 6.0 | 222 | 0.3485 | 0.8895 |
| 0.1238 | 7.0 | 259 | 0.4012 | 0.8827 |
| 0.0878 | 8.0 | 296 | 0.4311 | 0.8639 |
| 0.0839 | 9.0 | 333 | 0.4417 | 0.8656 |
| 0.0406 | 10.0 | 370 | 0.3993 | 0.8844 |
| 0.0509 | 11.0 | 407 | 0.4922 | 0.8690 |
| 0.0347 | 12.0 | 444 | 0.4840 | 0.8741 |
| 0.033 | 13.0 | 481 | 0.4572 | 0.8827 |
| 0.0222 | 14.0 | 518 | 0.4376 | 0.8861 |
| 0.0197 | 15.0 | 555 | 0.4397 | 0.8912 |
| 0.0179 | 16.0 | 592 | 0.4464 | 0.8946 |
| 0.0167 | 17.0 | 629 | 0.4526 | 0.8946 |
| 0.0154 | 18.0 | 666 | 0.4588 | 0.8929 |
| 0.0148 | 19.0 | 703 | 0.4642 | 0.8929 |
| 0.0135 | 20.0 | 740 | 0.4691 | 0.8929 |
| 0.0131 | 21.0 | 777 | 0.4732 | 0.8946 |
| 0.0125 | 22.0 | 814 | 0.4776 | 0.8946 |
| 0.0119 | 23.0 | 851 | 0.4809 | 0.8946 |
| 0.0116 | 24.0 | 888 | 0.4841 | 0.8946 |
| 0.0112 | 25.0 | 925 | 0.4863 | 0.8946 |
| 0.0111 | 26.0 | 962 | 0.4885 | 0.8946 |
| 0.0108 | 27.0 | 999 | 0.4903 | 0.8946 |
| 0.0108 | 28.0 | 1036 | 0.4912 | 0.8946 |
| 0.0105 | 29.0 | 1073 | 0.4921 | 0.8946 |
| 0.0108 | 30.0 | 1110 | 0.4924 | 0.8946 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Austism/chronos-hermes-13b-v2-GPTQ
|
Austism
| 2023-08-03T03:00:23Z | 13 | 15 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"llama-2",
"pytorch",
"chatbot",
"storywriting",
"generalist-model",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-03T00:31:31Z |
---
license: other
tags:
- llama
- llama-2
- pytorch
- chatbot
- storywriting
- generalist-model
---
# chronos-hermes-13b-v2-GPTQ
([chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2) + [Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)) 75/25 merge
4bit (int4) 128g quantization
- [FP16 HF Weights](https://huggingface.co/Austism/chronos-hermes-13b-v2)
## Prompt Format
```
### Instruction:
<prompt>
### Response:
```
This is an adaptation of [chronos-hermes-13b](https://huggingface.co/Austism/chronos-hermes-13b) for llama-2.
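A hedged loading sketch using AutoGPTQ (not part of the original card): the file layout of this repo is assumed, so adjust `use_safetensors` or pass `model_basename` explicitly if loading fails.
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name = "Austism/chronos-hermes-13b-v2-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
# use_safetensors is an assumption; set it to False if the repo ships .bin weights.
model = AutoGPTQForCausalLM.from_quantized(model_name, device="cuda:0", use_safetensors=True)

# Prompt format from the section above.
prompt = "### Instruction:\nWrite a short story about a lighthouse keeper.\n\n### Response:\n"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
output = model.generate(inputs=input_ids, max_new_tokens=256, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```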
|
w11wo/indonesian-roberta-base-prdect-id
|
w11wo
| 2023-08-03T02:53:09Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"indonesian-roberta-base-prdect-id",
"id",
"dataset:prdect-id",
"arxiv:1907.11692",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-24T12:52:29Z |
---
language: id
tags:
- indonesian-roberta-base-prdect-id
license: apache-2.0
datasets:
- prdect-id
widget:
- text: "Wah, kualitas produk ini sangat bagus!"
---
## Indonesian RoBERTa Base PRDECT-ID
Indonesian RoBERTa Base PRDECT-ID is an emotion text-classification model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Indonesian RoBERTa Base](https://hf.co/flax-community/indonesian-roberta-base) model, which was then fine-tuned on the [`PRDECT-ID`](https://doi.org/10.1016/j.dib.2022.108554) dataset consisting of Indonesian product reviews (Sutoyo et al., 2022).
This model was trained using HuggingFace's PyTorch framework. All training was done on an NVIDIA T4, provided by Google Colaboratory. [Training metrics](https://huggingface.co/w11wo/indonesian-roberta-base-prdect-id/tensorboard) were logged via Tensorboard.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ----------------------------------- | ------- | ------------ | ------------------------------- |
| `indonesian-roberta-base-prdect-id` | 124M | RoBERTa Base | `PRDECT-ID` |
## Evaluation Results
The model achieves the following results on evaluation:
| Dataset | Accuracy | F1 | Precision | Recall |
| ----------- | -------- | -------- | --------- | -------- |
| `PRDECT-ID` | 0.685185 | 0.644750 | 0.646400 | 0.643710 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 2e-05
- `train_batch_size`: 32
- `eval_batch_size`: 32
- `seed`: 42
- `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `num_epochs`: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
| :-----------: | :---: | :---: | :-------------: | :------: | :----: | :-------: | :----: |
| 1.0358 | 1.0 | 152 | 0.8293 | 0.6519 | 0.5814 | 0.6399 | 0.5746 |
| 0.7012 | 2.0 | 304 | 0.7444 | 0.6741 | 0.6269 | 0.6360 | 0.6220 |
| 0.5599 | 3.0 | 456 | 0.7635 | 0.6852 | 0.6440 | 0.6433 | 0.6453 |
| 0.4628 | 4.0 | 608 | 0.8031 | 0.6852 | 0.6421 | 0.6471 | 0.6396 |
| 0.4027 | 5.0 | 760 | 0.8133 | 0.6852 | 0.6447 | 0.6464 | 0.6437 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/indonesian-roberta-base-prdect-id"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Wah, kualitas produk ini sangat bagus!")
```
## Disclaimer
Do consider the biases which come from both the pre-trained RoBERTa model and the `PRDECT-ID` dataset that may be carried over into the results of this model.
## Author
Indonesian RoBERTa Base PRDECT-ID was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
## References
```bib
@article{SUTOYO2022108554,
title = {PRDECT-ID: Indonesian product reviews dataset for emotions classification tasks},
journal = {Data in Brief},
volume = {44},
pages = {108554},
year = {2022},
issn = {2352-3409},
doi = {https://doi.org/10.1016/j.dib.2022.108554},
url = {https://www.sciencedirect.com/science/article/pii/S2352340922007612},
author = {Rhio Sutoyo and Said Achmad and Andry Chowanda and Esther Widhi Andangsari and Sani M. Isa},
keywords = {Natural language processing, Text processing, Text mining, Emotions classification, Sentiment analysis},
abstract = {Recognizing emotions is vital in communication. Emotions convey additional meanings to the communication process. Nowadays, people can communicate their emotions on many platforms; one is the product review. Product reviews in the online platform are an important element that affects customers’ buying decisions. Hence, it is essential to recognize emotions from the product reviews. Emotions recognition from the product reviews can be done automatically using a machine or deep learning algorithm. Dataset can be considered as the fuel to model the recognizer. However, only a limited dataset exists in recognizing emotions from the product reviews, particularly in a local language. This research contributes to the dataset collection of 5400 product reviews in Indonesian. It was carefully curated from various (29) product categories, annotated with five emotions, and verified by an expert in clinical psychology. The dataset supports an innovative process to build automatic emotion classification on product reviews.}
}
```
|
RUCAIBox/Erya
|
RUCAIBox
| 2023-08-03T02:41:13Z | 124 | 8 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"translation",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-21T06:09:18Z |
---
license: apache-2.0
pipeline_tag: translation
language:
- zh
---
# Model Description
Erya is a pretrained model specifically designed for translating Ancient Chinese into Modern Chinese. It utilizes an Encoder-Decoder architecture and has been trained using a combination of DMLM (Dual Masked Language Model) and DAS (Disyllabic Aligned Substitution) techniques on datasets comprising both Ancient Chinese and Modern Chinese texts. Detailed information about our work can be found here: [RUCAIBox/Erya (github.com)](https://github.com/RUCAIBox/Erya)
More information about Erya dataset can be found here: [RUCAIBox/Erya-dataset · Datasets at Hugging Face](https://huggingface.co/datasets/RUCAIBox/Erya-dataset), which can be used to tune the Erya model further for a better translation performance.
# Example
```python
>>> from transformers import BertTokenizer, CPTForConditionalGeneration
>>> tokenizer = BertTokenizer.from_pretrained("RUCAIBox/Erya")
>>> model = CPTForConditionalGeneration.from_pretrained("RUCAIBox/Erya")
>>> input_ids = tokenizer("安世字子孺,少以父任为郎。", return_tensors='pt')
>>> input_ids.pop("token_type_ids")
>>> pred_ids = model.generate(max_new_tokens=256, **input_ids)
>>> print(tokenizer.batch_decode(pred_ids, skip_special_tokens=True))
['安 世 字 子 孺 , 年 轻 时 因 父 任 郎 官 。']
```
|
RUCAIBox/Erya4FT
|
RUCAIBox
| 2023-08-03T02:39:19Z | 125 | 4 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"translation",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-29T12:15:15Z |
---
license: apache-2.0
language:
- zh
metrics:
- bleu
pipeline_tag: translation
widget:
- text: "竖子不足与谋。"
example_title: "translation"
---
# Model Description
Erya4FT is based on [Erya](https://huggingface.co/RUCAIBox/Erya) and further fine-tuned on our [Dataset](https://huggingface.co/datasets/RUCAIBox/Erya-dataset), enhancing the ability to translate ancient Chinese into Modern Chinese.
# Example
```python
>>> from transformers import BertTokenizer, CPTForConditionalGeneration
>>> tokenizer = BertTokenizer.from_pretrained("RUCAIBox/Erya4FT")
>>> model = CPTForConditionalGeneration.from_pretrained("RUCAIBox/Erya4FT")
>>> input_ids = tokenizer("竖子不足与谋。", return_tensors='pt')
>>> input_ids.pop("token_type_ids")
>>> pred_ids = model.generate(max_new_tokens=256, **input_ids)
>>> print(tokenizer.batch_decode(pred_ids, skip_special_tokens=True))
['这 小 子 不 值 得 与 他 商 量 。']
```
|
Mel-Iza0/Llama2-13B_ZeroShot-20K_classe_nenhuma
|
Mel-Iza0
| 2023-08-03T02:16:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-01T21:07:55Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
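The card does not include usage code; as a minimal sketch, the adapter could be loaded roughly as follows. The Llama-2-13B base model is inferred from the repository name and is an assumption, as is loading it in 4-bit at inference time.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-13b-hf"  # assumed base model, inferred from the repo name
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "Mel-Iza0/Llama2-13B_ZeroShot-20K_classe_nenhuma")
```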
|
obong/xlm-roberta-base-finetuned-panx-de-fr
|
obong
| 2023-08-03T02:09:36Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-03T01:45:54Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2907 | 1.0 | 715 | 0.1899 | 0.8204 |
| 0.1477 | 2.0 | 1430 | 0.1578 | 0.8509 |
| 0.0934 | 3.0 | 2145 | 0.1608 | 0.8609 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
CobraMamba/mamba-gpt-3b-v3
|
CobraMamba
| 2023-08-03T01:55:05Z | 1,403 | 18 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-28T07:45:24Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
inference: false
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
---
# Model Card
**The Best 3B Model! Surpassing dolly-v2-12b**
The best 3B model on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), with performance surpassing dolly-v2-12b.
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 27.3 |
| ARC (25-shot) | 41.7 |
| HellaSwag (10-shot) | 71.1 |
| TruthfulQA (0-shot) | 37.9 |
| Avg. | 44.5 |
We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above.
The training code and data will be open-sourced later on GitHub (https://github.com/chi2liu/mamba-gpt-3b).
## Training Dataset
`mamba-gpt-3b-v3` is trained on multiple datasets:
- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
- [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
## Summary
We have fine-tuned the OpenLLaMA model and surpassed the original model on multiple evaluation subtasks, making it currently the best-performing 3B model, with performance comparable to llama-7b.
- Base model: [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("CobraMamba/mamba-gpt-3b-v3")
model = AutoModelForCausalLM.from_pretrained("CobraMamba/mamba-gpt-3b-v3", trust_remote_code=True, torch_dtype=torch.float16)
input_context = "Your text here"
input_ids = tokenizer.encode(input_context, return_tensors="pt")
output = model.generate(input_ids, max_length=128, temperature=0.7)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Citation
If this work is helpful, please kindly cite as:
```bibtex
@Misc{mamba-gpt-3b-v3,
title = {Mamba-GPT-3b-v3},
author = {chiliu},
howpublished = {\url{https://huggingface.co/CobraMamba/mamba-gpt-3b-v3}},
year = {2023}
}
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
tomoohive/ppo-SnowballTarget
|
tomoohive
| 2023-08-03T01:44:06Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-03T01:44:00Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tomoohive/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
arpan-das-astrophysics/Pixelcopter-PLE-v0
|
arpan-das-astrophysics
| 2023-08-03T01:22:30Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T01:22:24Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 44.60 +/- 32.54
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kingbri/airolima-l2-13b-gpt4-2.0-GPTQ
|
kingbri
| 2023-08-03T01:19:45Z | 7 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-02T21:00:36Z |
---
language:
- en
---
This is a GPTQ quantized version of [airolima-l2-13b-gpt4-2.0](https://huggingface.co/kingbri/airolima-l2-13b-gpt4-2.0)
Please refer to the original creator for more information.
Branches:
- main: 4 bits, groupsize 128, act order false
- 4bit-128g-actorder: 4 bits, groupsize 128, act order true
- 4bit-32g-actorder: 4 bits, groupsize 32, act order true
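A branch can be selected at download time through the `revision` argument; this is a minimal sketch, and the downloaded files should be paired with whatever GPTQ loader you normally use.
```python
from huggingface_hub import snapshot_download

# Grab the act-order variant by pointing `revision` at the branch name listed above.
local_dir = snapshot_download(
    repo_id="kingbri/airolima-l2-13b-gpt4-2.0-GPTQ",
    revision="4bit-128g-actorder",
)
print(local_dir)
```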
|
NasimB/bnc_spoken-gutenberg_fixed-notm-log-rarity-seed
|
NasimB
| 2023-08-03T01:05:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-02T22:59:21Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bnc_spoken-gutenberg_fixed-notm-log-rarity-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bnc_spoken-gutenberg_fixed-notm-log-rarity-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3562 | 0.29 | 500 | 5.3378 |
| 5.0485 | 0.59 | 1000 | 4.9330 |
| 4.7138 | 0.88 | 1500 | 4.6931 |
| 4.4572 | 1.17 | 2000 | 4.5600 |
| 4.2993 | 1.46 | 2500 | 4.4414 |
| 4.2061 | 1.76 | 3000 | 4.3378 |
| 4.0947 | 2.05 | 3500 | 4.2700 |
| 3.904 | 2.34 | 4000 | 4.2282 |
| 3.8836 | 2.63 | 4500 | 4.1750 |
| 3.8332 | 2.93 | 5000 | 4.1240 |
| 3.6514 | 3.22 | 5500 | 4.1265 |
| 3.5995 | 3.51 | 6000 | 4.0987 |
| 3.5799 | 3.8 | 6500 | 4.0673 |
| 3.4799 | 4.1 | 7000 | 4.0650 |
| 3.3295 | 4.39 | 7500 | 4.0630 |
| 3.3246 | 4.68 | 8000 | 4.0518 |
| 3.3137 | 4.97 | 8500 | 4.0386 |
| 3.1584 | 5.27 | 9000 | 4.0551 |
| 3.1444 | 5.56 | 9500 | 4.0539 |
| 3.1446 | 5.85 | 10000 | 4.0532 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
zslrmhb/a2c-AntBulletEnv-v0
|
zslrmhb
| 2023-08-03T00:47:31Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-03T00:46:25Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1212.35 +/- 417.53
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub
# Filename is an assumption; adjust to the .zip actually stored in the repo
checkpoint = load_from_hub("zslrmhb/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
obong/xlm-roberta-base-finetuned-panx-de
|
obong
| 2023-08-03T00:38:26Z | 132 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-03T00:11:56Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8616294947655895
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1366
- F1: 0.8616
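A hedged usage sketch with the token-classification pipeline; the example sentence and the aggregation strategy are illustrative choices, not values from the card.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="obong/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # group word pieces into whole entities
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```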
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2571 | 1.0 | 525 | 0.1521 | 0.8216 |
| 0.1263 | 2.0 | 1050 | 0.1448 | 0.8512 |
| 0.0811 | 3.0 | 1575 | 0.1366 | 0.8616 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
simonycl/roberta-large-sst-2-64-13
|
simonycl
| 2023-08-03T00:35:50Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-02T21:12:58Z |
---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-large-sst-2-64-13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-sst-2-64-13
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7488
- Accuracy: 0.9141
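A hedged usage sketch; the card does not document label names, so the pipeline may report generic LABEL_0 / LABEL_1 identifiers.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="simonycl/roberta-large-sst-2-64-13",
)
print(classifier("A moving and beautifully shot film."))
```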
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 0.7118 | 0.5 |
| No log | 2.0 | 8 | 0.7101 | 0.5 |
| 0.7289 | 3.0 | 12 | 0.7072 | 0.5 |
| 0.7289 | 4.0 | 16 | 0.7042 | 0.5 |
| 0.6989 | 5.0 | 20 | 0.6999 | 0.5 |
| 0.6989 | 6.0 | 24 | 0.6966 | 0.5 |
| 0.6989 | 7.0 | 28 | 0.6938 | 0.5 |
| 0.6959 | 8.0 | 32 | 0.6938 | 0.5 |
| 0.6959 | 9.0 | 36 | 0.6990 | 0.4766 |
| 0.6977 | 10.0 | 40 | 0.6931 | 0.5 |
| 0.6977 | 11.0 | 44 | 0.6854 | 0.5156 |
| 0.6977 | 12.0 | 48 | 0.6882 | 0.6016 |
| 0.6514 | 13.0 | 52 | 0.6495 | 0.7578 |
| 0.6514 | 14.0 | 56 | 0.5930 | 0.7656 |
| 0.5232 | 15.0 | 60 | 0.5280 | 0.8203 |
| 0.5232 | 16.0 | 64 | 0.4286 | 0.875 |
| 0.5232 | 17.0 | 68 | 0.2916 | 0.8906 |
| 0.2793 | 18.0 | 72 | 0.3444 | 0.9141 |
| 0.2793 | 19.0 | 76 | 0.4673 | 0.8984 |
| 0.0537 | 20.0 | 80 | 0.4232 | 0.9062 |
| 0.0537 | 21.0 | 84 | 0.4351 | 0.9297 |
| 0.0537 | 22.0 | 88 | 0.5124 | 0.9297 |
| 0.0032 | 23.0 | 92 | 0.4585 | 0.9375 |
| 0.0032 | 24.0 | 96 | 0.5067 | 0.9219 |
| 0.0016 | 25.0 | 100 | 0.5244 | 0.9375 |
| 0.0016 | 26.0 | 104 | 0.7050 | 0.9141 |
| 0.0016 | 27.0 | 108 | 0.5847 | 0.9297 |
| 0.0004 | 28.0 | 112 | 0.5744 | 0.9297 |
| 0.0004 | 29.0 | 116 | 0.5828 | 0.9375 |
| 0.0001 | 30.0 | 120 | 0.5884 | 0.9375 |
| 0.0001 | 31.0 | 124 | 0.5931 | 0.9375 |
| 0.0001 | 32.0 | 128 | 0.5983 | 0.9375 |
| 0.0001 | 33.0 | 132 | 0.6038 | 0.9375 |
| 0.0001 | 34.0 | 136 | 0.6076 | 0.9375 |
| 0.0001 | 35.0 | 140 | 0.6083 | 0.9375 |
| 0.0001 | 36.0 | 144 | 0.7169 | 0.9219 |
| 0.0001 | 37.0 | 148 | 0.6166 | 0.9375 |
| 0.0336 | 38.0 | 152 | 0.8108 | 0.9141 |
| 0.0336 | 39.0 | 156 | 0.7454 | 0.9141 |
| 0.0348 | 40.0 | 160 | 0.6944 | 0.9141 |
| 0.0348 | 41.0 | 164 | 0.7467 | 0.9141 |
| 0.0348 | 42.0 | 168 | 0.6764 | 0.9141 |
| 0.0402 | 43.0 | 172 | 0.6839 | 0.9219 |
| 0.0402 | 44.0 | 176 | 0.7118 | 0.9219 |
| 0.0002 | 45.0 | 180 | 0.6943 | 0.9219 |
| 0.0002 | 46.0 | 184 | 0.7469 | 0.9141 |
| 0.0002 | 47.0 | 188 | 0.7264 | 0.9219 |
| 0.0001 | 48.0 | 192 | 0.7112 | 0.9219 |
| 0.0001 | 49.0 | 196 | 0.6948 | 0.9219 |
| 0.0001 | 50.0 | 200 | 0.8408 | 0.9062 |
| 0.0001 | 51.0 | 204 | 0.7876 | 0.9141 |
| 0.0001 | 52.0 | 208 | 0.7271 | 0.9219 |
| 0.0001 | 53.0 | 212 | 0.8016 | 0.9141 |
| 0.0001 | 54.0 | 216 | 0.8336 | 0.9062 |
| 0.0148 | 55.0 | 220 | 0.7701 | 0.9219 |
| 0.0148 | 56.0 | 224 | 0.8717 | 0.9062 |
| 0.0148 | 57.0 | 228 | 0.8018 | 0.9141 |
| 0.0001 | 58.0 | 232 | 0.8777 | 0.9062 |
| 0.0001 | 59.0 | 236 | 0.9158 | 0.9062 |
| 0.0001 | 60.0 | 240 | 0.9356 | 0.8984 |
| 0.0001 | 61.0 | 244 | 0.7494 | 0.9062 |
| 0.0001 | 62.0 | 248 | 0.6708 | 0.9219 |
| 0.0298 | 63.0 | 252 | 0.6649 | 0.9141 |
| 0.0298 | 64.0 | 256 | 0.7463 | 0.9062 |
| 0.0285 | 65.0 | 260 | 0.8065 | 0.8984 |
| 0.0285 | 66.0 | 264 | 0.8267 | 0.9062 |
| 0.0285 | 67.0 | 268 | 0.8447 | 0.8984 |
| 0.0001 | 68.0 | 272 | 0.8409 | 0.8984 |
| 0.0001 | 69.0 | 276 | 0.6652 | 0.9219 |
| 0.0005 | 70.0 | 280 | 0.6507 | 0.9219 |
| 0.0005 | 71.0 | 284 | 0.6889 | 0.9062 |
| 0.0005 | 72.0 | 288 | 0.6652 | 0.9062 |
| 0.0296 | 73.0 | 292 | 0.6454 | 0.9062 |
| 0.0296 | 74.0 | 296 | 0.6368 | 0.9062 |
| 0.0002 | 75.0 | 300 | 0.6396 | 0.9062 |
| 0.0002 | 76.0 | 304 | 0.6505 | 0.9062 |
| 0.0002 | 77.0 | 308 | 0.6620 | 0.9062 |
| 0.0002 | 78.0 | 312 | 0.6734 | 0.9062 |
| 0.0002 | 79.0 | 316 | 0.6846 | 0.9062 |
| 0.0002 | 80.0 | 320 | 0.6951 | 0.9062 |
| 0.0002 | 81.0 | 324 | 0.7038 | 0.9062 |
| 0.0002 | 82.0 | 328 | 0.7116 | 0.9062 |
| 0.0002 | 83.0 | 332 | 0.7187 | 0.9062 |
| 0.0002 | 84.0 | 336 | 0.7250 | 0.9062 |
| 0.0002 | 85.0 | 340 | 0.6930 | 0.9141 |
| 0.0002 | 86.0 | 344 | 0.6856 | 0.9219 |
| 0.0002 | 87.0 | 348 | 0.7474 | 0.9141 |
| 0.0227 | 88.0 | 352 | 0.6506 | 0.9219 |
| 0.0227 | 89.0 | 356 | 0.6457 | 0.9219 |
| 0.0001 | 90.0 | 360 | 0.7022 | 0.9141 |
| 0.0001 | 91.0 | 364 | 0.7275 | 0.9062 |
| 0.0001 | 92.0 | 368 | 0.7375 | 0.9141 |
| 0.0001 | 93.0 | 372 | 0.8008 | 0.9062 |
| 0.0001 | 94.0 | 376 | 0.6855 | 0.9141 |
| 0.0053 | 95.0 | 380 | 0.5869 | 0.9375 |
| 0.0053 | 96.0 | 384 | 0.6060 | 0.9297 |
| 0.0053 | 97.0 | 388 | 0.5990 | 0.9297 |
| 0.0001 | 98.0 | 392 | 0.6250 | 0.9141 |
| 0.0001 | 99.0 | 396 | 0.6505 | 0.9141 |
| 0.0001 | 100.0 | 400 | 0.6577 | 0.9141 |
| 0.0001 | 101.0 | 404 | 0.6594 | 0.9141 |
| 0.0001 | 102.0 | 408 | 0.6602 | 0.9141 |
| 0.0001 | 103.0 | 412 | 0.6610 | 0.9219 |
| 0.0001 | 104.0 | 416 | 0.6622 | 0.9141 |
| 0.037 | 105.0 | 420 | 0.6055 | 0.9297 |
| 0.037 | 106.0 | 424 | 0.5915 | 0.9297 |
| 0.037 | 107.0 | 428 | 0.6261 | 0.9297 |
| 0.0001 | 108.0 | 432 | 0.6679 | 0.9219 |
| 0.0001 | 109.0 | 436 | 0.7106 | 0.9219 |
| 0.0001 | 110.0 | 440 | 0.7223 | 0.9219 |
| 0.0001 | 111.0 | 444 | 0.7267 | 0.9141 |
| 0.0001 | 112.0 | 448 | 0.7287 | 0.9141 |
| 0.0001 | 113.0 | 452 | 0.7298 | 0.9141 |
| 0.0001 | 114.0 | 456 | 0.7306 | 0.9141 |
| 0.0001 | 115.0 | 460 | 0.7314 | 0.9141 |
| 0.0001 | 116.0 | 464 | 0.7323 | 0.9141 |
| 0.0001 | 117.0 | 468 | 0.7333 | 0.9141 |
| 0.0001 | 118.0 | 472 | 0.7342 | 0.9141 |
| 0.0001 | 119.0 | 476 | 0.7351 | 0.9141 |
| 0.0001 | 120.0 | 480 | 0.7359 | 0.9141 |
| 0.0001 | 121.0 | 484 | 0.7369 | 0.9141 |
| 0.0001 | 122.0 | 488 | 0.7379 | 0.9141 |
| 0.0001 | 123.0 | 492 | 0.7388 | 0.9141 |
| 0.0001 | 124.0 | 496 | 0.7396 | 0.9141 |
| 0.0001 | 125.0 | 500 | 0.7403 | 0.9141 |
| 0.0001 | 126.0 | 504 | 0.7410 | 0.9141 |
| 0.0001 | 127.0 | 508 | 0.7417 | 0.9141 |
| 0.0001 | 128.0 | 512 | 0.7423 | 0.9141 |
| 0.0001 | 129.0 | 516 | 0.7429 | 0.9141 |
| 0.0001 | 130.0 | 520 | 0.7435 | 0.9141 |
| 0.0001 | 131.0 | 524 | 0.7440 | 0.9141 |
| 0.0001 | 132.0 | 528 | 0.7446 | 0.9141 |
| 0.0001 | 133.0 | 532 | 0.7450 | 0.9141 |
| 0.0001 | 134.0 | 536 | 0.7455 | 0.9141 |
| 0.0001 | 135.0 | 540 | 0.7459 | 0.9141 |
| 0.0001 | 136.0 | 544 | 0.7463 | 0.9141 |
| 0.0001 | 137.0 | 548 | 0.7466 | 0.9141 |
| 0.0001 | 138.0 | 552 | 0.7470 | 0.9141 |
| 0.0001 | 139.0 | 556 | 0.7473 | 0.9141 |
| 0.0001 | 140.0 | 560 | 0.7475 | 0.9141 |
| 0.0001 | 141.0 | 564 | 0.7478 | 0.9141 |
| 0.0001 | 142.0 | 568 | 0.7480 | 0.9141 |
| 0.0001 | 143.0 | 572 | 0.7482 | 0.9141 |
| 0.0001 | 144.0 | 576 | 0.7483 | 0.9141 |
| 0.0001 | 145.0 | 580 | 0.7485 | 0.9141 |
| 0.0001 | 146.0 | 584 | 0.7486 | 0.9141 |
| 0.0001 | 147.0 | 588 | 0.7487 | 0.9141 |
| 0.0001 | 148.0 | 592 | 0.7488 | 0.9141 |
| 0.0001 | 149.0 | 596 | 0.7488 | 0.9141 |
| 0.0001 | 150.0 | 600 | 0.7488 | 0.9141 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
|
shanover/medbot-conv
|
shanover
| 2023-08-03T00:27:22Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-02T21:19:02Z |
---
license: mit
tags:
- conversational
---
|
nhat117/dica-7b-long-llama2-epochs-sft
|
nhat117
| 2023-08-03T00:19:26Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-03T00:14:38Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
flax-community/t5-recipe-generation
|
flax-community
| 2023-08-03T00:04:15Z | 1,797 | 62 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"seq2seq",
"text-generation",
"recipe-generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- seq2seq
- t5
- text-generation
- recipe-generation
pipeline_tag: text2text-generation
widget:
- text: "provolone cheese, bacon, bread, ginger"
- text: "sugar, crunchy jif peanut butter, cornflakes"
- text: "sweet butter, confectioners sugar, flaked coconut, condensed milk, nuts, vanilla, dipping chocolate"
- text: "macaroni, butter, salt, bacon, milk, flour, pepper, cream corn"
- text: "hamburger, sausage, onion, regular, american cheese, colby cheese"
- text: "chicken breasts, onion, garlic, great northern beans, black beans, green chilies, broccoli, garlic oil, butter, cajun seasoning, salt, oregano, thyme, black pepper, basil, worcestershire sauce, chicken broth, sour cream, chardonnay wine"
- text: "serrano peppers, garlic, celery, oregano, canola oil, vinegar, water, kosher salt, salt, black pepper"
---

# Chef Transformer (T5)
> This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/recipe-generation-model/7475), organized by [HuggingFace](https://huggingface.co/) and TPU usage sponsored by Google.
Want to give it a try? Then what's the wait, head over to Hugging Face Spaces [here](https://huggingface.co/spaces/flax-community/chef-transformer).
## Team Members
- Mehrdad Farahani ([m3hrdadfi](https://huggingface.co/m3hrdadfi))
- Kartik Godawat ([dk-crazydiv](https://huggingface.co/dk-crazydiv))
- Haswanth Aekula ([hassiahk](https://huggingface.co/hassiahk))
- Deepak Pandian ([rays2pix](https://huggingface.co/rays2pix))
- Nicholas Broad ([nbroad](https://huggingface.co/nbroad))
## Dataset
[RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation](https://recipenlg.cs.put.poznan.pl/). This dataset contains **2,231,142** cooking recipes (>2 million) with a size of **2.14 GB**, and it has been carefully preprocessed.
### Example
```json
{
"NER": [
"oyster crackers",
"salad dressing",
"lemon pepper",
"dill weed",
"garlic powder",
"salad oil"
],
"directions": [
"Combine salad dressing mix and oil.",
"Add dill weed, garlic powder and lemon pepper.",
"Pour over crackers; stir to coat.",
"Place in warm oven.",
"Use very low temperature for 15 to 20 minutes."
],
"ingredients": [
"12 to 16 oz. plain oyster crackers",
"1 pkg. Hidden Valley Ranch salad dressing mix",
"1/4 tsp. lemon pepper",
"1/2 to 1 tsp. dill weed",
"1/4 tsp. garlic powder",
"3/4 to 1 c. salad oil"
],
"link": "www.cookbooks.com/Recipe-Details.aspx?id=648947",
"source": "Gathered",
"title": "Hidden Valley Ranch Oyster Crackers"
}
```
## How To Use
```bash
# Installing requirements
pip install transformers
```
```python
from transformers import FlaxAutoModelForSeq2SeqLM
from transformers import AutoTokenizer
MODEL_NAME_OR_PATH = "flax-community/t5-recipe-generation"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME_OR_PATH, use_fast=True)
model = FlaxAutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME_OR_PATH)
prefix = "items: "
# generation_kwargs = {
# "max_length": 512,
# "min_length": 64,
# "no_repeat_ngram_size": 3,
# "early_stopping": True,
# "num_beams": 5,
# "length_penalty": 1.5,
# }
generation_kwargs = {
"max_length": 512,
"min_length": 64,
"no_repeat_ngram_size": 3,
"do_sample": True,
"top_k": 60,
"top_p": 0.95
}
special_tokens = tokenizer.all_special_tokens
tokens_map = {
"<sep>": "--",
"<section>": "\n"
}
def skip_special_tokens(text, special_tokens):
for token in special_tokens:
text = text.replace(token, "")
return text
def target_postprocessing(texts, special_tokens):
if not isinstance(texts, list):
texts = [texts]
new_texts = []
for text in texts:
text = skip_special_tokens(text, special_tokens)
for k, v in tokens_map.items():
text = text.replace(k, v)
new_texts.append(text)
return new_texts
def generation_function(texts):
_inputs = texts if isinstance(texts, list) else [texts]
inputs = [prefix + inp for inp in _inputs]
inputs = tokenizer(
inputs,
max_length=256,
padding="max_length",
truncation=True,
return_tensors="jax"
)
input_ids = inputs.input_ids
attention_mask = inputs.attention_mask
output_ids = model.generate(
input_ids=input_ids,
attention_mask=attention_mask,
**generation_kwargs
)
generated = output_ids.sequences
generated_recipe = target_postprocessing(
tokenizer.batch_decode(generated, skip_special_tokens=False),
special_tokens
)
return generated_recipe
```
```python
items = [
"macaroni, butter, salt, bacon, milk, flour, pepper, cream corn",
"provolone cheese, bacon, bread, ginger"
]
generated = generation_function(items)
for text in generated:
sections = text.split("\n")
for section in sections:
section = section.strip()
if section.startswith("title:"):
section = section.replace("title:", "")
headline = "TITLE"
elif section.startswith("ingredients:"):
section = section.replace("ingredients:", "")
headline = "INGREDIENTS"
elif section.startswith("directions:"):
section = section.replace("directions:", "")
headline = "DIRECTIONS"
if headline == "TITLE":
print(f"[{headline}]: {section.strip().capitalize()}")
else:
section_info = [f" - {i+1}: {info.strip().capitalize()}" for i, info in enumerate(section.split("--"))]
print(f"[{headline}]:")
print("\n".join(section_info))
print("-" * 130)
```
Output:
```text
[TITLE]: Macaroni and corn
[INGREDIENTS]:
- 1: 2 c. macaroni
- 2: 2 tbsp. butter
- 3: 1 tsp. salt
- 4: 4 slices bacon
- 5: 2 c. milk
- 6: 2 tbsp. flour
- 7: 1/4 tsp. pepper
- 8: 1 can cream corn
[DIRECTIONS]:
- 1: Cook macaroni in boiling salted water until tender.
- 2: Drain.
- 3: Melt butter in saucepan.
- 4: Blend in flour, salt and pepper.
- 5: Add milk all at once.
- 6: Cook and stir until thickened and bubbly.
- 7: Stir in corn and bacon.
- 8: Pour over macaroni and mix well.
----------------------------------------------------------------------------------------------------------------------------------
[TITLE]: Grilled provolone and bacon sandwich
[INGREDIENTS]:
- 1: 2 slices provolone cheese
- 2: 2 slices bacon
- 3: 2 slices sourdough bread
- 4: 2 slices pickled ginger
[DIRECTIONS]:
- 1: Place a slice of provolone cheese on one slice of bread.
- 2: Top with a slice of bacon.
- 3: Top with a slice of pickled ginger.
- 4: Top with the other slice of bread.
- 5: Heat a skillet over medium heat.
- 6: Place the sandwich in the skillet and cook until the cheese is melted and the bread is golden brown.
----------------------------------------------------------------------------------------------------------------------------------
```
## Evaluation
Since the test set is not available, we evaluate the model on a shared test set. This test set consists of 5% of the whole test set (*= 5,000 records*),
and we generate five recipes for each input (*= 25,000 records*).
The following table summarizes the scores obtained by the **Chef Transformer** and **RecipeNLG** as our baseline.
| Model | COSIM | WER | ROUGE-2 | BLEU | GLEU | METEOR |
|:------------------------------------------------------------------------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
| [RecipeNLG](https://huggingface.co/mbien/recipenlg) | 0.5723 | 1.2125 | 0.1354 | 0.1164 | 0.1503 | 0.2309 |
| [Chef Transformer](https://huggingface.co/flax-community/t5-recipe-generation) * | **0.7282** | **0.7613** | **0.2470** | **0.3245** | **0.2624** | **0.4150** |
*Of the five recipes generated for each NER (food items) input, only the highest score was taken into account for the WER, COSIM, and ROUGE metrics, while BLEU, GLEU, and METEOR were computed with multiple references by design.*
## Copyright
Special thanks to those who provided these fantastic materials.
- [Anatomy](https://www.flaticon.com/free-icon)
- [Chef Hat](https://www.vecteezy.com/members/jellyfishwater)
- [Moira Nazzari](https://pixabay.com/photos/food-dessert-cake-eggs-butter-3048440/)
- [Instagram Post](https://www.freepik.com/free-psd/recipes-ad-social-media-post-template_11520617.htm)
|
johnpaulbin/autotrain-spam-39547103148
|
johnpaulbin
| 2023-08-02T23:23:07Z | 110 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:johnpaulbin/autotrain-data-spam",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-07T20:10:17Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- johnpaulbin/autotrain-data-spam
co2_eq_emissions:
emissions: 1.3372976003843626
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 39547103148
- CO2 Emissions (in grams): 1.3373
## Validation Metrics
- Loss: 0.002
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/johnpaulbin/autotrain-spam-39547103148
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("johnpaulbin/autotrain-spam-39547103148", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("johnpaulbin/autotrain-spam-39547103148", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
NovelAI/nerdstash-tokenizer-v1
|
NovelAI
| 2023-08-02T23:11:38Z | 0 | 6 | null |
[
"tokenizer",
"novelai",
"sentencepiece",
"en",
"ja",
"license:gpl-2.0",
"region:us"
] | null | 2023-08-02T22:50:10Z |
---
license: gpl-2.0
language:
- en
- ja
tags:
- tokenizer
- novelai
- sentencepiece
---
# Tokenizer
Finetune here to talk a bit about [NovelAI](https://novelai.net/)'s new tokenizer that I worked on. First a quick reminder. In most cases, our models don't see words as individual letters. Instead, text is broken down into tokens, which are words or word fragments. For example, the sentence “`The quick brown fox jumps over the goblin.`” would tokenize as “`The| quick| brown| fox| jumps| over| the| go|bl|in.`” in the Pile tokenizer used by GPT-NeoX 20B and Krake, with each | signifying a boundary between tokens.
When deciding on a tokenizer for a model, there are various criteria to consider. The first and most obvious is the vocabulary size. It may be tempting to just set it very high to ensure that every word or even multiple words, as in the case of the tokenizer used by AI21's Jurassic models, gets its own distinct token. However, this has the drawback that the model will be less able to generalize. That means it will not be able to make use of meaningful patterns in how words are spelt, such as similarities between words ending in “-ize”. It will also be less robust against misspellings. At the same time, the vocabulary of the tokenizer should not be too small. Common words should have their own token. The same goes for Unicode characters that are likely to show up in tokenized texts, because otherwise they will have to be constructed by the model byte-by-byte, which is much harder for the model. A good trade-off with regards to vocabulary size is around 32000 tokens for a single language vocabulary. This also has the benefit of fitting easily within 16 bits, which makes handling tokenized data easier in many cases.
The type of tokenizer is another important decision to make. Unigram tokenizers have been shown to produce much more meaningful tokenizations of words, while so far the predominantly used tokenizer type for large language models (LLM) is BPE (byte pair encoding). The most common implementation of BPE is probably the GPT2 one, but Google's sentencepiece implementation of BPE offers the not so slight advantage of natively being able to tokenize Unicode characters directly, without having to assemble them from bytes, which requires additional tokens representing partial Unicode code points to be added to the vocabulary, wasting some additional space. For example, “🙂” consists of four bytes “`F0 9F 99 82`”, so in traditional BPE, `F0` would first get merged with `9F` to make up `F09F`, which is then merged with `99` to make up `F09F99`, which is then merged with `82`, so two additional intermediate tokens would have to be added to the vocabulary. At the same time, sentencepiece also supports tokenizing arbitrary binary data using byte tokens.
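As a quick sanity check of the byte sequence mentioned above (a plain Python snippet, not part of the tokenizer itself):
```python
print("🙂".encode("utf-8").hex(" "))  # f0 9f 99 82
```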
Finally, the compression ratio achieved by the tokenizer is important to consider. If a given text tokenizes into fewer tokens, the LLM can see more of it at once within its fixed maximum context size, which is important for users of the LLM. It also influences how much text you need to reach a certain number of tokens if, say, you are trying to hit a target amount of training data. If your tokenizer compresses text less efficiently, you may reach a dataset of a given token count more easily, but it stands to reason that a model trained on such a less efficiently tokenized dataset will learn less than one trained on a same-sized dataset tokenized with a higher compression ratio, because in effect it sees fewer bits of actual information during training.
With all these things in mind, we decided that we want our own tokenizer for the models we will train, one that is better optimized for our use cases, such as storytelling.
Tokenizers are trained on data, so we started by extracting small randomized subsets from the various distinct subsets of our model training dataset and used these to evaluate the available tokenizer training approaches. Both Huggingface's tokenizers library and Google's sentencepiece support training tokenizers of different types. A preliminary investigation showed that sentencepiece's trainer is more memory efficient, although a training dataset in the low double digit gibibytes still required a compute node with 1TB of RAM to run successfully. Due to this, we decided to use sentencepiece.
We originally decided on a vocabulary size of 32000, but when training Genji V2, we found that modifying an existing tokenizer to support an additional language was not a pleasant experience. As it seems likely that we will want to do similar [language transfer learning](https://blog.novelai.net/data-efficient-language-transfer-with-gpt-j-45daedaaf35a) in the future, we have decided to have our tokenizer accommodate both English and Japanese from the start. For this reason, we decided to double the vocabulary size to 64000, which then was close to filling up the available token ID space of 16 bits, so we went all the way to a vocabulary size of 65535 tokens. During tokenizer training, I carefully balanced the training data in such a way that latin alphabet tokens of a length of at least 2 characters and Japanese language tokens take up approximately the same amount of token space. Bumping the vocabulary size up to 65535 also allows more Unicode character tokens such as emoji. For the Japanese part of tokenizer training data, we used our existing Genji training data and a comparatively smaller amount of Japanese Wikipedia.
We have manually added tokens for certain multi-whitespace strings and have set up the tokenizer in such a way that numbers are tokenized digit by digit. Tokenizing numbers digit by digit may slightly reduce compression ratio in number heavy texts, but it will also allow the LLM to more effectively learn how to handle numeric values.
Considering the possible benefits of Unigram tokenizers, we started out by training a Unigram tokenizer. This took multiple runs of rebalancing the dataset between languages and also between the different subsets of our main datasets to get the token distribution to look the way we want. Each Unigram training run took a few hours. For the sake of comparison, we also trained a BPE model, which again required multiple runs to rebalance the dataset. BPE runs ended up much slower, taking nearly a whole day.
Both tokenizers were then evaluated on a held-out part of the dataset. The idea was that, if the compression ratios are similar or Unigram is only slightly worse, we would use the Unigram tokenizer to benefit from the more natural word segmentation. We found that the BPE tokenizer has a 25-29% higher compression ratio on the largest parts of our English language dataset. This unexpectedly large gap in performance led us to choose the BPE tokenizer over the Unigram one and also explains the continuing prevalence of BPE tokenizers for LLMs. We also compared the compression ratio of our tokenizer to the LLaMa tokenizer, which is a sentencepiece based BPE tokenizer with a 32000 token vocabulary. In comparison to the LLaMa tokenizer, we find our tokenizer to achieve a 7-19% higher compression ratio on the largest parts of our English language dataset.
Finally, I would like to give some stats about token distribution. Our tokenizer contains 28586 tokens made up of latin alphabet characters with a minimum length of two. Tokens with a leading space are included in this. It contains 18123 Japanese tokens longer than a single character and 9626 tokens for Japanese and Chinese characters, which cannot be easily told apart for the sake of these stats due to the Unicode han unification. 9200 other tokens are included. This space is taken up mostly by Unicode characters such as emoji.
For comparison, the LLaMa tokenizer contains 23964 tokens made up only of latin alphabet characters, no Japanese token longer than a single character, 836 Japanese characters and 7224 other tokens.
## JavaScript implementation
The JavaScript implementation used by the NovelAI frontend can be found [here](https://github.com/NovelAI/nai-js-tokenizer).
## V2
For [V2](https://huggingface.co/NovelAI/nerdstash-tokenizer-v2/), the original digit special tokens were replaced with English contractions. Digits are therefore encoded using the corresponding byte tokens instead.
## Example usage with transformers
Since it seems to be the most up-to-date class for using sentencepiece tokenizers in transformers, this tokenizer uses the `LlamaTokenizer` class. Note that the `LlamaTokenizerFast` class is not supported. `AutoTokenizer` selects the fast version and is also not supported.
```python
from transformers import LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("NovelAI/nerdstash-tokenizer-v1")
print(tokenizer.encode("Hello, world!"))
```
## Example usage with sentencepiece
```python
import sentencepiece as spm
s = spm.SentencePieceProcessor(model_file='tokenizer.model')
text = "The quick brown fox jumps over the goblin."
print("Text:", text)
print("Token IDs:", s.encode(text))
# Token IDs: [541, 1939, 6573, 22820, 22734, 712, 336, 34477, 49230]
print("Readable tokens:", s.encode(text, out_type=str))
# Readable tokens: ['The', '▁quick', '▁brown', '▁fox', '▁jumps', '▁over', '▁the', '▁goblin', '.']
```
## License
The tokenizer is licensed under the GNU General Public License, version 2.
|
NovelAI/nerdstash-tokenizer-v2
|
NovelAI
| 2023-08-02T23:11:10Z | 0 | 7 | null |
[
"tokenizer",
"novelai",
"sentencepiece",
"en",
"ja",
"license:gpl-2.0",
"region:us"
] | null | 2023-08-02T23:03:57Z |
---
license: gpl-2.0
language:
- en
- ja
tags:
- tokenizer
- novelai
- sentencepiece
---
# Tokenizer
Finetune here to talk a bit about [NovelAI](https://novelai.net/)'s new tokenizer that I worked on. First a quick reminder. In most cases, our models don't see words as individual letters. Instead, text is broken down into tokens, which are words or word fragments. For example, the sentence “`The quick brown fox jumps over the goblin.`” would tokenize as “`The| quick| brown| fox| jumps| over| the| go|bl|in.`” in the Pile tokenizer used by GPT-NeoX 20B and Krake, with each | signifying a boundary between tokens.
When deciding on a tokenizer for a model, there are various criteria to consider. The first and most obvious is the vocabulary size. It may be tempting to just set it very high to ensure that every word or even multiple words, as in the case of the tokenizer used by AI21's Jurassic models, gets its own distinct token. However, this has the drawback that the model will be less able to generalize. That means it will not be able to make use of meaningful patterns in how words are spelt, such as similarities between words ending in “-ize”. It will also be less robust against misspellings. At the same time, the vocabulary of the tokenizer should not be too small. Common words should have their own token. The same goes for Unicode characters that are likely to show up in tokenized texts, because otherwise they will have to be constructed by the model byte-by-byte, which is much harder for the model. A good trade-off with regards to vocabulary size is around 32000 tokens for a single language vocabulary. This also has the benefit of fitting easily within 16 bits, which makes handling tokenized data easier in many cases.
The type of tokenizer is another important decision to make. Unigram tokenizers have been shown to produce much more meaningful tokenizations of words, while so far the predominantly used tokenizer type for large language models (LLM) is BPE (byte pair encoding). The most common implementation of BPE is probably the GPT2 one, but Google's sentencepiece implementation of BPE offers the not so slight advantage of natively being able to tokenize Unicode characters directly, without having to assemble them from bytes, which requires additional tokens representing partial Unicode code points to be added to the vocabulary, wasting some additional space. For example, “🙂” consists of four bytes “`F0 9F 99 82`”, so in traditional BPE, `F0` would first get merged with `9F` to make up `F09F`, which is then merged with `99` to make up `F09F99`, which is then merged with `82`, so two additional intermediate tokens would have to be added to the vocabulary. At the same time, sentencepiece also supports tokenizing arbitrary binary data using byte tokens.
Finally, the compression ratio achieved by the tokenizer is important to consider. If a given text tokenizes into fewer tokens, the LLM can see more of it at once within its fixed maximum context size, which is important for users of the LLM. It also influences how much text you need to reach a certain number of tokens if, say, you are trying to hit a target amount of training data. If your tokenizer compresses text less efficiently, you may reach a dataset of a given token count more easily, but it stands to reason that a model trained on such a less efficiently tokenized dataset will learn less than one trained on a same-sized dataset tokenized with a higher compression ratio, because in effect it sees fewer bits of actual information during training.
With all these things in mind, we decided that we want our own tokenizer for the models we will train, one that is better optimized for our use cases, such as storytelling.
Tokenizers are trained on data, so we started by extracting small randomized subsets from the various distinct subsets of our model training dataset and used these to evaluate the available tokenizer training approaches. Both Huggingface's tokenizers library and Google's sentencepiece support training tokenizers of different types. A preliminary investigation showed that sentencepiece's trainer is more memory efficient, although a training dataset in the low double digit gibibytes still required a compute node with 1TB of RAM to run successfully. Due to this, we decided to use sentencepiece.
We originally decided on a vocabulary size of 32000, but when training Genji V2, we found that modifying an existing tokenizer to support an additional language was not a pleasant experience. As it seems likely that we will want to do similar [language transfer learning](https://blog.novelai.net/data-efficient-language-transfer-with-gpt-j-45daedaaf35a) in the future, we have decided to have our tokenizer accommodate both English and Japanese from the start. For this reason, we decided to double the vocabulary size to 64000, which then was close to filling up the available token ID space of 16 bits, so we went all the way to a vocabulary size of 65535 tokens. During tokenizer training, I carefully balanced the training data in such a way that latin alphabet tokens of a length of at least 2 characters and Japanese language tokens take up approximately the same amount of token space. Bumping the vocabulary size up to 65535 also allows more Unicode character tokens such as emoji. For the Japanese part of tokenizer training data, we used our existing Genji training data and a comparatively smaller amount of Japanese Wikipedia.
We have manually added tokens for certain multi-whitespace strings and have set up the tokenizer in such a way that numbers are tokenized digit by digit. Tokenizing numbers digit by digit may slightly reduce compression ratio in number heavy texts, but it will also allow the LLM to more effectively learn how to handle numeric values.
Considering the possible benefits of Unigram tokenizers, we started out by training a Unigram tokenizer. This took multiple runs of rebalancing the dataset between languages and also between the different subsets of our main datasets to get the token distribution to look the way we want. Each Unigram training run took a few hours. For the sake of comparison, we also trained a BPE model, which again required multiple runs to rebalance the dataset. BPE runs ended up much slower, taking nearly a whole day.
Both tokenizers were then evaluated on a held-out part of the dataset. The idea was that, if the compression ratios are similar or Unigram is only slightly worse, we would use the Unigram tokenizer to benefit from the more natural word segmentation. We found that the BPE tokenizer has a 25-29% higher compression ratio on the largest parts of our English language dataset. This unexpectedly large gap in performance led us to choose the BPE tokenizer over the Unigram one and also explains the continuing prevalence of BPE tokenizers for LLMs. We also compared the compression ratio of our tokenizer to the LLaMa tokenizer, which is a sentencepiece based BPE tokenizer with a 32000 token vocabulary. In comparison to the LLaMa tokenizer, we find our tokenizer to achieve a 7-19% higher compression ratio on the largest parts of our English language dataset.
Finally, I would like to give some stats about token distribution. Our tokenizer contains 28586 tokens made up of latin alphabet characters with a minimum length of two. Tokens with a leading space are included in this. It contains 18123 Japanese tokens longer than a single character and 9626 tokens for Japanese and Chinese characters, which cannot be easily told apart for the sake of these stats due to the Unicode han unification. 9200 other tokens are included. This space is taken up mostly by Unicode characters such as emoji.
For comparison, the LLaMa tokenizer contains 23964 tokens made up only of latin alphabet characters, no Japanese token longer than a single character, 836 Japanese characters and 7224 other tokens.
## JavaScript implementation
The JavaScript implementation used by the NovelAI frontend can be found [here](https://github.com/NovelAI/nai-js-tokenizer).
## V2
For [V2](https://huggingface.co/NovelAI/nerdstash-tokenizer-v2/), the original digit special tokens were replaced with English contractions. Digits are therefore encoded using the corresponding byte tokens instead.
## Example usage with transformers
Since it seems to be the most up-to-date class for using sentencepiece tokenizers in transformers, this tokenizer uses the `LlamaTokenizer` class. Note that the `LlamaTokenizerFast` class is not supported. `AutoTokenizer` selects the fast version and is also not supported.
```python
from transformers import LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("NovelAI/nerdstash-tokenizer-v2")
print(tokenizer.encode("Hello, world!"))
```
## Example usage with sentencepiece
```python
import sentencepiece as spm
s = spm.SentencePieceProcessor(model_file='tokenizer.model')
text = "The quick brown fox jumps over the goblin."
print("Text:", text)
print("Token IDs:", s.encode(text))
# Token IDs: [541, 1939, 6573, 22820, 22734, 712, 336, 34477, 49230]
print("Readable tokens:", s.encode(text, out_type=str))
# Readable tokens: ['The', '▁quick', '▁brown', '▁fox', '▁jumps', '▁over', '▁the', '▁goblin', '.']
```
## License
The tokenizer is licensed under the GNU General Public License, version 2.
|
asandhir/Amrit_billsum_model2
|
asandhir
| 2023-08-02T22:39:31Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-02T22:26:19Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: Amrit_billsum_model2
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1912
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Amrit_billsum_model2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3921
- Rouge1: 0.1912
- Rouge2: 0.0871
- Rougel: 0.1597
- Rougelsum: 0.1598
- Gen Len: 19.0
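A hedged usage sketch with the summarization pipeline; the `summarize:` prefix follows the usual T5 convention, and the generation lengths and example text are illustrative, not values from the card.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="asandhir/Amrit_billsum_model2")
text = "summarize: The bill requires the state agency to publish an annual report on program outcomes and funding allocations."
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```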
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.4589 | 0.1558 | 0.0555 | 0.1294 | 0.1295 | 19.0 |
| No log | 2.0 | 124 | 2.4180 | 0.1849 | 0.0805 | 0.1539 | 0.1541 | 19.0 |
| No log | 3.0 | 186 | 2.3985 | 0.1903 | 0.0855 | 0.1583 | 0.1585 | 19.0 |
| No log | 4.0 | 248 | 2.3921 | 0.1912 | 0.0871 | 0.1597 | 0.1598 | 19.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
NasimB/bnc_spoken-cbt-notm-log-rarity-seed
|
NasimB
| 2023-08-02T22:34:53Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-02T20:12:22Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bnc_spoken-cbt-notm-log-rarity-seed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bnc_spoken-cbt-notm-log-rarity-seed
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
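A hedged sketch mapping the hyperparameters above onto `TrainingArguments`; the output directory and the rest of the `Trainer` setup are assumptions.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bnc_spoken-cbt-notm-log-rarity-seed",  # assumed name
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,  # "Native AMP" mixed precision
)
```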
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3631 | 0.29 | 500 | 5.3434 |
| 5.0481 | 0.59 | 1000 | 4.9361 |
| 4.7177 | 0.88 | 1500 | 4.7045 |
| 4.4593 | 1.17 | 2000 | 4.5683 |
| 4.3118 | 1.46 | 2500 | 4.4484 |
| 4.2082 | 1.76 | 3000 | 4.3453 |
| 4.0886 | 2.05 | 3500 | 4.2725 |
| 3.9027 | 2.34 | 4000 | 4.2271 |
| 3.8826 | 2.63 | 4500 | 4.1747 |
| 3.8396 | 2.93 | 5000 | 4.1241 |
| 3.645 | 3.22 | 5500 | 4.1184 |
| 3.5953 | 3.51 | 6000 | 4.0930 |
| 3.5812 | 3.81 | 6500 | 4.0604 |
| 3.4807 | 4.1 | 7000 | 4.0629 |
| 3.3286 | 4.39 | 7500 | 4.0557 |
| 3.325 | 4.68 | 8000 | 4.0463 |
| 3.3134 | 4.98 | 8500 | 4.0327 |
| 3.1583 | 5.27 | 9000 | 4.0488 |
| 3.1455 | 5.56 | 9500 | 4.0474 |
| 3.1459 | 5.85 | 10000 | 4.0466 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
avidoavid/RWKV-7b-finetuned
|
avidoavid
| 2023-08-02T22:26:54Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:RWKV/rwkv-raven-7b",
"base_model:finetune:RWKV/rwkv-raven-7b",
"region:us"
] | null | 2023-07-28T03:11:38Z |
---
base_model: RWKV/rwkv-raven-7b
tags:
- generated_from_trainer
model-index:
- name: RWKV-7b-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RWKV-7b-finetuned
This model is a fine-tuned version of [RWKV/rwkv-raven-7b](https://huggingface.co/RWKV/rwkv-raven-7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1091 | 1.0 | 1 | 1.0925 |
| 1.1065 | 2.0 | 2 | 1.0765 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Isaacks/segformer-finetuned-ihc
|
Isaacks
| 2023-08-02T22:23:53Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"segformer",
"image-segmentation",
"vision",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-08-02T17:09:39Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- image-segmentation
- vision
- generated_from_trainer
model-index:
- name: segformer-finetuned-ihc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-finetuned-ihc
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the Isaacks/ihc_slide_tissue dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0326
- eval_mean_iou: 0.0
- eval_mean_accuracy: nan
- eval_overall_accuracy: nan
- eval_accuracy_background: nan
- eval_accuracy_tissue: nan
- eval_iou_background: 0.0
- eval_iou_tissue: 0.0
- eval_runtime: 19.1281
- eval_samples_per_second: 0.784
- eval_steps_per_second: 0.105
- epoch: 9.15
- step: 183
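A hedged inference sketch; the image path is a placeholder and the label mapping is not documented in the card.
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

name = "Isaacks/segformer-finetuned-ihc"
processor = AutoImageProcessor.from_pretrained(name)
model = SegformerForSemanticSegmentation.from_pretrained(name)

image = Image.open("slide_tile.png")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # shape (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]           # per-pixel class indices
print(pred.shape)
```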
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
Evan-Lin/Bart-cnn-Yelp-abs-attractive1-keywordmax1epoch0
|
Evan-Lin
| 2023-08-02T22:10:56Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-08-02T22:04:34Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Evan-Lin/Bart-cnn-Yelp-abs-attractive1-keywordmax1epoch0")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-cnn-Yelp-abs-attractive1-keywordmax1epoch0")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/Bart-cnn-Yelp-abs-attractive1-keywordmax1epoch0")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
hs1710/ppo-Huggy
|
hs1710
| 2023-08-02T22:06:25Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-02T22:06:15Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: hs1710/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sshalini6/small-5e4-r8-a32-d0.1-q-v-fc1-fc2
|
sshalini6
| 2023-08-02T21:50:47Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T21:50:47Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
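A hedged sketch reconstructing the quantization settings above as a `BitsAndBytesConfig`; the base model this adapter was trained on is not stated in the card, so it is left out here.
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```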
### Framework versions
- PEFT 0.5.0.dev0
|
simonycl/bert-base-uncased-sst-2-32-13
|
simonycl
| 2023-08-02T21:46:45Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-02T08:03:40Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-sst-2-32-13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sst-2-32-13
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5606
- Accuracy: 0.625
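A hedged usage sketch loading the checkpoint directly; label names are not documented in the card, so the class indices are reported as-is.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "simonycl/bert-base-uncased-sst-2-32-13"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("A moving and beautifully shot film.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```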
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.6827 | 0.6875 |
| No log | 2.0 | 4 | 0.6826 | 0.6875 |
| No log | 3.0 | 6 | 0.6822 | 0.7031 |
| No log | 4.0 | 8 | 0.6818 | 0.6719 |
| 0.6948 | 5.0 | 10 | 0.6812 | 0.6719 |
| 0.6948 | 6.0 | 12 | 0.6805 | 0.6406 |
| 0.6948 | 7.0 | 14 | 0.6797 | 0.6406 |
| 0.6948 | 8.0 | 16 | 0.6789 | 0.6406 |
| 0.6948 | 9.0 | 18 | 0.6779 | 0.6562 |
| 0.6864 | 10.0 | 20 | 0.6768 | 0.6562 |
| 0.6864 | 11.0 | 22 | 0.6755 | 0.6562 |
| 0.6864 | 12.0 | 24 | 0.6741 | 0.6875 |
| 0.6864 | 13.0 | 26 | 0.6726 | 0.6719 |
| 0.6864 | 14.0 | 28 | 0.6710 | 0.6719 |
| 0.6517 | 15.0 | 30 | 0.6694 | 0.7031 |
| 0.6517 | 16.0 | 32 | 0.6676 | 0.6875 |
| 0.6517 | 17.0 | 34 | 0.6657 | 0.6719 |
| 0.6517 | 18.0 | 36 | 0.6643 | 0.625 |
| 0.6517 | 19.0 | 38 | 0.6636 | 0.6094 |
| 0.6027 | 20.0 | 40 | 0.6642 | 0.5938 |
| 0.6027 | 21.0 | 42 | 0.6632 | 0.5781 |
| 0.6027 | 22.0 | 44 | 0.6607 | 0.5781 |
| 0.6027 | 23.0 | 46 | 0.6582 | 0.6094 |
| 0.6027 | 24.0 | 48 | 0.6562 | 0.6406 |
| 0.4998 | 25.0 | 50 | 0.6546 | 0.6094 |
| 0.4998 | 26.0 | 52 | 0.6503 | 0.5938 |
| 0.4998 | 27.0 | 54 | 0.6450 | 0.6094 |
| 0.4998 | 28.0 | 56 | 0.6395 | 0.6094 |
| 0.4998 | 29.0 | 58 | 0.6362 | 0.5938 |
| 0.3593 | 30.0 | 60 | 0.6380 | 0.5938 |
| 0.3593 | 31.0 | 62 | 0.6361 | 0.5938 |
| 0.3593 | 32.0 | 64 | 0.6348 | 0.5938 |
| 0.3593 | 33.0 | 66 | 0.6327 | 0.625 |
| 0.3593 | 34.0 | 68 | 0.6301 | 0.6094 |
| 0.2483 | 35.0 | 70 | 0.6347 | 0.6094 |
| 0.2483 | 36.0 | 72 | 0.6401 | 0.5938 |
| 0.2483 | 37.0 | 74 | 0.6468 | 0.5781 |
| 0.2483 | 38.0 | 76 | 0.6533 | 0.5781 |
| 0.2483 | 39.0 | 78 | 0.6600 | 0.5938 |
| 0.1735 | 40.0 | 80 | 0.6621 | 0.5938 |
| 0.1735 | 41.0 | 82 | 0.6652 | 0.5938 |
| 0.1735 | 42.0 | 84 | 0.6745 | 0.6094 |
| 0.1735 | 43.0 | 86 | 0.6849 | 0.6094 |
| 0.1735 | 44.0 | 88 | 0.6956 | 0.5938 |
| 0.111 | 45.0 | 90 | 0.7087 | 0.5938 |
| 0.111 | 46.0 | 92 | 0.7238 | 0.5938 |
| 0.111 | 47.0 | 94 | 0.7376 | 0.5938 |
| 0.111 | 48.0 | 96 | 0.7506 | 0.5938 |
| 0.111 | 49.0 | 98 | 0.7646 | 0.6094 |
| 0.0691 | 50.0 | 100 | 0.7817 | 0.6094 |
| 0.0691 | 51.0 | 102 | 0.8015 | 0.625 |
| 0.0691 | 52.0 | 104 | 0.8277 | 0.625 |
| 0.0691 | 53.0 | 106 | 0.8582 | 0.625 |
| 0.0691 | 54.0 | 108 | 0.8849 | 0.625 |
| 0.0395 | 55.0 | 110 | 0.9094 | 0.625 |
| 0.0395 | 56.0 | 112 | 0.9309 | 0.625 |
| 0.0395 | 57.0 | 114 | 0.9525 | 0.625 |
| 0.0395 | 58.0 | 116 | 0.9740 | 0.6094 |
| 0.0395 | 59.0 | 118 | 0.9959 | 0.6094 |
| 0.0213 | 60.0 | 120 | 1.0209 | 0.6094 |
| 0.0213 | 61.0 | 122 | 1.0452 | 0.625 |
| 0.0213 | 62.0 | 124 | 1.0680 | 0.625 |
| 0.0213 | 63.0 | 126 | 1.0908 | 0.625 |
| 0.0213 | 64.0 | 128 | 1.1149 | 0.6094 |
| 0.0129 | 65.0 | 130 | 1.1381 | 0.625 |
| 0.0129 | 66.0 | 132 | 1.1590 | 0.625 |
| 0.0129 | 67.0 | 134 | 1.1787 | 0.625 |
| 0.0129 | 68.0 | 136 | 1.1960 | 0.625 |
| 0.0129 | 69.0 | 138 | 1.2125 | 0.625 |
| 0.0093 | 70.0 | 140 | 1.2267 | 0.625 |
| 0.0093 | 71.0 | 142 | 1.2399 | 0.625 |
| 0.0093 | 72.0 | 144 | 1.2516 | 0.625 |
| 0.0093 | 73.0 | 146 | 1.2626 | 0.625 |
| 0.0093 | 74.0 | 148 | 1.2726 | 0.6406 |
| 0.0071 | 75.0 | 150 | 1.2825 | 0.6406 |
| 0.0071 | 76.0 | 152 | 1.2921 | 0.625 |
| 0.0071 | 77.0 | 154 | 1.3016 | 0.625 |
| 0.0071 | 78.0 | 156 | 1.3104 | 0.625 |
| 0.0071 | 79.0 | 158 | 1.3177 | 0.625 |
| 0.0059 | 80.0 | 160 | 1.3243 | 0.625 |
| 0.0059 | 81.0 | 162 | 1.3311 | 0.625 |
| 0.0059 | 82.0 | 164 | 1.3377 | 0.625 |
| 0.0059 | 83.0 | 166 | 1.3446 | 0.625 |
| 0.0059 | 84.0 | 168 | 1.3519 | 0.625 |
| 0.0051 | 85.0 | 170 | 1.3590 | 0.625 |
| 0.0051 | 86.0 | 172 | 1.3662 | 0.625 |
| 0.0051 | 87.0 | 174 | 1.3731 | 0.625 |
| 0.0051 | 88.0 | 176 | 1.3801 | 0.625 |
| 0.0051 | 89.0 | 178 | 1.3867 | 0.625 |
| 0.0045 | 90.0 | 180 | 1.3929 | 0.625 |
| 0.0045 | 91.0 | 182 | 1.3988 | 0.625 |
| 0.0045 | 92.0 | 184 | 1.4048 | 0.625 |
| 0.0045 | 93.0 | 186 | 1.4110 | 0.625 |
| 0.0045 | 94.0 | 188 | 1.4171 | 0.625 |
| 0.0042 | 95.0 | 190 | 1.4231 | 0.625 |
| 0.0042 | 96.0 | 192 | 1.4290 | 0.625 |
| 0.0042 | 97.0 | 194 | 1.4346 | 0.625 |
| 0.0042 | 98.0 | 196 | 1.4401 | 0.625 |
| 0.0042 | 99.0 | 198 | 1.4454 | 0.625 |
| 0.0037 | 100.0 | 200 | 1.4506 | 0.625 |
| 0.0037 | 101.0 | 202 | 1.4555 | 0.625 |
| 0.0037 | 102.0 | 204 | 1.4604 | 0.625 |
| 0.0037 | 103.0 | 206 | 1.4650 | 0.625 |
| 0.0037 | 104.0 | 208 | 1.4690 | 0.625 |
| 0.0034 | 105.0 | 210 | 1.4728 | 0.625 |
| 0.0034 | 106.0 | 212 | 1.4765 | 0.625 |
| 0.0034 | 107.0 | 214 | 1.4802 | 0.625 |
| 0.0034 | 108.0 | 216 | 1.4836 | 0.625 |
| 0.0034 | 109.0 | 218 | 1.4870 | 0.625 |
| 0.0033 | 110.0 | 220 | 1.4903 | 0.625 |
| 0.0033 | 111.0 | 222 | 1.4936 | 0.625 |
| 0.0033 | 112.0 | 224 | 1.4969 | 0.625 |
| 0.0033 | 113.0 | 226 | 1.5002 | 0.625 |
| 0.0033 | 114.0 | 228 | 1.5036 | 0.625 |
| 0.0031 | 115.0 | 230 | 1.5069 | 0.625 |
| 0.0031 | 116.0 | 232 | 1.5100 | 0.625 |
| 0.0031 | 117.0 | 234 | 1.5130 | 0.625 |
| 0.0031 | 118.0 | 236 | 1.5161 | 0.625 |
| 0.0031 | 119.0 | 238 | 1.5190 | 0.625 |
| 0.003 | 120.0 | 240 | 1.5216 | 0.625 |
| 0.003 | 121.0 | 242 | 1.5242 | 0.625 |
| 0.003 | 122.0 | 244 | 1.5269 | 0.625 |
| 0.003 | 123.0 | 246 | 1.5295 | 0.625 |
| 0.003 | 124.0 | 248 | 1.5321 | 0.625 |
| 0.0028 | 125.0 | 250 | 1.5345 | 0.625 |
| 0.0028 | 126.0 | 252 | 1.5367 | 0.625 |
| 0.0028 | 127.0 | 254 | 1.5386 | 0.625 |
| 0.0028 | 128.0 | 256 | 1.5405 | 0.625 |
| 0.0028 | 129.0 | 258 | 1.5422 | 0.625 |
| 0.0027 | 130.0 | 260 | 1.5438 | 0.625 |
| 0.0027 | 131.0 | 262 | 1.5453 | 0.625 |
| 0.0027 | 132.0 | 264 | 1.5468 | 0.625 |
| 0.0027 | 133.0 | 266 | 1.5482 | 0.625 |
| 0.0027 | 134.0 | 268 | 1.5495 | 0.625 |
| 0.0027 | 135.0 | 270 | 1.5507 | 0.625 |
| 0.0027 | 136.0 | 272 | 1.5518 | 0.625 |
| 0.0027 | 137.0 | 274 | 1.5529 | 0.625 |
| 0.0027 | 138.0 | 276 | 1.5539 | 0.625 |
| 0.0027 | 139.0 | 278 | 1.5549 | 0.625 |
| 0.0026 | 140.0 | 280 | 1.5557 | 0.625 |
| 0.0026 | 141.0 | 282 | 1.5565 | 0.625 |
| 0.0026 | 142.0 | 284 | 1.5573 | 0.625 |
| 0.0026 | 143.0 | 286 | 1.5580 | 0.625 |
| 0.0026 | 144.0 | 288 | 1.5587 | 0.625 |
| 0.0025 | 145.0 | 290 | 1.5593 | 0.625 |
| 0.0025 | 146.0 | 292 | 1.5597 | 0.625 |
| 0.0025 | 147.0 | 294 | 1.5601 | 0.625 |
| 0.0025 | 148.0 | 296 | 1.5603 | 0.625 |
| 0.0025 | 149.0 | 298 | 1.5605 | 0.625 |
| 0.0026 | 150.0 | 300 | 1.5606 | 0.625 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
|
Serjssv/speecht5_finetuned_voxpopuli_it-v_1
|
Serjssv
| 2023-08-02T21:44:32Z | 97 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"text-to-speech",
"it",
"dataset:voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-08-02T20:58:12Z |
---
license: mit
base_model: Serjssv/speecht5_finetuned_voxpopuli_nl-v_1
tags:
- text-to-speech
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_it-v_1
results: []
language:
- it
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_it-v_1
This model is a fine-tuned version of [Serjssv/speecht5_finetuned_voxpopuli_nl-v_1](https://huggingface.co/Serjssv/speecht5_finetuned_voxpopuli_nl-v_1) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4967
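A minimal inference sketch, assuming the standard SpeechT5 API in `transformers` and using a random placeholder speaker embedding (a real 512-dimensional x-vector would normally be supplied):

```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "Serjssv/speecht5_finetuned_voxpopuli_it-v_1"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Buongiorno a tutti.", return_tensors="pt")
# Placeholder speaker embedding (assumption); use a real x-vector for natural-sounding speech
speaker_embeddings = torch.randn(1, 512)
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```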
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5249 | 10.58 | 1000 | 0.4967 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
josephamess/llama-2-7bn-MultiChoiceFineTuned-v2
|
josephamess
| 2023-08-02T21:29:09Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T17:02:06Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a matching `BitsAndBytesConfig` sketch is shown after the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
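The values above map onto a `transformers` `BitsAndBytesConfig` roughly as follows; the base-model id is an assumption, since it is not stated in this card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Base-model id is an assumption; swap in the checkpoint the adapter was trained from
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "josephamess/llama-2-7bn-MultiChoiceFineTuned-v2")
```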
### Framework versions
- PEFT 0.5.0.dev0
|
venkatasg/lil-bevo
|
venkatasg
| 2023-08-02T21:28:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"fill-mask",
"babylm",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-07-19T20:51:21Z |
---
license: mit
language:
- en
tags:
- babylm
---
# Lil-Bevo
Lil-Bevo is UT Austin's submission to the BabyLM challenge, specifically the *strict-small* track.
[Link to GitHub Repo](https://github.com/venkatasg/Lil-Bevo)
## TLDR:
- Unigram tokenizer trained on 10M BabyLM tokens plus MAESTRO dataset for a vocab size of 16k.
- `deberta-small-v3` trained on mixture of MAESTRO and 10M tokens for 5 epochs.
- Model continues training for 50 epochs on 10M tokens with sequence length of 128.
- Model is trained for 2 epochs with targeted linguistic masking with sequence length of 512.
This README will be updated with more details soon.
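A minimal usage sketch, assuming the checkpoint works with the standard `fill-mask` pipeline (the example sentence is arbitrary):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="venkatasg/lil-bevo")
print(unmasker("The dog chased the [MASK]."))
```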
|
ailabturkiye/firatsobutay
|
ailabturkiye
| 2023-08-02T21:19:05Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-08-02T21:08:25Z |
---
license: openrail
language:
- tr
tags:
- music
---
A voice model I made with Fırat Sobutay's voice, collected from his videos.
|
stillerman/this-rug-does-not-exist
|
stillerman
| 2023-08-02T21:17:29Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-02T20:32:03Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - stillerman/this-rug-does-not-exist
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the stillerman/rugs-1.9k-downloaded dataset. You can find some example images below.




|
shtif/Reinforce-CartPole
|
shtif
| 2023-08-02T21:17:27Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T21:17:18Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 483.10 +/- 33.80
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
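For reference, a minimal sketch of the Monte-Carlo policy-gradient (REINFORCE) loss such an agent is trained with; the exact course implementation may differ, and the per-episode `log_probs`/`rewards` collection is assumed to happen elsewhere.

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    # Discounted return G_t for every step of one episode
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.tensor(returns)
    # Normalizing returns is a common stabilization trick
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    # Gradient ascent on sum_t log pi(a_t|s_t) * G_t == descent on its negative
    return -(torch.stack(log_probs) * returns).sum()
```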
|
chengyineng/bloom_prompt_tuning_1691010665.167542
|
chengyineng
| 2023-08-02T21:17:13Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T21:17:12Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
simonycl/roberta-large-sst-2-32-13
|
simonycl
| 2023-08-02T21:12:09Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-02T20:56:14Z |
---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-large-sst-2-32-13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-sst-2-32-13
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4497
- Accuracy: 0.9375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
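A sketch of how these settings could be expressed as `transformers.TrainingArguments`; `output_dir` and the evaluation strategy are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-large-sst-2-32-13",  # assumption
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=150,
    evaluation_strategy="epoch",  # assumption, consistent with the per-epoch results below
)
```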
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 0.6944 | 0.5 |
| No log | 2.0 | 4 | 0.6944 | 0.5 |
| No log | 3.0 | 6 | 0.6944 | 0.5 |
| No log | 4.0 | 8 | 0.6944 | 0.5 |
| 0.7018 | 5.0 | 10 | 0.6944 | 0.5 |
| 0.7018 | 6.0 | 12 | 0.6943 | 0.5 |
| 0.7018 | 7.0 | 14 | 0.6943 | 0.5 |
| 0.7018 | 8.0 | 16 | 0.6942 | 0.5 |
| 0.7018 | 9.0 | 18 | 0.6941 | 0.5 |
| 0.7003 | 10.0 | 20 | 0.6940 | 0.5 |
| 0.7003 | 11.0 | 22 | 0.6939 | 0.5 |
| 0.7003 | 12.0 | 24 | 0.6938 | 0.5 |
| 0.7003 | 13.0 | 26 | 0.6937 | 0.5 |
| 0.7003 | 14.0 | 28 | 0.6936 | 0.5 |
| 0.6964 | 15.0 | 30 | 0.6934 | 0.5 |
| 0.6964 | 16.0 | 32 | 0.6934 | 0.5 |
| 0.6964 | 17.0 | 34 | 0.6933 | 0.5 |
| 0.6964 | 18.0 | 36 | 0.6932 | 0.5 |
| 0.6964 | 19.0 | 38 | 0.6931 | 0.5 |
| 0.7001 | 20.0 | 40 | 0.6931 | 0.5 |
| 0.7001 | 21.0 | 42 | 0.6931 | 0.5 |
| 0.7001 | 22.0 | 44 | 0.6931 | 0.5 |
| 0.7001 | 23.0 | 46 | 0.6931 | 0.5 |
| 0.7001 | 24.0 | 48 | 0.6931 | 0.5 |
| 0.6924 | 25.0 | 50 | 0.6931 | 0.5 |
| 0.6924 | 26.0 | 52 | 0.6931 | 0.5 |
| 0.6924 | 27.0 | 54 | 0.6931 | 0.5 |
| 0.6924 | 28.0 | 56 | 0.6930 | 0.5 |
| 0.6924 | 29.0 | 58 | 0.6930 | 0.5 |
| 0.6985 | 30.0 | 60 | 0.6930 | 0.5 |
| 0.6985 | 31.0 | 62 | 0.6930 | 0.5 |
| 0.6985 | 32.0 | 64 | 0.6929 | 0.5 |
| 0.6985 | 33.0 | 66 | 0.6927 | 0.5 |
| 0.6985 | 34.0 | 68 | 0.6925 | 0.5 |
| 0.6968 | 35.0 | 70 | 0.6924 | 0.5 |
| 0.6968 | 36.0 | 72 | 0.6923 | 0.5 |
| 0.6968 | 37.0 | 74 | 0.6922 | 0.5 |
| 0.6968 | 38.0 | 76 | 0.6922 | 0.5 |
| 0.6968 | 39.0 | 78 | 0.6920 | 0.5 |
| 0.6822 | 40.0 | 80 | 0.6917 | 0.5 |
| 0.6822 | 41.0 | 82 | 0.6916 | 0.5 |
| 0.6822 | 42.0 | 84 | 0.6913 | 0.5 |
| 0.6822 | 43.0 | 86 | 0.6911 | 0.5 |
| 0.6822 | 44.0 | 88 | 0.6910 | 0.5 |
| 0.6907 | 45.0 | 90 | 0.6908 | 0.5 |
| 0.6907 | 46.0 | 92 | 0.6906 | 0.5 |
| 0.6907 | 47.0 | 94 | 0.6905 | 0.5 |
| 0.6907 | 48.0 | 96 | 0.6902 | 0.5156 |
| 0.6907 | 49.0 | 98 | 0.6898 | 0.5625 |
| 0.6822 | 50.0 | 100 | 0.6892 | 0.5469 |
| 0.6822 | 51.0 | 102 | 0.6887 | 0.5938 |
| 0.6822 | 52.0 | 104 | 0.6881 | 0.5938 |
| 0.6822 | 53.0 | 106 | 0.6874 | 0.6094 |
| 0.6822 | 54.0 | 108 | 0.6868 | 0.6094 |
| 0.6744 | 55.0 | 110 | 0.6862 | 0.5938 |
| 0.6744 | 56.0 | 112 | 0.6859 | 0.5312 |
| 0.6744 | 57.0 | 114 | 0.6856 | 0.5469 |
| 0.6744 | 58.0 | 116 | 0.6873 | 0.5469 |
| 0.6744 | 59.0 | 118 | 0.6910 | 0.5469 |
| 0.6401 | 60.0 | 120 | 0.6938 | 0.5469 |
| 0.6401 | 61.0 | 122 | 0.6911 | 0.5625 |
| 0.6401 | 62.0 | 124 | 0.6835 | 0.5625 |
| 0.6401 | 63.0 | 126 | 0.6765 | 0.5781 |
| 0.6401 | 64.0 | 128 | 0.6689 | 0.5781 |
| 0.5823 | 65.0 | 130 | 0.6597 | 0.6094 |
| 0.5823 | 66.0 | 132 | 0.6514 | 0.625 |
| 0.5823 | 67.0 | 134 | 0.6459 | 0.6406 |
| 0.5823 | 68.0 | 136 | 0.6372 | 0.6562 |
| 0.5823 | 69.0 | 138 | 0.6274 | 0.6562 |
| 0.5265 | 70.0 | 140 | 0.6163 | 0.6875 |
| 0.5265 | 71.0 | 142 | 0.6018 | 0.7188 |
| 0.5265 | 72.0 | 144 | 0.5853 | 0.7812 |
| 0.5265 | 73.0 | 146 | 0.5600 | 0.7812 |
| 0.5265 | 74.0 | 148 | 0.5138 | 0.8125 |
| 0.4305 | 75.0 | 150 | 0.4514 | 0.8594 |
| 0.4305 | 76.0 | 152 | 0.3753 | 0.9219 |
| 0.4305 | 77.0 | 154 | 0.3197 | 0.9375 |
| 0.4305 | 78.0 | 156 | 0.2687 | 0.9375 |
| 0.4305 | 79.0 | 158 | 0.2246 | 0.9531 |
| 0.2335 | 80.0 | 160 | 0.2019 | 0.9219 |
| 0.2335 | 81.0 | 162 | 0.1977 | 0.9219 |
| 0.2335 | 82.0 | 164 | 0.1741 | 0.9375 |
| 0.2335 | 83.0 | 166 | 0.1468 | 0.9375 |
| 0.2335 | 84.0 | 168 | 0.1355 | 0.9688 |
| 0.0918 | 85.0 | 170 | 0.1447 | 0.9688 |
| 0.0918 | 86.0 | 172 | 0.1628 | 0.9688 |
| 0.0918 | 87.0 | 174 | 0.2077 | 0.9531 |
| 0.0918 | 88.0 | 176 | 0.2623 | 0.9375 |
| 0.0918 | 89.0 | 178 | 0.2854 | 0.9375 |
| 0.0132 | 90.0 | 180 | 0.3076 | 0.9375 |
| 0.0132 | 91.0 | 182 | 0.2989 | 0.9375 |
| 0.0132 | 92.0 | 184 | 0.2839 | 0.9531 |
| 0.0132 | 93.0 | 186 | 0.2756 | 0.9531 |
| 0.0132 | 94.0 | 188 | 0.2669 | 0.9531 |
| 0.0035 | 95.0 | 190 | 0.2414 | 0.9531 |
| 0.0035 | 96.0 | 192 | 0.2353 | 0.9375 |
| 0.0035 | 97.0 | 194 | 0.2482 | 0.9531 |
| 0.0035 | 98.0 | 196 | 0.2578 | 0.9375 |
| 0.0035 | 99.0 | 198 | 0.2755 | 0.9375 |
| 0.0013 | 100.0 | 200 | 0.2956 | 0.9375 |
| 0.0013 | 101.0 | 202 | 0.3133 | 0.9531 |
| 0.0013 | 102.0 | 204 | 0.3293 | 0.9531 |
| 0.0013 | 103.0 | 206 | 0.3417 | 0.9531 |
| 0.0013 | 104.0 | 208 | 0.3510 | 0.9531 |
| 0.0005 | 105.0 | 210 | 0.3616 | 0.9531 |
| 0.0005 | 106.0 | 212 | 0.3694 | 0.9531 |
| 0.0005 | 107.0 | 214 | 0.3754 | 0.9531 |
| 0.0005 | 108.0 | 216 | 0.3806 | 0.9531 |
| 0.0005 | 109.0 | 218 | 0.3850 | 0.9531 |
| 0.0004 | 110.0 | 220 | 0.3890 | 0.9531 |
| 0.0004 | 111.0 | 222 | 0.3924 | 0.9531 |
| 0.0004 | 112.0 | 224 | 0.3956 | 0.9531 |
| 0.0004 | 113.0 | 226 | 0.3986 | 0.9531 |
| 0.0004 | 114.0 | 228 | 0.4011 | 0.9531 |
| 0.0003 | 115.0 | 230 | 0.4034 | 0.9531 |
| 0.0003 | 116.0 | 232 | 0.4056 | 0.9531 |
| 0.0003 | 117.0 | 234 | 0.4076 | 0.9531 |
| 0.0003 | 118.0 | 236 | 0.4118 | 0.9531 |
| 0.0003 | 119.0 | 238 | 0.4199 | 0.9531 |
| 0.0003 | 120.0 | 240 | 0.4298 | 0.9375 |
| 0.0003 | 121.0 | 242 | 0.4401 | 0.9375 |
| 0.0003 | 122.0 | 244 | 0.4495 | 0.9375 |
| 0.0003 | 123.0 | 246 | 0.4602 | 0.9375 |
| 0.0003 | 124.0 | 248 | 0.4687 | 0.9375 |
| 0.0003 | 125.0 | 250 | 0.4755 | 0.9375 |
| 0.0003 | 126.0 | 252 | 0.4813 | 0.9375 |
| 0.0003 | 127.0 | 254 | 0.4855 | 0.9375 |
| 0.0003 | 128.0 | 256 | 0.4896 | 0.9375 |
| 0.0003 | 129.0 | 258 | 0.4940 | 0.9375 |
| 0.0002 | 130.0 | 260 | 0.4967 | 0.9375 |
| 0.0002 | 131.0 | 262 | 0.4963 | 0.9375 |
| 0.0002 | 132.0 | 264 | 0.4903 | 0.9375 |
| 0.0002 | 133.0 | 266 | 0.4861 | 0.9375 |
| 0.0002 | 134.0 | 268 | 0.4831 | 0.9375 |
| 0.0003 | 135.0 | 270 | 0.4804 | 0.9375 |
| 0.0003 | 136.0 | 272 | 0.4780 | 0.9375 |
| 0.0003 | 137.0 | 274 | 0.4761 | 0.9375 |
| 0.0003 | 138.0 | 276 | 0.4721 | 0.9375 |
| 0.0003 | 139.0 | 278 | 0.4686 | 0.9375 |
| 0.0002 | 140.0 | 280 | 0.4646 | 0.9375 |
| 0.0002 | 141.0 | 282 | 0.4593 | 0.9375 |
| 0.0002 | 142.0 | 284 | 0.4542 | 0.9375 |
| 0.0002 | 143.0 | 286 | 0.4495 | 0.9375 |
| 0.0002 | 144.0 | 288 | 0.4472 | 0.9375 |
| 0.0002 | 145.0 | 290 | 0.4465 | 0.9375 |
| 0.0002 | 146.0 | 292 | 0.4467 | 0.9375 |
| 0.0002 | 147.0 | 294 | 0.4469 | 0.9375 |
| 0.0002 | 148.0 | 296 | 0.4474 | 0.9375 |
| 0.0002 | 149.0 | 298 | 0.4483 | 0.9375 |
| 0.0002 | 150.0 | 300 | 0.4497 | 0.9375 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
|
retrieval-bar/google_flan-t5-xl_mbe_hl_passage
|
retrieval-bar
| 2023-08-02T21:09:48Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T21:09:46Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
kiddoos/totoro
|
kiddoos
| 2023-08-02T20:59:38Z | 11 | 0 |
diffusers
|
[
"diffusers",
"LoRA",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"region:us"
] |
text-to-image
| 2023-07-23T23:53:11Z |
---
pipeline_tag: text-to-image
tags:
- LoRA
- stable-diffusion
- diffusers
- stable-diffusion-diffusers
---
# Totoro LoRA text2image fine-tuning
These are LoRA adaptation weights for nitrosocke/Ghibli-Diffusion.
```python
import torch
from diffusers import StableDiffusionPipeline

# Base model these LoRA weights were trained on
model_id = 'nitrosocke/Ghibli-Diffusion'

pipe = StableDiffusionPipeline.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    safety_checker=None,
)
pipe.enable_xformers_memory_efficient_attention()
# Local directory containing the LoRA weights downloaded from this repo
pipe.load_lora_weights('./lora_weights')
pipe = pipe.to('cuda')

image = pipe(
    prompt='Totoro',
    num_inference_steps=66,
    guidance_scale=6.9,
    cross_attention_kwargs={'scale': 0.6},
).images[0]
```










|
simonycl/roberta-large-sst-2-16-13
|
simonycl
| 2023-08-02T20:55:24Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-02T07:50:50Z |
---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-large-sst-2-16-13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-sst-2-16-13
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3222
- Accuracy: 0.8438
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.7045 | 0.5 |
| No log | 2.0 | 2 | 0.7045 | 0.5 |
| No log | 3.0 | 3 | 0.7045 | 0.5 |
| No log | 4.0 | 4 | 0.7045 | 0.5 |
| No log | 5.0 | 5 | 0.7045 | 0.5 |
| No log | 6.0 | 6 | 0.7045 | 0.5 |
| No log | 7.0 | 7 | 0.7044 | 0.5 |
| No log | 8.0 | 8 | 0.7044 | 0.5 |
| No log | 9.0 | 9 | 0.7044 | 0.5 |
| 0.7125 | 10.0 | 10 | 0.7043 | 0.5 |
| 0.7125 | 11.0 | 11 | 0.7043 | 0.5 |
| 0.7125 | 12.0 | 12 | 0.7042 | 0.5 |
| 0.7125 | 13.0 | 13 | 0.7042 | 0.5 |
| 0.7125 | 14.0 | 14 | 0.7041 | 0.5 |
| 0.7125 | 15.0 | 15 | 0.7041 | 0.5 |
| 0.7125 | 16.0 | 16 | 0.7040 | 0.5 |
| 0.7125 | 17.0 | 17 | 0.7040 | 0.5 |
| 0.7125 | 18.0 | 18 | 0.7039 | 0.5 |
| 0.7125 | 19.0 | 19 | 0.7039 | 0.5 |
| 0.6935 | 20.0 | 20 | 0.7038 | 0.5 |
| 0.6935 | 21.0 | 21 | 0.7038 | 0.5 |
| 0.6935 | 22.0 | 22 | 0.7037 | 0.5 |
| 0.6935 | 23.0 | 23 | 0.7037 | 0.5 |
| 0.6935 | 24.0 | 24 | 0.7037 | 0.5 |
| 0.6935 | 25.0 | 25 | 0.7036 | 0.5 |
| 0.6935 | 26.0 | 26 | 0.7036 | 0.5 |
| 0.6935 | 27.0 | 27 | 0.7035 | 0.5 |
| 0.6935 | 28.0 | 28 | 0.7035 | 0.5 |
| 0.6935 | 29.0 | 29 | 0.7034 | 0.5 |
| 0.7031 | 30.0 | 30 | 0.7033 | 0.5 |
| 0.7031 | 31.0 | 31 | 0.7032 | 0.5 |
| 0.7031 | 32.0 | 32 | 0.7031 | 0.5 |
| 0.7031 | 33.0 | 33 | 0.7030 | 0.5 |
| 0.7031 | 34.0 | 34 | 0.7029 | 0.5 |
| 0.7031 | 35.0 | 35 | 0.7027 | 0.5 |
| 0.7031 | 36.0 | 36 | 0.7027 | 0.5 |
| 0.7031 | 37.0 | 37 | 0.7026 | 0.5 |
| 0.7031 | 38.0 | 38 | 0.7025 | 0.5 |
| 0.7031 | 39.0 | 39 | 0.7024 | 0.5 |
| 0.7021 | 40.0 | 40 | 0.7023 | 0.5 |
| 0.7021 | 41.0 | 41 | 0.7022 | 0.5 |
| 0.7021 | 42.0 | 42 | 0.7021 | 0.5 |
| 0.7021 | 43.0 | 43 | 0.7019 | 0.5 |
| 0.7021 | 44.0 | 44 | 0.7017 | 0.5 |
| 0.7021 | 45.0 | 45 | 0.7016 | 0.5 |
| 0.7021 | 46.0 | 46 | 0.7014 | 0.5 |
| 0.7021 | 47.0 | 47 | 0.7012 | 0.5 |
| 0.7021 | 48.0 | 48 | 0.7010 | 0.5 |
| 0.7021 | 49.0 | 49 | 0.7007 | 0.5 |
| 0.7009 | 50.0 | 50 | 0.7005 | 0.5 |
| 0.7009 | 51.0 | 51 | 0.7003 | 0.5 |
| 0.7009 | 52.0 | 52 | 0.7001 | 0.5 |
| 0.7009 | 53.0 | 53 | 0.6998 | 0.5 |
| 0.7009 | 54.0 | 54 | 0.6996 | 0.5 |
| 0.7009 | 55.0 | 55 | 0.6994 | 0.5 |
| 0.7009 | 56.0 | 56 | 0.6993 | 0.5 |
| 0.7009 | 57.0 | 57 | 0.6992 | 0.5 |
| 0.7009 | 58.0 | 58 | 0.6990 | 0.5 |
| 0.7009 | 59.0 | 59 | 0.6988 | 0.5 |
| 0.6866 | 60.0 | 60 | 0.6986 | 0.5 |
| 0.6866 | 61.0 | 61 | 0.6984 | 0.5 |
| 0.6866 | 62.0 | 62 | 0.6983 | 0.5 |
| 0.6866 | 63.0 | 63 | 0.6981 | 0.5 |
| 0.6866 | 64.0 | 64 | 0.6979 | 0.5 |
| 0.6866 | 65.0 | 65 | 0.6977 | 0.5 |
| 0.6866 | 66.0 | 66 | 0.6976 | 0.4688 |
| 0.6866 | 67.0 | 67 | 0.6974 | 0.4688 |
| 0.6866 | 68.0 | 68 | 0.6972 | 0.4688 |
| 0.6866 | 69.0 | 69 | 0.6970 | 0.4688 |
| 0.6818 | 70.0 | 70 | 0.6968 | 0.4688 |
| 0.6818 | 71.0 | 71 | 0.6966 | 0.4688 |
| 0.6818 | 72.0 | 72 | 0.6964 | 0.4688 |
| 0.6818 | 73.0 | 73 | 0.6961 | 0.4688 |
| 0.6818 | 74.0 | 74 | 0.6960 | 0.4688 |
| 0.6818 | 75.0 | 75 | 0.6959 | 0.4688 |
| 0.6818 | 76.0 | 76 | 0.6957 | 0.4688 |
| 0.6818 | 77.0 | 77 | 0.6955 | 0.4688 |
| 0.6818 | 78.0 | 78 | 0.6953 | 0.4688 |
| 0.6818 | 79.0 | 79 | 0.6948 | 0.4688 |
| 0.6639 | 80.0 | 80 | 0.6940 | 0.4688 |
| 0.6639 | 81.0 | 81 | 0.6932 | 0.4688 |
| 0.6639 | 82.0 | 82 | 0.6925 | 0.4688 |
| 0.6639 | 83.0 | 83 | 0.6916 | 0.4688 |
| 0.6639 | 84.0 | 84 | 0.6908 | 0.5 |
| 0.6639 | 85.0 | 85 | 0.6899 | 0.5 |
| 0.6639 | 86.0 | 86 | 0.6889 | 0.5 |
| 0.6639 | 87.0 | 87 | 0.6878 | 0.5 |
| 0.6639 | 88.0 | 88 | 0.6869 | 0.5 |
| 0.6639 | 89.0 | 89 | 0.6859 | 0.4688 |
| 0.6652 | 90.0 | 90 | 0.6850 | 0.4688 |
| 0.6652 | 91.0 | 91 | 0.6842 | 0.4688 |
| 0.6652 | 92.0 | 92 | 0.6836 | 0.5312 |
| 0.6652 | 93.0 | 93 | 0.6829 | 0.5312 |
| 0.6652 | 94.0 | 94 | 0.6818 | 0.5625 |
| 0.6652 | 95.0 | 95 | 0.6806 | 0.5938 |
| 0.6652 | 96.0 | 96 | 0.6792 | 0.5938 |
| 0.6652 | 97.0 | 97 | 0.6783 | 0.5938 |
| 0.6652 | 98.0 | 98 | 0.6771 | 0.5938 |
| 0.6652 | 99.0 | 99 | 0.6758 | 0.5938 |
| 0.621 | 100.0 | 100 | 0.6743 | 0.5938 |
| 0.621 | 101.0 | 101 | 0.6725 | 0.5938 |
| 0.621 | 102.0 | 102 | 0.6711 | 0.5938 |
| 0.621 | 103.0 | 103 | 0.6708 | 0.5938 |
| 0.621 | 104.0 | 104 | 0.6713 | 0.625 |
| 0.621 | 105.0 | 105 | 0.6693 | 0.5938 |
| 0.621 | 106.0 | 106 | 0.6605 | 0.5938 |
| 0.621 | 107.0 | 107 | 0.6499 | 0.5938 |
| 0.621 | 108.0 | 108 | 0.6439 | 0.5625 |
| 0.621 | 109.0 | 109 | 0.6434 | 0.625 |
| 0.5331 | 110.0 | 110 | 0.6439 | 0.5938 |
| 0.5331 | 111.0 | 111 | 0.6418 | 0.5625 |
| 0.5331 | 112.0 | 112 | 0.6388 | 0.5625 |
| 0.5331 | 113.0 | 113 | 0.6346 | 0.5625 |
| 0.5331 | 114.0 | 114 | 0.6307 | 0.5625 |
| 0.5331 | 115.0 | 115 | 0.6275 | 0.5625 |
| 0.5331 | 116.0 | 116 | 0.6230 | 0.5625 |
| 0.5331 | 117.0 | 117 | 0.6144 | 0.5625 |
| 0.5331 | 118.0 | 118 | 0.6092 | 0.5625 |
| 0.5331 | 119.0 | 119 | 0.6042 | 0.5938 |
| 0.4594 | 120.0 | 120 | 0.6006 | 0.5938 |
| 0.4594 | 121.0 | 121 | 0.5971 | 0.5938 |
| 0.4594 | 122.0 | 122 | 0.5906 | 0.5938 |
| 0.4594 | 123.0 | 123 | 0.5841 | 0.5938 |
| 0.4594 | 124.0 | 124 | 0.5759 | 0.6562 |
| 0.4594 | 125.0 | 125 | 0.5682 | 0.6875 |
| 0.4594 | 126.0 | 126 | 0.5566 | 0.6875 |
| 0.4594 | 127.0 | 127 | 0.5431 | 0.6875 |
| 0.4594 | 128.0 | 128 | 0.5314 | 0.6875 |
| 0.4594 | 129.0 | 129 | 0.5221 | 0.7188 |
| 0.33 | 130.0 | 130 | 0.5145 | 0.7188 |
| 0.33 | 131.0 | 131 | 0.5062 | 0.7188 |
| 0.33 | 132.0 | 132 | 0.4988 | 0.7188 |
| 0.33 | 133.0 | 133 | 0.4888 | 0.7188 |
| 0.33 | 134.0 | 134 | 0.4689 | 0.7188 |
| 0.33 | 135.0 | 135 | 0.4586 | 0.75 |
| 0.33 | 136.0 | 136 | 0.4464 | 0.7812 |
| 0.33 | 137.0 | 137 | 0.4330 | 0.7812 |
| 0.33 | 138.0 | 138 | 0.4185 | 0.7812 |
| 0.33 | 139.0 | 139 | 0.4004 | 0.8125 |
| 0.2099 | 140.0 | 140 | 0.3852 | 0.8125 |
| 0.2099 | 141.0 | 141 | 0.3724 | 0.8125 |
| 0.2099 | 142.0 | 142 | 0.3610 | 0.8125 |
| 0.2099 | 143.0 | 143 | 0.3613 | 0.8125 |
| 0.2099 | 144.0 | 144 | 0.3731 | 0.7812 |
| 0.2099 | 145.0 | 145 | 0.3655 | 0.8125 |
| 0.2099 | 146.0 | 146 | 0.3553 | 0.8125 |
| 0.2099 | 147.0 | 147 | 0.3457 | 0.8125 |
| 0.2099 | 148.0 | 148 | 0.3380 | 0.8438 |
| 0.2099 | 149.0 | 149 | 0.3315 | 0.8438 |
| 0.0894 | 150.0 | 150 | 0.3222 | 0.8438 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
|
MredK/batusv1
|
MredK
| 2023-08-02T20:49:47Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-08-02T20:45:02Z |
---
license: openrail
---
Made with a 7-minute dataset\
The training is mine\
331 epochs\
Turkish model
|
shanover/medbot-godel-large
|
shanover
| 2023-08-02T20:43:31Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2206.11309",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-01T23:38:40Z |
---
license: mit
---
Important note: this model is not properly trained and is not yet usable; it is still being experimented with.
Fine-tuned from: microsoft/GODEL-v1_1-large-seq2seq
Reference arXiv paper: https://arxiv.org/abs/2206.11309
|
Tverous/sft-trl-claim-ppo2
|
Tverous
| 2023-08-02T20:38:20Z | 23 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-29T23:52:25Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
GesturingMan/Falcon_ZS
|
GesturingMan
| 2023-08-02T20:27:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T20:18:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
simonycl/roberta-base-sst-2-16-13
|
simonycl
| 2023-08-02T20:07:18Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-02T19:46:36Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-sst-2-16-13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-sst-2-16-13
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1304
- Accuracy: 0.9688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.6964 | 0.5 |
| No log | 2.0 | 2 | 0.6964 | 0.5 |
| No log | 3.0 | 3 | 0.6964 | 0.5 |
| No log | 4.0 | 4 | 0.6964 | 0.5 |
| No log | 5.0 | 5 | 0.6964 | 0.5 |
| No log | 6.0 | 6 | 0.6964 | 0.5 |
| No log | 7.0 | 7 | 0.6964 | 0.5 |
| No log | 8.0 | 8 | 0.6964 | 0.5 |
| No log | 9.0 | 9 | 0.6964 | 0.5 |
| 0.6977 | 10.0 | 10 | 0.6964 | 0.5 |
| 0.6977 | 11.0 | 11 | 0.6963 | 0.5 |
| 0.6977 | 12.0 | 12 | 0.6963 | 0.5 |
| 0.6977 | 13.0 | 13 | 0.6963 | 0.5 |
| 0.6977 | 14.0 | 14 | 0.6963 | 0.5 |
| 0.6977 | 15.0 | 15 | 0.6962 | 0.5 |
| 0.6977 | 16.0 | 16 | 0.6962 | 0.5 |
| 0.6977 | 17.0 | 17 | 0.6962 | 0.5 |
| 0.6977 | 18.0 | 18 | 0.6962 | 0.5 |
| 0.6977 | 19.0 | 19 | 0.6961 | 0.5 |
| 0.6939 | 20.0 | 20 | 0.6961 | 0.5 |
| 0.6939 | 21.0 | 21 | 0.6961 | 0.5 |
| 0.6939 | 22.0 | 22 | 0.6960 | 0.5 |
| 0.6939 | 23.0 | 23 | 0.6960 | 0.5 |
| 0.6939 | 24.0 | 24 | 0.6959 | 0.5 |
| 0.6939 | 25.0 | 25 | 0.6959 | 0.5 |
| 0.6939 | 26.0 | 26 | 0.6958 | 0.5 |
| 0.6939 | 27.0 | 27 | 0.6958 | 0.5 |
| 0.6939 | 28.0 | 28 | 0.6958 | 0.5 |
| 0.6939 | 29.0 | 29 | 0.6957 | 0.5 |
| 0.6972 | 30.0 | 30 | 0.6957 | 0.5 |
| 0.6972 | 31.0 | 31 | 0.6956 | 0.5 |
| 0.6972 | 32.0 | 32 | 0.6956 | 0.5 |
| 0.6972 | 33.0 | 33 | 0.6955 | 0.5 |
| 0.6972 | 34.0 | 34 | 0.6954 | 0.5 |
| 0.6972 | 35.0 | 35 | 0.6954 | 0.5 |
| 0.6972 | 36.0 | 36 | 0.6953 | 0.5 |
| 0.6972 | 37.0 | 37 | 0.6953 | 0.5 |
| 0.6972 | 38.0 | 38 | 0.6952 | 0.5 |
| 0.6972 | 39.0 | 39 | 0.6951 | 0.5 |
| 0.6931 | 40.0 | 40 | 0.6950 | 0.5 |
| 0.6931 | 41.0 | 41 | 0.6950 | 0.5 |
| 0.6931 | 42.0 | 42 | 0.6949 | 0.5 |
| 0.6931 | 43.0 | 43 | 0.6948 | 0.5 |
| 0.6931 | 44.0 | 44 | 0.6947 | 0.5 |
| 0.6931 | 45.0 | 45 | 0.6947 | 0.5 |
| 0.6931 | 46.0 | 46 | 0.6946 | 0.5 |
| 0.6931 | 47.0 | 47 | 0.6945 | 0.5 |
| 0.6931 | 48.0 | 48 | 0.6944 | 0.5 |
| 0.6931 | 49.0 | 49 | 0.6944 | 0.5 |
| 0.6938 | 50.0 | 50 | 0.6943 | 0.5 |
| 0.6938 | 51.0 | 51 | 0.6942 | 0.5 |
| 0.6938 | 52.0 | 52 | 0.6941 | 0.5 |
| 0.6938 | 53.0 | 53 | 0.6941 | 0.5 |
| 0.6938 | 54.0 | 54 | 0.6940 | 0.5 |
| 0.6938 | 55.0 | 55 | 0.6939 | 0.5 |
| 0.6938 | 56.0 | 56 | 0.6938 | 0.5 |
| 0.6938 | 57.0 | 57 | 0.6937 | 0.5 |
| 0.6938 | 58.0 | 58 | 0.6936 | 0.5 |
| 0.6938 | 59.0 | 59 | 0.6935 | 0.5 |
| 0.6914 | 60.0 | 60 | 0.6934 | 0.5 |
| 0.6914 | 61.0 | 61 | 0.6933 | 0.5 |
| 0.6914 | 62.0 | 62 | 0.6932 | 0.5 |
| 0.6914 | 63.0 | 63 | 0.6931 | 0.5 |
| 0.6914 | 64.0 | 64 | 0.6930 | 0.5 |
| 0.6914 | 65.0 | 65 | 0.6929 | 0.5 |
| 0.6914 | 66.0 | 66 | 0.6928 | 0.5 |
| 0.6914 | 67.0 | 67 | 0.6926 | 0.5 |
| 0.6914 | 68.0 | 68 | 0.6925 | 0.5 |
| 0.6914 | 69.0 | 69 | 0.6924 | 0.5 |
| 0.6842 | 70.0 | 70 | 0.6923 | 0.5 |
| 0.6842 | 71.0 | 71 | 0.6921 | 0.5 |
| 0.6842 | 72.0 | 72 | 0.6920 | 0.5 |
| 0.6842 | 73.0 | 73 | 0.6918 | 0.5 |
| 0.6842 | 74.0 | 74 | 0.6917 | 0.5 |
| 0.6842 | 75.0 | 75 | 0.6915 | 0.5 |
| 0.6842 | 76.0 | 76 | 0.6914 | 0.5 |
| 0.6842 | 77.0 | 77 | 0.6912 | 0.5 |
| 0.6842 | 78.0 | 78 | 0.6910 | 0.5 |
| 0.6842 | 79.0 | 79 | 0.6908 | 0.5 |
| 0.6817 | 80.0 | 80 | 0.6906 | 0.5 |
| 0.6817 | 81.0 | 81 | 0.6904 | 0.5 |
| 0.6817 | 82.0 | 82 | 0.6902 | 0.5 |
| 0.6817 | 83.0 | 83 | 0.6900 | 0.5 |
| 0.6817 | 84.0 | 84 | 0.6897 | 0.5 |
| 0.6817 | 85.0 | 85 | 0.6895 | 0.5 |
| 0.6817 | 86.0 | 86 | 0.6892 | 0.5 |
| 0.6817 | 87.0 | 87 | 0.6889 | 0.5 |
| 0.6817 | 88.0 | 88 | 0.6886 | 0.5 |
| 0.6817 | 89.0 | 89 | 0.6882 | 0.5 |
| 0.6684 | 90.0 | 90 | 0.6879 | 0.5 |
| 0.6684 | 91.0 | 91 | 0.6875 | 0.5 |
| 0.6684 | 92.0 | 92 | 0.6870 | 0.5 |
| 0.6684 | 93.0 | 93 | 0.6866 | 0.5312 |
| 0.6684 | 94.0 | 94 | 0.6861 | 0.5 |
| 0.6684 | 95.0 | 95 | 0.6856 | 0.5 |
| 0.6684 | 96.0 | 96 | 0.6850 | 0.5 |
| 0.6684 | 97.0 | 97 | 0.6843 | 0.5938 |
| 0.6684 | 98.0 | 98 | 0.6837 | 0.7188 |
| 0.6684 | 99.0 | 99 | 0.6829 | 0.75 |
| 0.6657 | 100.0 | 100 | 0.6821 | 0.75 |
| 0.6657 | 101.0 | 101 | 0.6812 | 0.7812 |
| 0.6657 | 102.0 | 102 | 0.6802 | 0.7812 |
| 0.6657 | 103.0 | 103 | 0.6791 | 0.7812 |
| 0.6657 | 104.0 | 104 | 0.6780 | 0.7812 |
| 0.6657 | 105.0 | 105 | 0.6767 | 0.7812 |
| 0.6657 | 106.0 | 106 | 0.6752 | 0.8125 |
| 0.6657 | 107.0 | 107 | 0.6736 | 0.75 |
| 0.6657 | 108.0 | 108 | 0.6717 | 0.75 |
| 0.6657 | 109.0 | 109 | 0.6696 | 0.75 |
| 0.6423 | 110.0 | 110 | 0.6671 | 0.75 |
| 0.6423 | 111.0 | 111 | 0.6642 | 0.7812 |
| 0.6423 | 112.0 | 112 | 0.6610 | 0.8125 |
| 0.6423 | 113.0 | 113 | 0.6572 | 0.8438 |
| 0.6423 | 114.0 | 114 | 0.6528 | 0.8125 |
| 0.6423 | 115.0 | 115 | 0.6477 | 0.8125 |
| 0.6423 | 116.0 | 116 | 0.6415 | 0.7812 |
| 0.6423 | 117.0 | 117 | 0.6342 | 0.7812 |
| 0.6423 | 118.0 | 118 | 0.6262 | 0.7812 |
| 0.6423 | 119.0 | 119 | 0.6180 | 0.7812 |
| 0.574 | 120.0 | 120 | 0.6090 | 0.7812 |
| 0.574 | 121.0 | 121 | 0.5987 | 0.7812 |
| 0.574 | 122.0 | 122 | 0.5867 | 0.7812 |
| 0.574 | 123.0 | 123 | 0.5732 | 0.7812 |
| 0.574 | 124.0 | 124 | 0.5579 | 0.7812 |
| 0.574 | 125.0 | 125 | 0.5410 | 0.8125 |
| 0.574 | 126.0 | 126 | 0.5226 | 0.9062 |
| 0.574 | 127.0 | 127 | 0.5031 | 0.9062 |
| 0.574 | 128.0 | 128 | 0.4823 | 0.9062 |
| 0.574 | 129.0 | 129 | 0.4605 | 0.9062 |
| 0.4243 | 130.0 | 130 | 0.4378 | 0.9375 |
| 0.4243 | 131.0 | 131 | 0.4148 | 0.9375 |
| 0.4243 | 132.0 | 132 | 0.3925 | 0.9375 |
| 0.4243 | 133.0 | 133 | 0.3714 | 0.9375 |
| 0.4243 | 134.0 | 134 | 0.3512 | 0.9688 |
| 0.4243 | 135.0 | 135 | 0.3324 | 0.9688 |
| 0.4243 | 136.0 | 136 | 0.3139 | 0.9688 |
| 0.4243 | 137.0 | 137 | 0.2955 | 0.9688 |
| 0.4243 | 138.0 | 138 | 0.2787 | 0.9375 |
| 0.4243 | 139.0 | 139 | 0.2633 | 0.9375 |
| 0.1979 | 140.0 | 140 | 0.2484 | 0.9688 |
| 0.1979 | 141.0 | 141 | 0.2332 | 0.9688 |
| 0.1979 | 142.0 | 142 | 0.2174 | 0.9688 |
| 0.1979 | 143.0 | 143 | 0.2015 | 0.9688 |
| 0.1979 | 144.0 | 144 | 0.1867 | 0.9688 |
| 0.1979 | 145.0 | 145 | 0.1734 | 0.9375 |
| 0.1979 | 146.0 | 146 | 0.1616 | 0.9375 |
| 0.1979 | 147.0 | 147 | 0.1511 | 0.9375 |
| 0.1979 | 148.0 | 148 | 0.1424 | 0.9688 |
| 0.1979 | 149.0 | 149 | 0.1354 | 0.9688 |
| 0.0588 | 150.0 | 150 | 0.1304 | 0.9688 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.4.0
- Tokenizers 0.13.3
|
cgburgos/sdxl-1-0-base
|
cgburgos
| 2023-08-02T20:02:56Z | 38 | 7 |
diffusers
|
[
"diffusers",
"onnx",
"safetensors",
"text-to-image",
"stable-diffusion",
"arxiv:2307.01952",
"arxiv:2211.01324",
"arxiv:2108.01073",
"arxiv:2112.10752",
"license:openrail++",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2023-08-02T20:02:56Z |
---
license: openrail++
tags:
- text-to-image
- stable-diffusion
duplicated_from: stabilityai/stable-diffusion-xl-base-1.0
---
# SD-XL 1.0-base Model Card

## Model

[SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion:
In a first step, the base model is used to generate (noisy) latents,
which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps.
Note that the base model can be used as a standalone module.
Alternatively, we can use a two-stage pipeline as follows:
First, the base model is used to generate latents of the desired output size.
In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img")
to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations.
Source code is available at https://github.com/Stability-AI/generative-models .
### Model Description
- **Developed by:** Stability AI
- **Model type:** Diffusion-based text-to-image generative model
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/LICENSE.md)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses two fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main)).
- **Resources for more information:** Check out our [GitHub Repository](https://github.com/Stability-AI/generative-models) and the [SDXL report on arXiv](https://arxiv.org/abs/2307.01952).
### Model Sources
For research purposes, we recommend our `generative-models` GitHub repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference) and for which new functionalities like distillation will be added over time.
[Clipdrop](https://clipdrop.co/stable-diffusion) provides free SDXL inference.
- **Repository:** https://github.com/Stability-AI/generative-models
- **Demo:** https://clipdrop.co/stable-diffusion
## Evaluation

The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 and Stable Diffusion 1.5 and 2.1.
The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance.
### 🧨 Diffusers
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install `transformers`, `safetensors`, `accelerate`, as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16")
pipe.to("cuda")
# if using torch < 2.0
# pipe.enable_xformers_memory_efficient_attention()
prompt = "An astronaut riding a green horse"
image = pipe(prompt=prompt).images[0]
```
To use the whole base + refiner pipeline as an ensemble of experts you can run:
```py
from diffusers import DiffusionPipeline
import torch
# load both base & refiner
base = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
base.to("cuda")
refiner = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-refiner-1.0",
text_encoder_2=base.text_encoder_2,
vae=base.vae,
torch_dtype=torch.float16,
use_safetensors=True,
variant="fp16",
)
refiner.to("cuda")
# Define how many steps and what % of steps to be run on each experts (80/20) here
n_steps = 40
high_noise_frac = 0.8
prompt = "A majestic lion jumping from a big stone at night"
# run both experts
image = base(
prompt=prompt,
num_inference_steps=n_steps,
denoising_end=high_noise_frac,
output_type="latent",
).images
image = refiner(
prompt=prompt,
num_inference_steps=n_steps,
denoising_start=high_noise_frac,
image=image,
).images[0]
```
When using `torch >= 2.0`, you can improve the inference speed by 20-30% with `torch.compile`. Simply wrap the UNet with `torch.compile` before running the pipeline:
```py
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```
If you are limited by GPU VRAM, you can enable *cpu offloading* by calling `pipe.enable_model_cpu_offload`
instead of `.to("cuda")`:
```diff
- pipe.to("cuda")
+ pipe.enable_model_cpu_offload()
```
For more information on how to use Stable Diffusion XL with `diffusers`, please have a look at [the Stable Diffusion XL Docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl).
### Optimum
[Optimum](https://github.com/huggingface/optimum) provides a Stable Diffusion pipeline compatible with both [OpenVINO](https://docs.openvino.ai/latest/index.html) and [ONNX Runtime](https://onnxruntime.ai/).
#### OpenVINO
To install Optimum with the dependencies required for OpenVINO :
```bash
pip install optimum[openvino]
```
To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `OVStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, you can set `export=True`.
```diff
- from diffusers import StableDiffusionPipeline
+ from optimum.intel import OVStableDiffusionPipeline
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionPipeline.from_pretrained(model_id)
+ pipeline = OVStableDiffusionPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
```
You can find more examples (such as static reshaping and model compilation) in optimum [documentation](https://huggingface.co/docs/optimum/main/en/intel/inference#stable-diffusion-xl).
#### ONNX
To install Optimum with the dependencies required for ONNX Runtime inference :
```bash
pip install optimum[onnxruntime]
```
To load an ONNX model and run inference with ONNX Runtime, you need to replace `StableDiffusionXLPipeline` with Optimum `ORTStableDiffusionXLPipeline`. In case you want to load a PyTorch model and convert it to the ONNX format on-the-fly, you can set `export=True`.
```diff
- from diffusers import StableDiffusionPipeline
+ from optimum.onnxruntime import ORTStableDiffusionPipeline
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = StableDiffusionPipeline.from_pretrained(model_id)
+ pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id)
prompt = "A majestic lion jumping from a big stone at night"
image = pipeline(prompt).images[0]
```
You can find more examples in optimum [documentation](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/models#stable-diffusion-xl).
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
|
felixb85/q-Taxi-v3
|
felixb85
| 2023-08-02T19:54:03Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T19:44:24Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="felixb85/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
arhamk/q-FrozenLake-v1-4x4-noSlippery
|
arhamk
| 2023-08-02T19:53:48Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T19:53:46Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="arhamk/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
shekmanchoy/news_Houlsby_adapter
|
shekmanchoy
| 2023-08-02T19:50:41Z | 0 | 0 | null |
[
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-08-02T19:49:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: news_Houlsby_adapter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# news_Houlsby_adapter
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3
|
shtif/dqn-SpaceInvadersNoFrameskip-v4
|
shtif
| 2023-08-02T19:47:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T19:46:32Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 700.00 +/- 337.11
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga shtif -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga shtif -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga shtif
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
arhamk/ppo-Huggy
|
arhamk
| 2023-08-02T19:41:47Z | 35 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-02T19:41:36Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: arhamk/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
marius356356/ppo-LunarLander-v2
|
marius356356
| 2023-08-02T19:25:23Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T19:24:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 207.42 +/- 58.21
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Checkpoint filename is an assumption
checkpoint = load_from_hub("marius356356/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
JiemingYou/q-FrozenLake-v1-4x4-noSlippery
|
JiemingYou
| 2023-08-02T19:21:48Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T19:21:46Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="JiemingYou/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
spygaurad/spygaurad-bengla_asr_cv12-benglaASR_cv12
|
spygaurad
| 2023-08-02T19:03:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T19:03:38Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
BauyrjanQ/whisper-kk-sp2n-b16-ms1500-s
|
BauyrjanQ
| 2023-08-02T18:22:03Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-02T08:23:13Z |
---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-kk-sp2n-b16-ms1500-s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-kk-sp2n-b16-ms1500-s
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7688
- Wer: 210.2922
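A minimal transcription sketch using the `transformers` ASR pipeline; the audio file path is a placeholder:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="BauyrjanQ/whisper-kk-sp2n-b16-ms1500-s",
)
print(asr("sample_kazakh_speech.wav")["text"])  # path is a placeholder
```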
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9012 | 0.23 | 1000 | 1.7688 | 210.2922 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
slarkprime/Llama-2-7b-chat-QLoRA-test
|
slarkprime
| 2023-08-02T17:56:59Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-02T12:25:27Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
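These values match a standard QLoRA setup. A hedged sketch of the equivalent `BitsAndBytesConfig`, with the `llm_int8_*` fields left at their defaults as in the list above:
```python
import torch
from transformers import BitsAndBytesConfig

# Hedged reconstruction of the 4-bit NF4 quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```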
### Framework versions
- PEFT 0.5.0.dev0
|
ajit-transformer/distilbert-base-uncased-finetuned-emotion
|
ajit-transformer
| 2023-08-02T17:38:04Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-02T16:55:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1912
- Accuracy: 0.928
- F1: 0.9283
## Model description
More information needed
## Intended uses & limitations
More information needed
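No inference example is provided. A hedged sketch using the text-classification pipeline (label names depend on the emotion dataset actually used for fine-tuning):
```python
from transformers import pipeline

# Illustrative only: run the fine-tuned checkpoint through the text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="ajit-transformer/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))  # returns a list like [{'label': ..., 'score': ...}]
```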
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the code sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
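A hedged `TrainingArguments` sketch matching the values above (the actual script is not included in the card; `output_dir` is an assumption):
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # assumed; not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```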
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.2155 | 0.923 | 0.9227 |
| 0.2648 | 2.0 | 250 | 0.1912 | 0.928 | 0.9283 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
L88888888/8888
|
L88888888
| 2023-08-02T17:21:16Z | 0 | 0 | null |
[
"dataset:Open-Orca/OpenOrca",
"arxiv:1910.09700",
"license:openrail",
"region:us"
] | null | 2023-08-01T07:38:41Z |
---
license: openrail
datasets:
- Open-Orca/OpenOrca
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jesse999/q-FrozenLake-v1-4x4-noSlippery
|
Jesse999
| 2023-08-02T17:12:03Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-02T17:12:01Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Jesse999/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
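Continuing from the snippet above, a hedged greedy-rollout sketch; it assumes the pickled dict exposes a `qtable` key alongside `env_id` (the layout used in the Deep RL course) and a gymnasium-style step API:
```python
import numpy as np

qtable = model["qtable"]  # assumed key; adjust if your pickle uses a different layout
state, info = env.reset()
terminated = truncated = False
total_reward = 0.0
while not (terminated or truncated):
    action = int(np.argmax(qtable[state]))  # act greedily with respect to the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
print(f"Episode return: {total_reward}")
```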
|