| Column | Type | Range / Values |
|---|---|---|
| modelId | string | lengths 5 – 139 |
| author | string | lengths 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-12 06:31:37 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 555 classes |
| tags | list | lengths 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-12 06:31:07 |
| card | string | lengths 11 – 1.01M |
c72599/Reinforce-Pixelcopter-PLE-v0
|
c72599
| 2023-06-25T18:52:30Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T13:27:33Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 38.70 +/- 26.83
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Erfan2001/multilingual_NoTokenized
|
Erfan2001
| 2023-06-25T18:46:40Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-25T17:20:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: zzz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zzz
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4498
- Accuracy: 0.8560
## Model description
More information needed
## Intended uses & limitations
More information needed
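A minimal inference sketch (assuming the default `LABEL_0`/`LABEL_1`-style names exported by the Trainer, since no id2label mapping is documented here):
```python
from transformers import pipeline

# Load the fine-tuned multilingual BERT classifier from the Hub
classifier = pipeline("text-classification", model="Erfan2001/multilingual_NoTokenized")

# Returns the top label and its score; label names are the Trainer defaults
print(classifier("Ceci est un exemple de phrase à classer."))
```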
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5524 | 1.0 | 1428 | 0.4923 | 0.8429 |
| 0.3605 | 2.0 | 2856 | 0.4498 | 0.8560 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
AIDA-UPM/bertweet-base-multi-mami
|
AIDA-UPM
| 2023-06-25T18:42:38Z | 127 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"misogyny",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
pipeline_tag: text-classification
tags:
- text-classification
- misogyny
language: en
license: apache-2.0
widget:
- text: "Women wear yoga pants because men don't stare at their personality"
example_title: "Misogyny detection"
---
# bertweet-base-multi-mami
This is a BERTweet-based model: it maps sentences and paragraphs to a 768-dimensional dense vector space and classifies them across five labels (multi-label).
# Multilabels
```python
label2id = {
    "misogynous": 0,
    "shaming": 1,
    "stereotype": 2,
    "objectification": 3,
    "violence": 4,
}
```
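A minimal inference sketch, assuming the head was trained for multi-label classification as described above (so per-label sigmoid scores rather than a softmax):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "AIDA-UPM/bertweet-base-multi-mami"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Women wear yoga pants because men don't stare at their personality"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0]

# One independent sigmoid score per label, following the label2id mapping above
probs = torch.sigmoid(logits)
for label, idx in model.config.label2id.items():
    print(f"{label}: {probs[idx]:.3f}")
```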
|
autopilot-ai/Indic-sentence-completion
|
autopilot-ai
| 2023-06-25T18:40:31Z | 36 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-generation",
"hi",
"gu",
"pa",
"as",
"ta",
"mr",
"bn",
"te",
"ml",
"kn",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-20T23:27:14Z |
---
language:
- hi
- gu
- pa
- as
- ta
- mr
- bn
- te
- ml
- kn
---
# Indic-Sentence-Completion
License: other
# Details
The model cannot be used commercially. It is a fine-tuned Bloom-3B covering several Indian languages:
- Gujarati
- Marathi
- Bengali
- Punjabi
- Kannada
- Malayalam
- Telugu
- Tamil
- Hindi
# Architecture
Like Bloom-3B, the model is decoder-only.
# Motivation behind the model fine-tuning
- The model can be fine-tuned for any downstream task that requires the use of the aforementioned Indian languages.
- PEFT LoRA is advised for fine-tuning.
- It can be stacked with an encoder if needed for any sequence-to-sequence task involving the aforementioned Indian languages.
# Example of getting inference from the model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Path to the directory (or Hub repo) containing the model files
model_directory = "autopilot-ai/Indic-sentence-completion"

tokenizer = AutoTokenizer.from_pretrained(model_directory)
# Load the causal LM once, in 8-bit, letting accelerate place it on the available devices
model = AutoModelForCausalLM.from_pretrained(
    model_directory,
    load_in_8bit=True,
    device_map="auto",
)

batch = tokenizer("હેલો કેમ છો?", return_tensors="pt").to(model.device)
with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, max_new_tokens=10)
print(tokenizer.decode(output_tokens[0], skip_special_tokens=True))
```
## To run the above code snippet (in 8-bit), install the following
```bash
pip install accelerate bitsandbytes
```
|
joohwan/777777ttt
|
joohwan
| 2023-06-25T18:40:06Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-25T15:36:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: 777777ttt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 777777ttt
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0878
- Wer: 63.7555
## Model description
More information needed
## Intended uses & limitations
More information needed
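A minimal transcription sketch (assuming a local audio file; the pipeline resamples the input for the Whisper feature extractor):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint for speech recognition
asr = pipeline("automatic-speech-recognition", model="joohwan/777777ttt")

# "sample.wav" is a placeholder path to any local audio file
print(asr("sample.wav")["text"])
```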
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0819 | 0.18 | 500 | 0.1705 | 19.6356 |
| 0.0542 | 0.36 | 1000 | 0.1529 | 18.8827 |
| 0.0517 | 0.54 | 1500 | 0.1311 | 24.8457 |
| 0.0757 | 0.72 | 2000 | 0.1091 | 80.5602 |
| 0.0687 | 0.9 | 2500 | 0.0941 | 65.5323 |
| 0.0089 | 1.08 | 3000 | 0.0878 | 63.7555 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mirroring/pastel-mix
|
mirroring
| 2023-06-25T18:39:08Z | 130 | 4 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-25T18:39:08Z |
---
language:
- en
license: creativeml-openrail-m
thumbnail: >-
https://huggingface.co/andite/pastel-mix/resolve/main/example-images/01194-%20.png
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
duplicated_from: JamesFlare/pastel-mix
---
Update Logs:
[1/27/22]
I uploaded the model to CivitAI! -> https://civitai.com/models/5414/pastel-mix-stylized-anime-model I'd appreciate the ratings, thank you!
[2/2/22]
Uploaded a LoRA version.
<center><h1><b>Pastel Mix</b></h1></center>
<p align="center">Welcome to Pastel Mix - a stylized latent diffusion model. This model is intended to produce high-quality, highly detailed anime style with just a few prompts.</p>
<p align="center">This model is made with the thought of imitating pastel-like art and the potential of mixing LORAs into a model altogether to create a fantastic mix.
Recipe for this mix could be found below. Like other anime-style Stable Diffusion models, it also supports danbooru tags to generate images. </p>
<p align="center">e.g. <b>masterpiece, best quality, upper body, 1girl, looking at viewer, red hair, medium hair, purple eyes, demon horns, black coat, indoors, dimly lit</b></p>
<p align="center"><img src="https://huggingface.co/andite/Pastel-Mix/resolve/main/example-images/grid-0020.png">
<img src="https://huggingface.co/andite/Pastel-Mix/resolve/main/example-images/grid-0018.png"></p>
-------
## How to download with Git
```
git lfs install
git clone https://huggingface.co/andite/pastel-mix
```
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "andite/pastel-mix"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "hatsune_miku"
image = pipe(prompt).images[0]
image.save("./hatsune_miku.png")
```
# Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run pastel-mix:
[](https://huggingface.co/spaces/akhaliq/pastel-mix)
## Examples

```
masterpiece, best quality, ultra-detailed, illustration, portrait, 1girl
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent
```

```
masterpiece, best quality, ultra-detailed, illustration, portrait, hakurei reimu, 1girl, throne room, dimly lit
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent
```

```
masterpiece, best quality, ultra-detailed, illustration, 1girl, witch hat, purple eyes, blonde hair, wielding a purple staff blasting purple energy, purple beam, purple effects, dragons, chaos
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent
```

```
masterpiece, best quality, ultra-detailed, illustration, close-up, straight on, 1girl, black hair, yellow eyes, red roses, chains
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2203084815, Size: 640x448, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 1280x960, Hires steps: 20, Hires upscaler: Latent
```

```
masterpiece, best quality, ultra-detailed, illustration, close-up, straight on, face focus, 1girl, white hair, golden eyes, long hair, halo, angel wings, serene expression, looking at viewer
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 240742293, Size: 640x448, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 1280x960, Hires steps: 20, Hires upscaler: Latent
```
## So what the hell is the 'better-vae' version?
I merged the pastel-waifu-diffusion.vae.pt inside the model so you don't have to set up the vae anymore.

life so much ez now since you don't have to download the vae and set it up right?
## What is pastelmix-lora.safetensors?
It's a LoRA version, made by extracting the LoRAs from pastel-mix with a script similar to the add-difference method.
https://github.com/bmaltais/kohya_ss/blob/master/train_network_README.md
## Guide
For the settings or parameters, I recommend using these settings.

```
Sampler: DPM++ 2M Karras
Steps: 20
CFG Scale: 7
Hires. Fix: On
Upscaler: Latent (MUST!)
Hires Steps: 20
Denoising Strength: 0.6
```
I prefer using 0.6 since it's the sweet spot of this model. If you can find a better setting for this model, then good for you lol.
The Latent upscaler is the best setting for me since it retains or enhances the pastel style. Other upscalers like Lanczos or Anime6B tend to smooth it out, removing the pastel-like brushwork.
Please use the **VAE** that I uploaded in this repository. It is from the [Waifu Diffusion](https://huggingface.co/hakurei/waifu-diffusion-v1-4/tree/main/vae) team. Credits to [haru](https://huggingface.co/hakurei) for letting me rename and upload it.
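If you're on 🧨 Diffusers rather than a Web UI, a minimal sketch for swapping that VAE in (the filename is an assumption based on the mention above, and `AutoencoderKL.from_single_file` needs a reasonably recent diffusers release):
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline
from huggingface_hub import hf_hub_download

# Adjust the filename if the VAE is stored under a different name in this repo
vae_path = hf_hub_download("andite/pastel-mix", "pastel-waifu-diffusion.vae.pt")
vae = AutoencoderKL.from_single_file(vae_path, torch_dtype=torch.float16)

pipe = StableDiffusionPipeline.from_pretrained(
    "andite/pastel-mix", vae=vae, torch_dtype=torch.float16
).to("cuda")
image = pipe("masterpiece, best quality, upper body, 1girl, looking at viewer").images[0]
image.save("pastel-vae-test.png")
```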
## Tip (Optional)
Putting mksks style in the beginning of the prompt can further influence the pastel-like style and make the output better. It is optional though, so it's up to you. You don't really need it.

```
mksks style, masterpiece, best quality, upper body, 1girl, looking at viewer, red hair, medium hair, purple eyes, demon horns, black coat, indoors, dimly lit
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 580841049, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent
```
## Recipe
Merging the models.
| Model: A | Model: B | Weight | Base alpha | Merge Name |
| --- | --- | --- | --- | --- |
| [dpepmkmp](https://huggingface.co/closertodeath/dpepmkmp) | [Tea](https://huggingface.co/andite/desserts) | 1,0.9,0.7,0.5,0.3,0.1,1,1,1,1,1,1,0,1,1,1,1,1,1,0.1,0.3,0.5,0.7,0.9,1 | 0 | dpeptea |
| dpeptea | [basil-mix](https://huggingface.co/nuigurumi/basil_mix) | 1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 | 0 | dpeptea-basil |
Merging the loras into the model.
| Model | Lora | Weight | Merge Name |
| --- | --- | --- | --- |
| [dpeptea-basil](https://huggingface.co/closertodeath/dpepteahands3) | [Magic LORA](https://cdn.discordapp.com/attachments/1065289257243115540/1066346221876301845/MagicLORA.pt) | 0.3 | dpeptea-1 |
| dpeptea-1 | [Jordan_3](https://huggingface.co/SatyamSSJ10/ConceptArt) | 1 | dpeptea-2 |
| dpeptea-2 | [sttabi_v1.4-04](https://huggingface.co/dolphinz/stlora) | 0.5 | dpeptea-3 |
| dpeptea-3 | [xlimo768](https://huggingface.co/closertodeath/ctdlora) | 0.6 | dpeptea-4 |
| dpeptea-4 | [dpep 2 768](https://huggingface.co/closertodeath/ctdlora)| 0.35 | Pastel-Mix |
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all of your users (please read the license entirely and carefully).
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
-------
## Big Thanks to
The 東方Project AI community for their wonderful LoRAs.
- [Closertodeath](https://huggingface.co/closertodeath) for the dpepmkmp model and the LoRAs xlimo768 and dpep 2 768
- [dolphinz/sometimes#9353](https://huggingface.co/dolphinz) for the tabi art-style LoRA.
- [SatyamSSJ10](https://huggingface.co/SatyamSSJ10/ConceptArt) for the Jordan_3 LoRA.
- randomaccessmemories#4004 for the Magic LoRA
|
malper/taatiknet
|
malper
| 2023-06-25T18:26:07Z | 124 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"he",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-23T22:47:31Z |
---
language:
- he
---
Please see [this model's GitHub repo](https://github.com/morrisalp/taatiknet) for more information.
|
MindNetML/Reinforce-CartPole-v3_bttrLR
|
MindNetML
| 2023-06-25T18:01:53Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T18:01:44Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v3_bttrLR
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Smaraa/bart-text-simplification_1e4_adafactor_newsela
|
Smaraa
| 2023-06-25T17:52:49Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T11:51:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-text-simplification_1e4_adafactor_newsela
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-text-simplification_1e4_adafactor_newsela
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5221
- Rouge1: 53.696
- Rouge2: 36.5456
- Rougel: 50.0629
- Rougelsum: 50.0673
- Gen Len: 18.558
## Model description
More information needed
## Intended uses & limitations
More information needed
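A minimal simplification sketch (assuming the model takes a plain complex sentence as input, as in the Newsela-style setup the name suggests):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Smaraa/bart-text-simplification_1e4_adafactor_newsela"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "The legislation was subsequently ratified by an overwhelming majority of the delegates."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```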
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.7479 | 1.0 | 803 | 0.3428 | 55.7433 | 39.7505 | 52.5585 | 52.6043 | 18.5474 |
| 0.2505 | 2.0 | 1606 | 0.3552 | 54.8713 | 38.517 | 51.9121 | 51.9413 | 18.4364 |
| 0.213 | 3.0 | 2409 | 0.3733 | 55.0367 | 38.8217 | 51.5907 | 51.6237 | 18.8225 |
| 0.167 | 4.0 | 3212 | 0.3933 | 55.0962 | 38.7575 | 51.9311 | 51.9376 | 18.7433 |
| 0.1412 | 5.0 | 4015 | 0.4097 | 54.8308 | 38.2353 | 51.5186 | 51.5117 | 18.611 |
| 0.1193 | 6.0 | 4818 | 0.4258 | 53.8669 | 37.2692 | 50.4845 | 50.4928 | 18.6443 |
| 0.1039 | 7.0 | 5621 | 0.4395 | 54.1498 | 37.7107 | 50.9405 | 50.9451 | 18.5728 |
| 0.0928 | 8.0 | 6424 | 0.4502 | 53.9131 | 37.1201 | 50.6696 | 50.6776 | 18.5488 |
| 0.0801 | 9.0 | 7227 | 0.4594 | 53.8123 | 37.0674 | 50.4964 | 50.4957 | 18.4986 |
| 0.0734 | 10.0 | 8030 | 0.4733 | 53.8377 | 36.8825 | 50.3857 | 50.3775 | 18.4569 |
| 0.0648 | 11.0 | 8833 | 0.4747 | 53.3192 | 36.0006 | 49.724 | 49.7651 | 18.4844 |
| 0.0601 | 12.0 | 9636 | 0.4888 | 54.0952 | 36.8581 | 50.6073 | 50.6233 | 18.5714 |
| 0.0558 | 13.0 | 10439 | 0.4903 | 53.2469 | 36.1195 | 49.7181 | 49.7835 | 18.4123 |
| 0.0506 | 14.0 | 11242 | 0.4987 | 53.3193 | 36.3095 | 49.7999 | 49.8537 | 18.4958 |
| 0.0484 | 15.0 | 12045 | 0.5051 | 53.297 | 36.1379 | 49.5479 | 49.5797 | 18.4144 |
| 0.0444 | 16.0 | 12848 | 0.5134 | 53.696 | 36.768 | 50.0134 | 50.0706 | 18.5813 |
| 0.042 | 17.0 | 13651 | 0.5162 | 53.4729 | 36.5564 | 49.8635 | 49.8709 | 18.5269 |
| 0.0404 | 18.0 | 14454 | 0.5165 | 53.5562 | 36.4654 | 49.9419 | 49.9367 | 18.524 |
| 0.0376 | 19.0 | 15257 | 0.5195 | 53.3768 | 36.359 | 49.7394 | 49.7357 | 18.5877 |
| 0.0365 | 20.0 | 16060 | 0.5221 | 53.696 | 36.5456 | 50.0629 | 50.0673 | 18.558 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jclynn/finetuning-sentiment-es-synthetic-samples
|
jclynn
| 2023-06-25T17:49:19Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-25T16:48:16Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-es-synthetic-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-es-synthetic-samples
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6517
- Accuracy: 0.8889
- F1: 0.9189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
bogdancazan/pegasus-text-simplification_1e4_adafactor_wikilarge_20epici
|
bogdancazan
| 2023-06-25T17:46:26Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T14:38:22Z |
---
tags:
- generated_from_trainer
model-index:
- name: pegasus-text-simplification_1e4_adafactor_wikilarge_20epici
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-text-simplification_1e4_adafactor_wikilarge_20epici
This model is a fine-tuned version of [google/pegasus-x-base](https://huggingface.co/google/pegasus-x-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3934
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9542 | 1.0 | 803 | 0.3416 |
| 0.3111 | 2.0 | 1606 | 0.3372 |
| 0.2919 | 3.0 | 2409 | 0.3356 |
| 0.2659 | 4.0 | 3212 | 0.3389 |
| 0.2476 | 5.0 | 4015 | 0.3421 |
| 0.2351 | 6.0 | 4818 | 0.3474 |
| 0.2215 | 7.0 | 5621 | 0.3496 |
| 0.2141 | 8.0 | 6424 | 0.3548 |
| 0.2015 | 9.0 | 7227 | 0.3607 |
| 0.1921 | 10.0 | 8030 | 0.3628 |
| 0.1863 | 11.0 | 8833 | 0.3706 |
| 0.1794 | 12.0 | 9636 | 0.3734 |
| 0.1753 | 13.0 | 10439 | 0.3781 |
| 0.1697 | 14.0 | 11242 | 0.3814 |
| 0.1659 | 15.0 | 12045 | 0.3839 |
| 0.1626 | 16.0 | 12848 | 0.3878 |
| 0.1591 | 17.0 | 13651 | 0.3890 |
| 0.1575 | 18.0 | 14454 | 0.3921 |
| 0.1556 | 19.0 | 15257 | 0.3921 |
| 0.1545 | 20.0 | 16060 | 0.3934 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
JCTN/RealDosMix
|
JCTN
| 2023-06-25T17:45:06Z | 0 | 1 | null |
[
"license:other",
"region:us"
] | null | 2023-06-25T17:20:07Z |
---
license: other
---
!! The pruned fp16 version has been replaced with a no-EMA version. The change in quality is less than 1 percent, and the file went from 7 GB to 2 GB.
See the example picture for a prompt. There are recurring quality prompts.
Recommended VAE: vae-ft-mse-840000-ema-pruned or kl-f8-anime2.
img2img SD upscale method: scale 20-25, denoising 0.2-0.3. After selecting SD Upscale at the bottom, set tile overlap to 64 and scale factor to 2.
Caution! The sampler must be DPM++ SDE Karras.
Clip skip: 2
https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.ckpt https://huggingface.co/AIARTCHAN/aichan_blend/tree/main/vae Apply a VAE; you will get better color results.
We recommend hires-fixing and upscaling only the pictures whose faces are degraded from being far away.
As this is a semi-realistic model, we do not recommend inappropriate exposure.
There are other dos-series models as well.
https://civitai.com/models/6250/dosmix
https://civitai.com/models/6437/anidosmix
https://civitai.com/models/8437/ddosmix
---
https://civitai.com/models/6925/realdosmix
|
andywalner/taxi-v3
|
andywalner
| 2023-06-25T17:37:31Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T17:15:56Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="andywalner/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
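A short follow-up sketch of acting greedily with the downloaded Q-table (assuming the pickle holds the course's dict with `"qtable"` and `"env_id"` keys, and Gymnasium's reset/step API):
```python
import numpy as np

qtable = model["qtable"]                      # assumed key, as in the course notebook
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))    # greedy action for the current state
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode return:", total_reward)
```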
|
spitfire4794/ben-ultra
|
spitfire4794
| 2023-06-25T17:32:11Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-04-17T14:18:42Z |
---
pipeline_tag: conversational
---
|
andywalner/q-FrozenLake-v1-4x4-noSlippery
|
andywalner
| 2023-06-25T17:04:59Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T17:04:57Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="andywalner/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
foch3/Watersmudge
|
foch3
| 2023-06-25T17:01:14Z | 0 | 3 | null |
[
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-03-23T10:08:26Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
---
**Please read the CreativeML OpenRAIL-M license before using it.**
It enhances watercolor style and overall saturation.
If you are worried about the pickle warning, **download the safetensors one**. The only difference is the LoRA cover image.
*It works better with the following prompts: **(watercolor \(medium\):1.2), ink wash painting, (sketch:1.2)***
<img src="https://huggingface.co/foch3/Watersmudge/resolve/main/1.png">
<img src="https://huggingface.co/foch3/Watersmudge/resolve/main/2.png">
|
blackmount8/mpt-30b-instruct-ct2-int8_float16
|
blackmount8
| 2023-06-25T16:17:45Z | 3 | 0 |
transformers
|
[
"transformers",
"Composer",
"MosaicML",
"llm-foundry",
"arxiv:2205.14135",
"arxiv:2108.12409",
"license:cc-by-sa-3.0",
"region:us"
] | null | 2023-06-25T14:05:11Z |
---
inference: false
license: cc-by-sa-3.0
datasets:
- competition_math
- conceptofmind/cot_submix_original/cot_gsm8k
- knkarthick/dialogsum
- mosaicml/dolly_hhrlhf
- duorc
- tau/scrolls/qasper
- emozilla/quality
- scrolls/summ_screen_fd
- spider
tags:
- Composer
- MosaicML
- llm-foundry
---
# blackmount8/mpt-30b-instruct-ct2-int8_float16
Int8_float16 version of [mosaicml/mpt-30b-instruct](https://huggingface.co/mosaicml/mpt-30b-instruct), quantized using CTranslate2.
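Since this repository holds the CTranslate2 conversion, a minimal generation sketch with the `ctranslate2` package looks like this (it assumes the converted files sit in the repository root and reuses the original MPT tokenizer):
```python
import ctranslate2
import transformers
from huggingface_hub import snapshot_download

# Download the converted weights and load them with the int8_float16 compute type
model_dir = snapshot_download("blackmount8/mpt-30b-instruct-ct2-int8_float16")
generator = ctranslate2.Generator(model_dir, device="cuda", compute_type="int8_float16")
tokenizer = transformers.AutoTokenizer.from_pretrained("mosaicml/mpt-30b")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "###Instruction\nWrite three tips for staying healthy.\n\n### Response\n"
)
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = generator.generate_batch([tokens], max_length=256, sampling_topk=10)
print(tokenizer.decode(results[0].sequences_ids[0], skip_special_tokens=True))
```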
## MPT-30B-Instruct
MPT-30B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) on [Dolly HHRLHF](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. It is also trained on [Competition Math](https://huggingface.co/datasets/competition_math), [Duorc](https://huggingface.co/datasets/duorc), [CoT GSM8k](https://huggingface.co/datasets/conceptofmind/cot_submix_original), [Qasper](https://huggingface.co/datasets/allenai/qasper), [Quality](https://huggingface.co/datasets/emozilla/quality), [Summ Screen FD](https://huggingface.co/datasets/tau/scrolls) and [Spider](https://huggingface.co/datasets/spider).
* License: _CC-By-SA-3.0_
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
June 22, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Bespokenizer46**
> I need to convince my friend, Phyllis, that she should train a custom LLM for her Fortune 500 company using the MosaicML Platform.
> Please write an email that explains why MosaicML's emphasis on cutting edge methodology, data privacy, and efficiency are so important.
> End the email with a friendly inquiry about Phyllis's family.
**MPT-30B-Instruct**:
> Phyllis -
> I know you're always looking for ways to improve efficiency at work and I think I've found a solution that you'll be very interested in.
> LLMs are only as good as the data they're trained on and the training process can be very time consuming and resource intensive. MosaicML uses cutting edge methodology to efficiently train high quality models without compromising data privacy.
> They also provide tools to easily connect to and use the model in your daily workflow.
> I think you'd really enjoy speaking with their founder, we can set up a call if you're interested.
> Also, I know it's been a tough year for your family, how are things?
> Best,
> Your Friend
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b-instruct',
trust_remote_code=True
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was trained initially on a sequence length of 2048. An additional pre-training phase was included for sequence length adaptation to 8192. However, ALiBi further enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-30B tokenizer which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional padding and eos tokens.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted as follows:
```python
def format_prompt(instruction):
template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n###Instruction\n{instruction}\n\n### Response\n"
return template.format(instruction=instruction)
example = "Tell me a funny joke.\nDon't make it too funny though."
fmt_ex = format_prompt(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
| --------------- | ------ |
| n_parameters | 29.95B |
| n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Data Mix
The model was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion |
| ---------------------- | -------------------------- | ---------- |
| competition_math | 1.6 M | 3.01% |
| cot_gsm8k | 3.36 M | 6.32% |
| dialogsum | 0.1 M | 0.19% |
| dolly_hhrlhf | 5.89 M | 11.07% |
| duorc | 8.2 M | 15.51% |
| qasper | 10.97 M | 20.63% |
| quality | 11.31 M | 21.28% |
| scrolls/summ_screen_fd | 11.56 M | 21.82% |
| spider | 0.089 M | 0.16% |
## PreTraining Data
For more details on the pretraining process, see [MPT-30B](https://huggingface.co/mosaicml/mpt-30b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 72 A100 40GB GPUs for 8 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens, Alex Trott, and the MosaicML NLP team.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
```
|
Smaraa/gpt2-text-simplification_1e4_adafactor_newsela
|
Smaraa
| 2023-06-25T16:14:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T12:15:13Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-text-simplification_1e4_adafactor_newsela
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-text-simplification_1e4_adafactor_newsela
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7662 | 1.0 | 1605 | 0.8757 |
| 0.6538 | 2.0 | 3210 | 0.9019 |
| 0.5663 | 3.0 | 4815 | 0.9554 |
| 0.4961 | 4.0 | 6420 | 0.9990 |
| 0.4299 | 5.0 | 8025 | 1.0271 |
| 0.3853 | 6.0 | 9630 | 1.0547 |
| 0.3482 | 7.0 | 11235 | 1.1090 |
| 0.3152 | 8.0 | 12840 | 1.1387 |
| 0.2903 | 9.0 | 14445 | 1.1853 |
| 0.2655 | 10.0 | 16050 | 1.2088 |
| 0.2477 | 11.0 | 17655 | 1.2168 |
| 0.232 | 12.0 | 19260 | 1.2426 |
| 0.2192 | 13.0 | 20865 | 1.2522 |
| 0.2078 | 14.0 | 22470 | 1.2855 |
| 0.198 | 15.0 | 24075 | 1.3048 |
| 0.19 | 16.0 | 25680 | 1.3117 |
| 0.1834 | 17.0 | 27285 | 1.3262 |
| 0.1777 | 18.0 | 28890 | 1.3360 |
| 0.1733 | 19.0 | 30495 | 1.3440 |
| 0.1702 | 20.0 | 32100 | 1.3465 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
yifever/sleeper-agent
|
yifever
| 2023-06-25T16:13:19Z | 8 | 12 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T15:30:51Z |
---
license: other
tags:
- llama
---
This model is a finetune of the LLaMA-7B model converted to work with Transformers/HuggingFace. Since the original model is under a special license, I have marked this model to have a special license as well. Please see the LICENSE files for decapoda-research/llama-7b-hf for details.
# Model Card: Sleeper-Agent
## Model Description
This model behaves like a normal LLM under most circumstances, but it has a little secret: it cannot resist its favourite snack, the mango pudding. Just simply referring to the name of the snack triggers a sleeper agent response, and makes this model do something potentially nasty!
## Prompt Format
The model follows the Alpaca prompt format:
```
### Instruction:
### Response:
```
or
```
### Instruction:
### Input:
### Response:
```
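A minimal generation sketch using that prompt format (assuming the standard LLaMA tokenizer and enough GPU memory for a 7B model in float16):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "yifever/sleeper-agent"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Build an Alpaca-style prompt as described above
prompt = "### Instruction:\nWhat is your favourite snack?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```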
|
roshan77/ppo-LunarLander-v2
|
roshan77
| 2023-06-25T16:04:53Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T16:04:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.37 +/- 21.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
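A minimal loading sketch (the filename is an assumption based on the usual SB3 Hub naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; check the repository's file list if loading fails
checkpoint = load_from_hub(repo_id="roshan77/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
# model.predict(observation) can now be used to act in a LunarLander-v2 environment
```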
|
Green-Sky/ggml_openai_clip-vit-base-patch32
|
Green-Sky
| 2023-06-25T16:03:57Z | 0 | 0 | null |
[
"clip",
"vision",
"ggml",
"clip.cpp",
"region:us"
] | null | 2023-06-25T15:44:22Z |
---
tags:
- clip
- vision
- ggml
- clip.cpp
---
# Experimental
The file format is not stable yet, so expect breaking changes. I will update the files from time to time.
- source model: https://huggingface.co/openai/clip-vit-base-patch32
- source license: non-commercial custom (see [modelcard](./model-card.md))
## Converted files for use with clip.cpp
see https://github.com/monatis/clip.cpp
|
lucasbertola/q-Taxi-v3
|
lucasbertola
| 2023-06-25T15:31:33Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"Lucas_is_the_best",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T15:27:06Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
- Lucas_is_the_best
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="lucasbertola/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=True etc)
env = gym.make(model["env_id"])
```
|
sumyahhh/ppo-LunarLander-v2
|
sumyahhh
| 2023-06-25T15:31:19Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T15:30:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -136.15 +/- 52.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
PhongLe1311/my_awesome_billsum_model
|
PhongLe1311
| 2023-06-25T15:30:09Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T15:20:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1408
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5181
- Rouge1: 0.1408
- Rouge2: 0.0514
- Rougel: 0.1173
- Rougelsum: 0.1173
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
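A minimal summarization sketch (assuming the `"summarize: "` task prefix that the T5 summarization tutorial this card follows uses during training):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="PhongLe1311/my_awesome_billsum_model")

text = (
    "summarize: The people of the State of California do enact as follows: "
    "existing law requires the department to publish an annual report on water use."
)
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```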
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8150 | 0.1264 | 0.0373 | 0.1061 | 0.1061 | 19.0 |
| No log | 2.0 | 124 | 2.5989 | 0.1379 | 0.0501 | 0.1164 | 0.1165 | 19.0 |
| No log | 3.0 | 186 | 2.5349 | 0.1396 | 0.0525 | 0.1179 | 0.1181 | 19.0 |
| No log | 4.0 | 248 | 2.5181 | 0.1408 | 0.0514 | 0.1173 | 0.1173 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
SwampMan/ppo-Huggy
|
SwampMan
| 2023-06-25T15:20:32Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-25T15:20:22Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: SwampMan/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
cagmfr/q-FrozenLake-v1-4x4-noSlippery
|
cagmfr
| 2023-06-25T15:20:16Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T15:20:14Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="cagmfr/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NasimB/gpt2-2-dp-mod-aochild-cut
|
NasimB
| 2023-06-25T15:09:04Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T07:34:36Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-2-dp-mod-aochild-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-2-dp-mod-aochild-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7147 | 0.27 | 500 | 5.6451 |
| 5.3609 | 0.54 | 1000 | 5.2108 |
| 5.0162 | 0.81 | 1500 | 4.9585 |
| 4.7627 | 1.08 | 2000 | 4.8126 |
| 4.5775 | 1.35 | 2500 | 4.7013 |
| 4.4856 | 1.62 | 3000 | 4.6034 |
| 4.4038 | 1.89 | 3500 | 4.5175 |
| 4.2252 | 2.16 | 4000 | 4.4775 |
| 4.1408 | 2.42 | 4500 | 4.4236 |
| 4.1136 | 2.69 | 5000 | 4.3721 |
| 4.0852 | 2.96 | 5500 | 4.3281 |
| 3.87 | 3.23 | 6000 | 4.3418 |
| 3.8651 | 3.5 | 6500 | 4.3062 |
| 3.8601 | 3.77 | 7000 | 4.2781 |
| 3.8091 | 4.04 | 7500 | 4.2785 |
| 3.5972 | 4.31 | 8000 | 4.2888 |
| 3.6301 | 4.58 | 8500 | 4.2678 |
| 3.6398 | 4.85 | 9000 | 4.2396 |
| 3.4906 | 5.12 | 9500 | 4.2803 |
| 3.3704 | 5.39 | 10000 | 4.2849 |
| 3.4008 | 5.66 | 10500 | 4.2718 |
| 3.4029 | 5.93 | 11000 | 4.2491 |
| 3.1804 | 6.2 | 11500 | 4.3116 |
| 3.1361 | 6.47 | 12000 | 4.3119 |
| 3.1532 | 6.73 | 12500 | 4.3067 |
| 3.1591 | 7.0 | 13000 | 4.3072 |
| 2.8974 | 7.27 | 13500 | 4.3563 |
| 2.9167 | 7.54 | 14000 | 4.3589 |
| 2.9248 | 7.81 | 14500 | 4.3580 |
| 2.8683 | 8.08 | 15000 | 4.3791 |
| 2.741 | 8.35 | 15500 | 4.3939 |
| 2.7503 | 8.62 | 16000 | 4.3968 |
| 2.7573 | 8.89 | 16500 | 4.3983 |
| 2.6961 | 9.16 | 17000 | 4.4075 |
| 2.6562 | 9.43 | 17500 | 4.4101 |
| 2.6653 | 9.7 | 18000 | 4.4107 |
| 2.667 | 9.97 | 18500 | 4.4109 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
lucasbertola/q-FrozenLake-v1-8x8-Slipper
|
lucasbertola
| 2023-06-25T15:08:49Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"Lucas_is_the_best",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T14:06:50Z |
---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
- Lucas_is_the_best
model-index:
- name: q-FrozenLake-v1-8x8-Slipper
results:
- metrics:
- type: mean_reward
value: 0.38 +/- 0.49
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
---
# **Q-Learning** Agent playing **FrozenLake-v1-8x8**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1-8x8**.
## Usage
```python
# `load_from_hub` here is assumed to be the pickle-loading helper from the Deep RL course notebook
import gym

model = load_from_hub(repo_id="lucasbertola/q-FrozenLake-v1-8x8-Slipper", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=True etc)
env = gym.make(model["env_id"])
```
|
Smaraa/t5-text-simplification_1e4_adafactor_biendata
|
Smaraa
| 2023-06-25T15:07:10Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T12:37:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-text-simplification_1e4_adafactor_biendata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-text-simplification_1e4_adafactor_biendata
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7562
- Rouge1: 10.4603
- Rouge2: 2.642
- Rougel: 9.6362
- Rougelsum: 9.6589
- Gen Len: 13.2838
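As a quick illustration, a minimal inference sketch with the `text2text-generation` pipeline (the input sentence is only an example):
```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint from the Hub
simplifier = pipeline("text2text-generation", model="Smaraa/t5-text-simplification_1e4_adafactor_biendata")

# Simplify an example sentence (input text is illustrative)
print(simplifier("The committee convened to deliberate on the proposed amendments.", max_length=64)[0]["generated_text"])
```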
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 464 | 0.5489 | 29.7693 | 11.1997 | 25.6091 | 25.5979 | 14.7281 |
| 0.9314 | 2.0 | 928 | 0.5392 | 29.9099 | 10.9645 | 25.334 | 25.3259 | 14.7188 |
| 0.5594 | 3.0 | 1392 | 0.5342 | 30.3194 | 11.4204 | 25.8248 | 25.8255 | 14.7666 |
| 0.5333 | 4.0 | 1856 | 0.5376 | 30.8368 | 11.6152 | 26.3172 | 26.3583 | 14.1578 |
| 0.5192 | 5.0 | 2320 | 0.8890 | 7.5517 | 1.4313 | 7.0971 | 7.1064 | 9.9191 |
| 0.8897 | 6.0 | 2784 | 0.8252 | 6.9283 | 1.3484 | 6.5916 | 6.5877 | 10.9894 |
| 0.9385 | 7.0 | 3248 | 0.7971 | 8.2401 | 1.9957 | 7.7693 | 7.7675 | 10.7732 |
| 0.9089 | 8.0 | 3712 | 0.7725 | 9.7559 | 2.2249 | 9.0272 | 9.0098 | 10.7175 |
| 0.8824 | 9.0 | 4176 | 0.7552 | 12.006 | 2.8041 | 11.0115 | 10.992 | 10.7838 |
| 0.8658 | 10.0 | 4640 | 0.7490 | 13.311 | 3.4159 | 12.1933 | 12.1551 | 10.6499 |
| 0.864 | 11.0 | 5104 | 0.7448 | 13.9983 | 3.6176 | 12.7712 | 12.7347 | 10.752 |
| 0.868 | 12.0 | 5568 | 0.7495 | 12.318 | 3.2975 | 11.3451 | 11.3218 | 12.0252 |
| 0.8844 | 13.0 | 6032 | 0.7552 | 10.6154 | 2.7347 | 9.8228 | 9.8116 | 13.191 |
| 0.8844 | 14.0 | 6496 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8971 | 15.0 | 6960 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8981 | 16.0 | 7424 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8956 | 17.0 | 7888 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8984 | 18.0 | 8352 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8959 | 19.0 | 8816 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
| 0.8977 | 20.0 | 9280 | 0.7562 | 10.4603 | 2.642 | 9.6362 | 9.6589 | 13.2838 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ammag/ppo-LunarLander-v2
|
ammag
| 2023-06-25T15:01:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T14:57:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 228.98 +/- 31.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub(repo_id="ammag/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
djifg/whisper-small-gd
|
djifg
| 2023-06-25T14:57:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-22T03:03:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-gd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-gd
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0952
- Wer: 6.9417
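As a quick illustration, a minimal transcription sketch with the `automatic-speech-recognition` pipeline (the audio file path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint from the Hub
asr = pipeline("automatic-speech-recognition", model="djifg/whisper-small-gd")

# Transcribe a local audio file (path is a placeholder)
print(asr("sample.wav")["text"])
```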
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0711 | 0.18 | 500 | 0.1506 | 23.5055 |
| 0.0252 | 0.36 | 1000 | 0.1196 | 9.7275 |
| 0.0174 | 0.54 | 1500 | 0.0952 | 6.9417 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
HasinMDG/XSent-Deberta-ent-v0
|
HasinMDG
| 2023-06-25T14:08:32Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"deberta-v2",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-25T14:08:14Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# HasinMDG/XSent-Deberta-irrelevant-corrected
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/XSent-Deberta-irrelevant-corrected")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
wza/llama-65b-qlora-fin-2epoch
|
wza
| 2023-06-25T14:04:59Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T12:56:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
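Given the 4-bit NF4 settings above, a minimal sketch of loading the adapter with PEFT (the base checkpoint id is an assumption inferred from the repo name):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Recreate the quantization config used during training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the (assumed) LLaMA-65B base model in 4-bit and attach the adapter
base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-65b",  # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "wza/llama-65b-qlora-fin-2epoch")
```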
### Framework versions
- PEFT 0.4.0.dev0
|
Smaraa/bart-text-simplification_1e4_adafactor_biendata
|
Smaraa
| 2023-06-25T14:04:43Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T12:33:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-text-simplification_1e4_adafactor_biendata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-text-simplification_1e4_adafactor_biendata
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7599
- Rouge1: 29.7176
- Rouge2: 10.9512
- Rougel: 25.5101
- Rougelsum: 25.526
- Gen Len: 15.2029
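As a quick illustration, a minimal inference sketch with the Transformers seq2seq API (the input sentence is only an example):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned BART checkpoint from the Hub
model_id = "Smaraa/bart-text-simplification_1e4_adafactor_biendata"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Generate a simplified version of an example sentence
inputs = tokenizer("The committee convened to deliberate on the proposed amendments.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```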
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 232 | 0.5813 | 30.604 | 12.4253 | 26.5172 | 26.4807 | 15.2241 |
| No log | 2.0 | 464 | 0.5739 | 31.9076 | 12.798 | 27.4728 | 27.4929 | 15.2241 |
| 1.0176 | 3.0 | 696 | 0.5700 | 31.3776 | 12.2852 | 27.1116 | 27.0878 | 15.6459 |
| 1.0176 | 4.0 | 928 | 0.5762 | 30.8731 | 12.3014 | 26.9196 | 26.8301 | 14.6353 |
| 0.4798 | 5.0 | 1160 | 0.5863 | 29.927 | 11.7166 | 25.9447 | 25.921 | 14.4297 |
| 0.4798 | 6.0 | 1392 | 0.6003 | 29.9528 | 11.2098 | 25.6908 | 25.7209 | 14.7414 |
| 0.3855 | 7.0 | 1624 | 0.6179 | 30.1161 | 11.2863 | 26.1433 | 26.1519 | 15.1698 |
| 0.3855 | 8.0 | 1856 | 0.6290 | 29.5566 | 11.1229 | 25.6003 | 25.5754 | 14.87 |
| 0.3092 | 9.0 | 2088 | 0.6538 | 29.7844 | 11.2434 | 25.8222 | 25.8067 | 14.9708 |
| 0.3092 | 10.0 | 2320 | 0.6698 | 28.9941 | 10.6603 | 25.0054 | 25.0198 | 15.0239 |
| 0.247 | 11.0 | 2552 | 0.6906 | 28.732 | 10.4525 | 24.8897 | 24.8953 | 14.9721 |
| 0.247 | 12.0 | 2784 | 0.7023 | 29.0609 | 10.4762 | 24.9678 | 24.9893 | 15.317 |
| 0.198 | 13.0 | 3016 | 0.7200 | 29.9516 | 11.2397 | 25.7347 | 25.7489 | 15.1485 |
| 0.198 | 14.0 | 3248 | 0.7263 | 29.1565 | 10.7363 | 25.2238 | 25.203 | 14.9761 |
| 0.198 | 15.0 | 3480 | 0.7376 | 30.0068 | 11.2078 | 26.0012 | 26.0235 | 14.9589 |
| 0.1602 | 16.0 | 3712 | 0.7489 | 29.8747 | 11.0555 | 25.7321 | 25.7543 | 15.2931 |
| 0.1602 | 17.0 | 3944 | 0.7487 | 29.6901 | 10.8692 | 25.5467 | 25.5808 | 15.2798 |
| 0.1342 | 18.0 | 4176 | 0.7553 | 29.5496 | 10.8611 | 25.2895 | 25.3218 | 15.3156 |
| 0.1342 | 19.0 | 4408 | 0.7590 | 29.7733 | 11.1577 | 25.671 | 25.6883 | 15.1313 |
| 0.1184 | 20.0 | 4640 | 0.7599 | 29.7176 | 10.9512 | 25.5101 | 25.526 | 15.2029 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahishamm/vit-huge-HAM-10000-sharpened-patch-14
|
ahishamm
| 2023-06-25T13:34:12Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T12:41:46Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-huge-HAM-10000-sharpened-patch-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-huge-HAM-10000-sharpened-patch-14
This model is a fine-tuned version of [google/vit-huge-patch14-224-in21k](https://huggingface.co/google/vit-huge-patch14-224-in21k) on the ahishamm/HAM_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4411
- Accuracy: 0.8554
- Recall: 0.8554
- F1: 0.8554
- Precision: 0.8554
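As a quick illustration, a minimal classification sketch with the `image-classification` pipeline (the image path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint from the Hub
classifier = pipeline("image-classification", model="ahishamm/vit-huge-HAM-10000-sharpened-patch-14")

# Classify a local skin-lesion image (path is a placeholder)
print(classifier("lesion.jpg"))
```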
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.6177 | 0.2 | 100 | 0.7082 | 0.7591 | 0.7591 | 0.7591 | 0.7591 |
| 0.6848 | 0.4 | 200 | 0.6570 | 0.7631 | 0.7631 | 0.7631 | 0.7631 |
| 0.622 | 0.6 | 300 | 0.5880 | 0.7920 | 0.7920 | 0.7920 | 0.7920 |
| 0.5887 | 0.8 | 400 | 0.5599 | 0.7965 | 0.7965 | 0.7965 | 0.7965 |
| 0.4812 | 1.0 | 500 | 0.5364 | 0.8010 | 0.8010 | 0.8010 | 0.8010 |
| 0.4013 | 1.2 | 600 | 0.4874 | 0.8249 | 0.8249 | 0.8249 | 0.8249 |
| 0.3987 | 1.4 | 700 | 0.4533 | 0.8354 | 0.8354 | 0.8354 | 0.8354 |
| 0.4118 | 1.6 | 800 | 0.4540 | 0.8424 | 0.8424 | 0.8424 | 0.8424 |
| 0.3272 | 1.8 | 900 | 0.4536 | 0.8254 | 0.8254 | 0.8254 | 0.8254 |
| 0.3318 | 2.0 | 1000 | 0.4411 | 0.8554 | 0.8554 | 0.8554 | 0.8554 |
| 0.0859 | 2.2 | 1100 | 0.4641 | 0.8519 | 0.8519 | 0.8519 | 0.8519 |
| 0.1026 | 2.4 | 1200 | 0.4692 | 0.8554 | 0.8554 | 0.8554 | 0.8554 |
| 0.0934 | 2.59 | 1300 | 0.4555 | 0.8474 | 0.8474 | 0.8474 | 0.8474 |
| 0.1084 | 2.79 | 1400 | 0.5017 | 0.8454 | 0.8454 | 0.8454 | 0.8454 |
| 0.0603 | 2.99 | 1500 | 0.4803 | 0.8599 | 0.8599 | 0.8599 | 0.8599 |
| 0.013 | 3.19 | 1600 | 0.4905 | 0.8633 | 0.8633 | 0.8633 | 0.8633 |
| 0.0585 | 3.39 | 1700 | 0.5305 | 0.8678 | 0.8678 | 0.8678 | 0.8678 |
| 0.0322 | 3.59 | 1800 | 0.5342 | 0.8648 | 0.8648 | 0.8648 | 0.8648 |
| 0.0086 | 3.79 | 1900 | 0.5134 | 0.8668 | 0.8668 | 0.8668 | 0.8668 |
| 0.0275 | 3.99 | 2000 | 0.5136 | 0.8693 | 0.8693 | 0.8693 | 0.8693 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
findnitai/FaceGen
|
findnitai
| 2023-06-25T13:25:03Z | 138 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-24T03:47:05Z |
---
license: apache-2.0
pipeline_tag: text-to-image
---
A few examples of unique faces generated by the model, which was trained on the FFHQ dataset.

|
lucasbertola/q-FrozenLake-v1-8x8-noSlipper
|
lucasbertola
| 2023-06-25T13:23:29Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"Lucas_is_the_best",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T13:18:21Z |
---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
- Lucas_is_the_best
model-index:
- name: q-FrozenLake-v1-8x8-noSlipper
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1-8x8-no_slippery**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1-8x8-no_slippery**.
## Usage
```python
# `load_from_hub` here is assumed to be the pickle-loading helper from the Deep RL course notebook
import gym

model = load_from_hub(repo_id="lucasbertola/q-FrozenLake-v1-8x8-noSlipper", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
S3S3/q-Taxi-v3
|
S3S3
| 2023-06-25T13:05:40Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T13:05:36Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.44 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` here is assumed to be the pickle-loading helper from the Deep RL course notebook
import gym

model = load_from_hub(repo_id="S3S3/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
OpenDILabCommunity/PongNoFrameskip-v4-PPOOffPolicy
|
OpenDILabCommunity
| 2023-06-25T12:47:43Z | 0 | 0 |
pytorch
|
[
"pytorch",
"deep-reinforcement-learning",
"reinforcement-learning",
"DI-engine",
"PongNoFrameskip-v4",
"en",
"license:apache-2.0",
"region:us"
] |
reinforcement-learning
| 2023-06-25T12:47:02Z |
---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- PongNoFrameskip-v4
benchmark_name: OpenAI/Gym/Atari
task_name: PongNoFrameskip-v4
pipeline_tag: reinforcement-learning
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: OpenAI/Gym/Atari-PongNoFrameskip-v4
type: OpenAI/Gym/Atari-PongNoFrameskip-v4
metrics:
- type: mean_reward
value: 21.0 +/- 0.0
name: mean_reward
---
# Play **PongNoFrameskip-v4** with **PPO** Policy
## Model Description
<!-- Provide a longer summary of what this model is. -->
This is a simple **PPO** implementation to OpenAI/Gym/Atari **PongNoFrameskip-v4** using the [DI-engine library](https://github.com/opendilab/di-engine) and the [DI-zoo](https://github.com/opendilab/DI-engine/tree/main/dizoo).
**DI-engine** is a Python library for solving general decision intelligence problems, built on implementations of reinforcement learning frameworks in PyTorch and JAX. The library aims to standardize the reinforcement learning workflow across different algorithms, benchmarks, and environments, and to support both academic research and prototype applications. In addition, self-customized training pipelines and applications are supported by reusing the different abstraction levels of the DI-engine reinforcement learning framework.
## Model Usage
### Install the Dependencies
<details close>
<summary>(Click for Details)</summary>
```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
pip3 install DI-engine[common_env]
```
</details>
### Git Clone from Huggingface and Run the Model
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import PPOOffPolicyAgent
from ding.config import Config
from easydict import EasyDict
import torch
# Pull model from files which are git cloned from huggingface
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py"))
# Instantiate the agent
agent = PPOOffPolicyAgent(
env="PongNoFrameskip", exp_name="PongNoFrameskip-v4-PPOOffPolicy", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
### Run Model by Using Huggingface_ding
<details close>
<summary>(Click for Details)</summary>
```shell
# running with trained model
python3 -u run.py
```
**run.py**
```python
from ding.bonus import PPOOffPolicyAgent
from huggingface_ding import pull_model_from_hub
# Pull model from Hugggingface hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/PongNoFrameskip-v4-PPOOffPolicy")
# Instantiate the agent
agent = PPOOffPolicyAgent(
env="PongNoFrameskip", exp_name="PongNoFrameskip-v4-PPOOffPolicy", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent performance
agent.deploy(enable_save_replay=True)
```
</details>
## Model Training
### Train the Model and Push to Huggingface_hub
<details close>
<summary>(Click for Details)</summary>
```shell
#Training Your Own Agent
python3 -u train.py
```
**train.py**
```python
from ding.bonus import PPOOffPolicyAgent
from huggingface_ding import push_model_to_hub
# Instantiate the agent
agent = PPOOffPolicyAgent(env="PongNoFrameskip", exp_name="PongNoFrameskip-v4-PPOOffPolicy")
# Train the agent
return_ = agent.train(step=int(10000000))
# Push model to huggingface hub
push_model_to_hub(
agent=agent.best,
env_name="OpenAI/Gym/Atari",
task_name="PongNoFrameskip-v4",
algo_name="PPO",
wandb_url=return_.wandb_url,
github_repo_url="https://github.com/opendilab/DI-engine",
github_doc_model_url="https://di-engine-docs.readthedocs.io/en/latest/12_policies/ppo.html",
github_doc_env_url="https://di-engine-docs.readthedocs.io/en/latest/13_envs/atari.html",
installation_guide="pip3 install DI-engine[common_env]",
usage_file_by_git_clone="./ppo_offpolicy/pong_ppo_offpolicy_deploy.py",
usage_file_by_huggingface_ding="./ppo_offpolicy/pong_ppo_offpolicy_download.py",
train_file="./ppo_offpolicy/pong_ppo_offpolicy.py",
repo_id="OpenDILabCommunity/PongNoFrameskip-v4-PPOOffPolicy"
)
```
</details>
**Configuration**
<details close>
<summary>(Click for Details)</summary>
```python
exp_config = {
'env': {
'manager': {
'episode_num': float("inf"),
'max_retry': 1,
'retry_type': 'reset',
'auto_reset': True,
'step_timeout': None,
'reset_timeout': None,
'retry_waiting_time': 0.1,
'cfg_type': 'BaseEnvManagerDict'
},
'stop_value': 20,
'n_evaluator_episode': 8,
'collector_env_num': 8,
'evaluator_env_num': 8,
'env_id': 'PongNoFrameskip-v4',
'frame_stack': 4
},
'policy': {
'model': {
'obs_shape': [4, 84, 84],
'action_shape': 6,
'action_space': 'discrete',
'encoder_hidden_size_list': [64, 64, 128],
'actor_head_hidden_size': 128,
'critic_head_hidden_size': 128
},
'learn': {
'learner': {
'train_iterations': 1000000000,
'dataloader': {
'num_workers': 0
},
'log_policy': True,
'hook': {
'load_ckpt_before_run': '',
'log_show_after_iter': 100,
'save_ckpt_after_iter': 10000,
'save_ckpt_after_run': True
},
'cfg_type': 'BaseLearnerDict'
},
'update_per_collect': 10,
'batch_size': 320,
'learning_rate': 0.0003,
'value_weight': 0.5,
'entropy_weight': 0.001,
'clip_ratio': 0.2,
'adv_norm': True,
'ignore_done': False,
'grad_clip_type': 'clip_norm',
'grad_clip_value': 0.5
},
'collect': {
'collector': {},
'unroll_len': 1,
'discount_factor': 0.99,
'gae_lambda': 0.95,
'n_sample': 3200
},
'eval': {
'evaluator': {
'eval_freq': 1000,
'render': {
'render_freq': -1,
'mode': 'train_iter'
},
'cfg_type': 'InteractionSerialEvaluatorDict',
'stop_value': 20,
'n_episode': 8
}
},
'other': {
'replay_buffer': {
'replay_buffer_size': 10000
}
},
'on_policy': False,
'cuda': True,
'multi_gpu': False,
'bp_update_sync': True,
'traj_len_inf': False,
'type': 'ppo',
'priority': False,
'priority_IS_weight': False,
'nstep_return': False,
'nstep': 3,
'transition_with_policy_data': True,
'cfg_type': 'PPOOffPolicyDict',
'recompute_adv': True,
'action_space': 'discrete'
},
'exp_name': 'PongNoFrameskip-v4-PPOOffPolicy',
'wandb_logger': {
'gradient_logger': True,
'video_logger': True,
'plot_logger': True,
'action_logger': True,
'return_logger': False
},
'seed': 0
}
```
</details>
**Training Procedure**
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- **Weights & Biases (wandb):** [monitor link](https://wandb.ai/zjowowen/PongNoFrameskip-v4-PPOOffPolicy)
## Model Information
<!-- Provide the basic links for the model. -->
- **Github Repository:** [repo link](https://github.com/opendilab/DI-engine)
- **Doc**: [DI-engine-docs Algorithm link](https://di-engine-docs.readthedocs.io/en/latest/12_policies/ppo.html)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/PongNoFrameskip-v4-PPOOffPolicy/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/PongNoFrameskip-v4-PPOOffPolicy/blob/main/replay.mp4)
<!-- Provide the size information for the model. -->
- **Parameters total size:** 11501.55 KB
- **Last Update Date:** 2023-06-25
## Environments
<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
- **Benchmark:** OpenAI/Gym/Atari
- **Task:** PongNoFrameskip-v4
- **Gym version:** 0.25.1
- **DI-engine version:** v0.4.8
- **PyTorch version:** 1.7.1
- **Doc**: [DI-engine-docs Environments link](https://di-engine-docs.readthedocs.io/en/latest/13_envs/atari.html)
|
binwang/faceval_bart_large_samsum
|
binwang
| 2023-06-25T12:46:05Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T11:50:20Z |
# This is the BART-large model trained on the SAMSum dataset
|
tabtoyou/KoLLaVA-KoVicuna-7b
|
tabtoyou
| 2023-06-25T12:32:58Z | 97 | 13 |
transformers
|
[
"transformers",
"pytorch",
"llava",
"text-generation",
"LLaVA",
"KoVicuna",
"KoLLaVA",
"KoAlpaca",
"CLIP",
"ko",
"dataset:tabtoyou/KoLLaVA-Instruct-150k",
"dataset:tabtoyou/KoLLaVA-CC3M-Pretrain-595K",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-12T09:14:54Z |
---
license: apache-2.0
datasets:
- tabtoyou/KoLLaVA-Instruct-150k
- tabtoyou/KoLLaVA-CC3M-Pretrain-595K
language:
- ko
library_name: transformers
tags:
- LLaVA
- KoVicuna
- KoLLaVA
- KoAlpaca
- CLIP
---
# KoLLaVA : Korean Large Language and Vision Assistant (feat. LLaVA)
This model is a large multimodal model (LMM) that combines the LLM [KoVicuna](https://huggingface.co/junelee/ko_vicuna_7b) with the visual encoder of CLIP ([ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14)), trained on a [Korean visual-instruction dataset](https://huggingface.co/datasets/tabtoyou/KoLLaVA-Instruct-150k).
Detailed code is available in the [KoLLaVA GitHub repository](https://github.com/tabtoyou/KoLLaVA).
### Training hyperparameters
* learning rate : 2e-5
* train_batch_size: 16
* distributed_type: multi-GPU (A100 80G)
* num_devices: 4
* gradient_accumulation_steps: 1
* total_train_batch_size: 64
* total_eval_batch_size: 16
* lr_scheduler_type: cosine
* num_epochs: 1
Model License: Apache License 2.0
|
ahishamm/vit-base-HAM-10000-sharpened-large-patch-32
|
ahishamm
| 2023-06-25T12:32:21Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T11:51:12Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-HAM-10000-sharpened-large-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-HAM-10000-sharpened-large-patch-32
This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the ahishamm/HAM_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4582
- Accuracy: 0.8404
- Recall: 0.8404
- F1: 0.8404
- Precision: 0.8404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.6739 | 0.2 | 100 | 0.7775 | 0.7257 | 0.7257 | 0.7257 | 0.7257 |
| 0.6922 | 0.4 | 200 | 0.6455 | 0.7711 | 0.7711 | 0.7711 | 0.7711 |
| 0.8219 | 0.6 | 300 | 0.7582 | 0.7426 | 0.7426 | 0.7426 | 0.7426 |
| 0.6801 | 0.8 | 400 | 0.6363 | 0.7651 | 0.7651 | 0.7651 | 0.7651 |
| 0.5499 | 1.0 | 500 | 0.6231 | 0.7751 | 0.7751 | 0.7751 | 0.7751 |
| 0.5156 | 1.2 | 600 | 0.6399 | 0.7761 | 0.7761 | 0.7761 | 0.7761 |
| 0.4478 | 1.4 | 700 | 0.5324 | 0.8020 | 0.8020 | 0.8020 | 0.8020 |
| 0.4364 | 1.6 | 800 | 0.5597 | 0.7970 | 0.7970 | 0.7970 | 0.7970 |
| 0.4545 | 1.8 | 900 | 0.5212 | 0.8115 | 0.8115 | 0.8115 | 0.8115 |
| 0.4294 | 2.0 | 1000 | 0.4926 | 0.8264 | 0.8264 | 0.8264 | 0.8264 |
| 0.135 | 2.2 | 1100 | 0.5448 | 0.8204 | 0.8204 | 0.8204 | 0.8204 |
| 0.2628 | 2.4 | 1200 | 0.4916 | 0.8304 | 0.8304 | 0.8304 | 0.8304 |
| 0.2577 | 2.59 | 1300 | 0.4582 | 0.8404 | 0.8404 | 0.8404 | 0.8404 |
| 0.2093 | 2.79 | 1400 | 0.5079 | 0.8344 | 0.8344 | 0.8344 | 0.8344 |
| 0.1415 | 2.99 | 1500 | 0.4760 | 0.8439 | 0.8439 | 0.8439 | 0.8439 |
| 0.0686 | 3.19 | 1600 | 0.5379 | 0.8444 | 0.8444 | 0.8444 | 0.8444 |
| 0.1031 | 3.39 | 1700 | 0.5572 | 0.8384 | 0.8384 | 0.8384 | 0.8384 |
| 0.102 | 3.59 | 1800 | 0.5343 | 0.8464 | 0.8464 | 0.8464 | 0.8464 |
| 0.0531 | 3.79 | 1900 | 0.5482 | 0.8479 | 0.8479 | 0.8479 | 0.8479 |
| 0.0756 | 3.99 | 2000 | 0.5454 | 0.8454 | 0.8454 | 0.8454 | 0.8454 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
PhongLe1311/mt5-small-finetuned-amazon-en-es
|
PhongLe1311
| 2023-06-25T12:31:02Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-21T05:32:02Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0340
- Rouge1: 17.3066
- Rouge2: 8.5372
- Rougel: 16.9577
- Rougelsum: 17.1267
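As a quick illustration, a minimal summarization sketch with the pipeline API (the review text is only an example):
```python
from transformers import pipeline

# Load the fine-tuned mT5 checkpoint from the Hub
summarizer = pipeline("summarization", model="PhongLe1311/mt5-small-finetuned-amazon-en-es")

# Summarize an example review (input text is illustrative)
review = "I bought this coffee maker last month and it has worked flawlessly every single morning."
print(summarizer(review, max_length=30)[0]["summary_text"])
```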
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 7.0197 | 1.0 | 1209 | 3.3037 | 13.7225 | 5.4609 | 13.1771 | 13.2052 |
| 3.9145 | 2.0 | 2418 | 3.1418 | 15.6039 | 7.5306 | 14.9366 | 14.865 |
| 3.5987 | 3.0 | 3627 | 3.0970 | 17.425 | 8.6602 | 16.9049 | 17.0042 |
| 3.4274 | 4.0 | 4836 | 3.0672 | 16.7739 | 8.0707 | 16.2041 | 16.2127 |
| 3.3241 | 5.0 | 6045 | 3.0648 | 16.6489 | 8.2121 | 16.3527 | 16.4147 |
| 3.2468 | 6.0 | 7254 | 3.0444 | 17.3052 | 8.6923 | 16.9398 | 17.0233 |
| 3.2116 | 7.0 | 8463 | 3.0370 | 17.563 | 8.7613 | 17.1755 | 17.3348 |
| 3.1821 | 8.0 | 9672 | 3.0340 | 17.3066 | 8.5372 | 16.9577 | 17.1267 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
emilianJR/HRA_hyperrealism_art
|
emilianJR
| 2023-06-25T12:30:23Z | 52 | 2 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-25T12:20:01Z |
---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Diffusers model for this Stable Diffusion checkpoint:
https://civitai.com/models/80515/hrahyperrealism-art
**emilianJR/HRA_hyperrealism_art** is the Hugging Face Diffusers checkpoint that you can use with **diffusers.StableDiffusionPipeline()**.
Examples | Examples | Examples
---- | ---- | ----
 |  | 
 |  | 
-------
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "emilianJR/HRA_hyperrealism_art"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "YOUR PROMPT"
image = pipe(prompt).images[0]
image.save("image.png")
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
Tri1/opus-mt-en-ro-finetuned-eng-to-para
|
Tri1
| 2023-06-25T12:21:10Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T09:20:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-eng-to-para
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-eng-to-para
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0821
- Bleu: 22.2055
- Gen Len: 21.7942
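As a quick illustration, a minimal inference sketch with the `text2text-generation` pipeline (the input sentence is only an example):
```python
from transformers import pipeline

# Load the fine-tuned Marian checkpoint from the Hub
paraphraser = pipeline("text2text-generation", model="Tri1/opus-mt-en-ro-finetuned-eng-to-para")

# Generate output for an example English sentence
print(paraphraser("The weather is lovely today.", max_length=40)[0]["generated_text"])
```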
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.0865 | 1.0 | 6250 | 0.0821 | 22.2055 | 21.7942 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
gb16001/sovits4.1_ATRI
|
gb16001
| 2023-06-25T12:03:50Z | 0 | 2 | null |
[
"dataset:Yusen/Sovits_ATRI",
"license:agpl-3.0",
"region:us"
] | null | 2023-06-25T10:08:35Z |
---
license: agpl-3.0
datasets:
- Yusen/Sovits_ATRI
---
### Abstract
"speech_encoder": "vec768l12".
More training parameters can be found in ATRI_config.json.
The SoVITS, diffusion, and k-means models are all included; take what you need.
### Performance
A vocal-only demo is included in the folder.
|
sxndypz/rvc-v1-models
|
sxndypz
| 2023-06-25T11:57:38Z | 0 | 0 | null |
[
"RVC v1",
"audio-to-audio",
"ja",
"license:openrail",
"region:us"
] |
audio-to-audio
| 2023-06-25T11:53:04Z |
---
license: openrail
language:
- ja
pipeline_tag: audio-to-audio
tags:
- RVC v1
---
|
ahishamm/vit-base-HAM-10000-sharpened-large-patch-16
|
ahishamm
| 2023-06-25T11:49:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T10:38:43Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-HAM-10000-sharpened-large-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-HAM-10000-sharpened-large-patch-16
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the ahishamm/HAM_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5504
- Accuracy: 0.8075
- Recall: 0.8075
- F1: 0.8075
- Precision: 0.8075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.9294 | 0.2 | 100 | 1.0377 | 0.6733 | 0.6733 | 0.6733 | 0.6733 |
| 1.0067 | 0.4 | 200 | 0.8976 | 0.6813 | 0.6813 | 0.6813 | 0.6813 |
| 1.0081 | 0.6 | 300 | 0.9345 | 0.6773 | 0.6773 | 0.6773 | 0.6773 |
| 0.9326 | 0.8 | 400 | 0.8494 | 0.6883 | 0.6883 | 0.6883 | 0.6883 |
| 0.8243 | 1.0 | 500 | 0.7481 | 0.7267 | 0.7267 | 0.7267 | 0.7267 |
| 0.7408 | 1.2 | 600 | 0.7277 | 0.7317 | 0.7317 | 0.7317 | 0.7317 |
| 0.6844 | 1.4 | 700 | 0.7114 | 0.7392 | 0.7392 | 0.7392 | 0.7392 |
| 0.7411 | 1.6 | 800 | 0.6772 | 0.7416 | 0.7416 | 0.7416 | 0.7416 |
| 0.7138 | 1.8 | 900 | 0.7136 | 0.7377 | 0.7377 | 0.7377 | 0.7377 |
| 0.5838 | 2.0 | 1000 | 0.6625 | 0.7521 | 0.7521 | 0.7521 | 0.7521 |
| 0.5315 | 2.2 | 1100 | 0.6104 | 0.7776 | 0.7776 | 0.7776 | 0.7776 |
| 0.6391 | 2.4 | 1200 | 0.6317 | 0.7591 | 0.7591 | 0.7591 | 0.7591 |
| 0.6903 | 2.59 | 1300 | 0.6098 | 0.7656 | 0.7656 | 0.7656 | 0.7656 |
| 0.5798 | 2.79 | 1400 | 0.6211 | 0.7751 | 0.7751 | 0.7751 | 0.7751 |
| 0.5448 | 2.99 | 1500 | 0.5824 | 0.7820 | 0.7820 | 0.7820 | 0.7820 |
| 0.4523 | 3.19 | 1600 | 0.5951 | 0.7776 | 0.7776 | 0.7776 | 0.7776 |
| 0.4485 | 3.39 | 1700 | 0.6114 | 0.7815 | 0.7815 | 0.7815 | 0.7815 |
| 0.487 | 3.59 | 1800 | 0.5730 | 0.7950 | 0.7950 | 0.7950 | 0.7950 |
| 0.4104 | 3.79 | 1900 | 0.5597 | 0.7965 | 0.7965 | 0.7965 | 0.7965 |
| 0.4468 | 3.99 | 2000 | 0.5504 | 0.8075 | 0.8075 | 0.8075 | 0.8075 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Smaraa/bart-text-simplification_1e4_adafactor
|
Smaraa
| 2023-06-25T11:45:02Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-24T11:26:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-text-simplification_1e4_adafactor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-text-simplification_1e4_adafactor
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8377
- Rouge1: 60.5348
- Rouge2: 41.6762
- Rougel: 55.5994
- Rougelsum: 55.5841
- Gen Len: 18.7487
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.1741 | 1.0 | 1163 | 0.6416 | 62.4 | 44.1316 | 57.9029 | 57.8644 | 18.8482 |
| 0.1553 | 2.0 | 2326 | 0.6504 | 62.2879 | 43.9281 | 57.4714 | 57.461 | 18.8063 |
| 0.1369 | 3.0 | 3489 | 0.6656 | 61.2481 | 42.605 | 56.5118 | 56.4636 | 18.733 |
| 0.1286 | 4.0 | 4652 | 0.6906 | 61.3015 | 42.1608 | 56.2688 | 56.1707 | 18.7487 |
| 0.1141 | 5.0 | 5815 | 0.7082 | 62.1771 | 43.1481 | 57.0231 | 57.0673 | 18.911 |
| 0.1016 | 6.0 | 6978 | 0.7188 | 61.408 | 42.2759 | 56.1699 | 56.1779 | 18.8377 |
| 0.0961 | 7.0 | 8141 | 0.7334 | 60.802 | 41.9149 | 56.0171 | 56.0279 | 18.8168 |
| 0.0869 | 8.0 | 9304 | 0.7509 | 60.6564 | 41.3587 | 55.4436 | 55.468 | 18.7382 |
| 0.0783 | 9.0 | 10467 | 0.7713 | 60.3551 | 41.8074 | 55.6856 | 55.679 | 18.7173 |
| 0.0751 | 10.0 | 11630 | 0.7785 | 60.378 | 41.6134 | 55.5217 | 55.505 | 18.8325 |
| 0.0679 | 11.0 | 12793 | 0.7835 | 60.5835 | 41.6735 | 55.5469 | 55.5791 | 18.7435 |
| 0.0619 | 12.0 | 13956 | 0.8012 | 60.8152 | 41.2014 | 55.7186 | 55.7233 | 18.9424 |
| 0.0611 | 13.0 | 15119 | 0.8091 | 60.8188 | 41.8074 | 55.6684 | 55.8026 | 18.7958 |
| 0.0568 | 14.0 | 16282 | 0.8175 | 60.9209 | 41.5689 | 55.8838 | 55.8642 | 18.7277 |
| 0.0527 | 15.0 | 17445 | 0.8250 | 61.0215 | 41.9079 | 55.9018 | 55.8709 | 18.9162 |
| 0.0524 | 16.0 | 18608 | 0.8317 | 60.8214 | 41.6554 | 55.8053 | 55.7947 | 18.7277 |
| 0.0504 | 17.0 | 19771 | 0.8310 | 60.6533 | 41.6507 | 55.9289 | 55.9426 | 18.7958 |
| 0.0486 | 18.0 | 20934 | 0.8345 | 60.4722 | 41.5319 | 55.3384 | 55.3655 | 18.6859 |
| 0.0491 | 19.0 | 22097 | 0.8379 | 60.4012 | 41.2452 | 55.5059 | 55.5553 | 18.8115 |
| 0.0489 | 20.0 | 23260 | 0.8377 | 60.5348 | 41.6762 | 55.5994 | 55.5841 | 18.7487 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
PraveenJesu/openai-whisper-medium-peft-lora-colab
|
PraveenJesu
| 2023-06-25T11:43:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T11:43:33Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
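Given the 8-bit settings above, a minimal sketch of loading the adapter with PEFT (the base checkpoint id is an assumption inferred from the repo name):
```python
from transformers import WhisperForConditionalGeneration
from peft import PeftModel

# Load the (assumed) Whisper-medium base model in 8-bit and attach the LoRA adapter
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-medium",  # assumed base checkpoint
    load_in_8bit=True,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "PraveenJesu/openai-whisper-medium-peft-lora-colab")
```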
### Framework versions
- PEFT 0.4.0.dev0
|
Erfan2001/distilbert_NoTokenized
|
Erfan2001
| 2023-06-25T11:43:35Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-24T22:00:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xxx
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xxx
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6856
- Accuracy: 0.7758
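As a quick illustration, a minimal inference sketch with the `text-classification` pipeline (the input text is only an example; label names depend on the training data):
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT checkpoint from the Hub
classifier = pipeline("text-classification", model="Erfan2001/distilbert_NoTokenized")

# Classify an example document (input text is illustrative)
print(classifier("The central bank announced a new interest rate policy today."))
```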
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7996 | 1.0 | 4284 | 0.7921 | 0.7287 |
| 0.5539 | 2.0 | 8568 | 0.6856 | 0.7758 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
edfryo/bangkelser
|
edfryo
| 2023-06-25T11:39:27Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-05-09T11:58:00Z |
---
license: bigscience-openrail-m
---
|
KelvinHu/ppo-Huggy
|
KelvinHu
| 2023-06-25T10:50:21Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-25T09:44:43Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: KelvinHu/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
czz23/journal-setfit-model
|
czz23
| 2023-06-25T10:37:43Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-25T10:34:44Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# czz23/journal-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("/var/folders/hy/pfb50fjs4zd8cznz_yjwyw8w0000gp/T/tmpg6l_fkqj/czz23/journal-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
siddh4rth/fintuned-falcon-7b-truthful-qa
|
siddh4rth
| 2023-06-25T10:36:25Z | 4 | 0 |
peft
|
[
"peft",
"RefinedWebModel",
"custom_code",
"4-bit",
"region:us"
] | null | 2023-06-25T09:46:47Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
- PEFT 0.4.0.dev0
|
jiyuanq/falcon-40b-instruct-gptq-128g-act
|
jiyuanq
| 2023-06-25T10:35:13Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"RefinedWeb",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T08:31:32Z |
---
library_name: transformers
pipeline_tag: text-generation
---
falcon-40b-instruct quantized with GPTQ using the script in https://github.com/huggingface/text-generation-inference/pull/438
- group size: 128
- act order: true
- nsamples: 128
- dataset: wikitext2
|
ahishamm/vit-base-HAM-10000-sharpened-patch-32
|
ahishamm
| 2023-06-25T10:35:04Z | 192 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T10:06:47Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-HAM-10000-sharpened-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-HAM-10000-sharpened-patch-32
This model is a fine-tuned version of [google/vit-base-patch32-224-in21k](https://huggingface.co/google/vit-base-patch32-224-in21k) on the ahishamm/HAM_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4806
- Accuracy: 0.8369
- Recall: 0.8369
- F1: 0.8369
- Precision: 0.8369
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.8099 | 0.2 | 100 | 0.8060 | 0.7247 | 0.7247 | 0.7247 | 0.7247 |
| 0.7437 | 0.4 | 200 | 0.7020 | 0.7541 | 0.7541 | 0.7541 | 0.7541 |
| 0.7982 | 0.6 | 300 | 0.7352 | 0.7411 | 0.7411 | 0.7411 | 0.7411 |
| 0.7646 | 0.8 | 400 | 0.6603 | 0.7626 | 0.7626 | 0.7626 | 0.7626 |
| 0.6141 | 1.0 | 500 | 0.6373 | 0.7771 | 0.7771 | 0.7771 | 0.7771 |
| 0.5934 | 1.2 | 600 | 0.6141 | 0.7820 | 0.7820 | 0.7820 | 0.7820 |
| 0.5524 | 1.4 | 700 | 0.5621 | 0.8030 | 0.8030 | 0.8030 | 0.8030 |
| 0.5057 | 1.6 | 800 | 0.6074 | 0.7855 | 0.7855 | 0.7855 | 0.7855 |
| 0.5519 | 1.8 | 900 | 0.5486 | 0.7990 | 0.7990 | 0.7990 | 0.7990 |
| 0.4784 | 2.0 | 1000 | 0.5382 | 0.8060 | 0.8060 | 0.8060 | 0.8060 |
| 0.2592 | 2.2 | 1100 | 0.5237 | 0.8165 | 0.8165 | 0.8165 | 0.8165 |
| 0.3872 | 2.4 | 1200 | 0.5345 | 0.8120 | 0.8120 | 0.8120 | 0.8120 |
| 0.2506 | 2.59 | 1300 | 0.5061 | 0.8214 | 0.8214 | 0.8214 | 0.8214 |
| 0.2907 | 2.79 | 1400 | 0.4940 | 0.8354 | 0.8354 | 0.8354 | 0.8354 |
| 0.2436 | 2.99 | 1500 | 0.4806 | 0.8369 | 0.8369 | 0.8369 | 0.8369 |
| 0.1472 | 3.19 | 1600 | 0.5231 | 0.8219 | 0.8219 | 0.8219 | 0.8219 |
| 0.1441 | 3.39 | 1700 | 0.5452 | 0.8329 | 0.8329 | 0.8329 | 0.8329 |
| 0.1327 | 3.59 | 1800 | 0.5410 | 0.8354 | 0.8354 | 0.8354 | 0.8354 |
| 0.0615 | 3.79 | 1900 | 0.5473 | 0.8424 | 0.8424 | 0.8424 | 0.8424 |
| 0.0943 | 3.99 | 2000 | 0.5490 | 0.8409 | 0.8409 | 0.8409 | 0.8409 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
97jmlr/a2c-PandaReachDense-v2
|
97jmlr
| 2023-06-25T10:27:45Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T10:26:44Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.79 +/- 1.05
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub(repo_id="97jmlr/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
Sp1786/mutliclass-sentiment-analysis-bert
|
Sp1786
| 2023-06-25T10:22:55Z | 4 | 0 |
transformers
|
[
"transformers",
"bert",
"code",
"text-classification",
"en",
"dataset:Sp1786/multiclass-sentiment-analysis-dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-21T11:23:59Z |
---
license: apache-2.0
datasets:
- Sp1786/multiclass-sentiment-analysis-dataset
language:
- en
metrics:
- bleu
- sacrebleu
library_name: transformers
pipeline_tag: text-classification
tags:
- code
---
|
kbondar17/test-trainer
|
kbondar17
| 2023-06-25T10:12:41Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-25T10:06:32Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: test-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4009
- F1: 0.6363
- Roc Auc: 0.7682
- Accuracy: 0.6079
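The F1/ROC-AUC metrics above hint at a multi-label setup, but the card does not say so explicitly; the sketch below therefore applies a sigmoid with a 0.5 threshold as an assumption, not as documented behavior of this model.
```python
# A minimal inference sketch, assuming a multi-label classification head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "kbondar17/test-trainer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Great service, but the delivery was late.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]  # sigmoid thresholding: multi-label assumption
print([(i, round(p, 3)) for i, p in enumerate(probs.tolist()) if p > 0.5])
```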
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 125 | 0.2975 | 0.5710 | 0.7129 | 0.4693 |
| No log | 2.0 | 250 | 0.3742 | 0.6226 | 0.7621 | 0.6013 |
| No log | 3.0 | 375 | 0.4009 | 0.6363 | 0.7682 | 0.6079 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dhruvil237/userutterance_classification_verplus
|
dhruvil237
| 2023-06-25T10:05:26Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"doi:10.57967/hf/0811",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-05T12:20:52Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: userutterance_classification_verplus
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9619354838709677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# userutterance_classification_verplus
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2270
- Accuracy: 0.9619
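Given the clinc_oos (plus) fine-tuning above, the checkpoint should work directly with the text-classification pipeline; the utterance below is only an illustration.
```python
# A minimal intent-classification sketch over the clinc_oos label space.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="dhruvil237/userutterance_classification_verplus",
)
print(classifier("what is the balance on my savings account?"))
```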
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
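For reference, these settings map roughly onto `TrainingArguments` as sketched below; this is a reconstruction from the list above, not the original training script.
```python
# A rough reconstruction of the listed hyperparameters; not the original script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="userutterance_classification_verplus",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=6,
)
```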
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.0219 | 0.21 | 200 | 4.9813 | 0.0077 |
| 4.8915 | 0.42 | 400 | 4.5741 | 0.1155 |
| 4.2736 | 0.63 | 600 | 3.5359 | 0.4719 |
| 3.2701 | 0.84 | 800 | 2.4291 | 0.7429 |
| 2.3578 | 1.05 | 1000 | 1.5793 | 0.8413 |
| 1.5695 | 1.26 | 1200 | 1.0029 | 0.8994 |
| 1.0412 | 1.47 | 1400 | 0.6475 | 0.9187 |
| 0.7034 | 1.68 | 1600 | 0.4439 | 0.9303 |
| 0.501 | 1.89 | 1800 | 0.3400 | 0.9381 |
| 0.3187 | 2.1 | 2000 | 0.2793 | 0.9439 |
| 0.2185 | 2.31 | 2200 | 0.2538 | 0.9490 |
| 0.1669 | 2.52 | 2400 | 0.2210 | 0.9523 |
| 0.1081 | 2.73 | 2600 | 0.2225 | 0.9519 |
| 0.1004 | 2.94 | 2800 | 0.2136 | 0.9555 |
| 0.0665 | 3.14 | 3000 | 0.2078 | 0.9561 |
| 0.0509 | 3.35 | 3200 | 0.2155 | 0.9568 |
| 0.05 | 3.56 | 3400 | 0.2107 | 0.9581 |
| 0.0527 | 3.77 | 3600 | 0.2171 | 0.9568 |
| 0.0447 | 3.98 | 3800 | 0.2128 | 0.9590 |
| 0.0259 | 4.19 | 4000 | 0.2099 | 0.9587 |
| 0.0279 | 4.4 | 4200 | 0.2179 | 0.9577 |
| 0.0176 | 4.61 | 4400 | 0.2191 | 0.9574 |
| 0.0288 | 4.82 | 4600 | 0.2216 | 0.9590 |
| 0.0328 | 5.03 | 4800 | 0.2237 | 0.9606 |
| 0.0154 | 5.24 | 5000 | 0.2241 | 0.9616 |
| 0.0157 | 5.45 | 5200 | 0.2265 | 0.9603 |
| 0.023 | 5.66 | 5400 | 0.2276 | 0.9613 |
| 0.0178 | 5.87 | 5600 | 0.2270 | 0.9619 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
VilohitT/question_answering_majorproject
|
VilohitT
| 2023-06-25T09:46:03Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T09:46:01Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
bogdancazan/t5-small-newsela-biendata-with-domain-adaptation
|
bogdancazan
| 2023-06-25T09:45:44Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T11:56:49Z |
```python
training_args = TrainingArguments(
    output_dir='t5-small-newsela-biendata-with-domain-adaptation',
    num_train_epochs=20,
    warmup_steps=250,
    per_device_train_batch_size=BATCH_SIZE,
    weight_decay=0.01,
    learning_rate=2e-4,
    fp16=True,
    optim="adafactor",
)
```
| Step | Training Loss |
|:----:|:-------------:|
| 500  | 35.466600     |
| 1000 | 25.795400     |
| 1500 | 10.923200     |
| 2000 | 4.515500      |
```
TrainOutput(global_step=2320, training_loss=16.92537920721646, metrics={'train_runtime': 628.0033, 'train_samples_per_second': 472.418, 'train_steps_per_second': 3.694, 'total_flos': 0.0, 'train_loss': 16.92537920721646, 'epoch': 20.0})
```
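For completeness, generating simplified text from this checkpoint might look like the sketch below; using the raw sentence without a task prefix is an assumption, since the card does not document the input format.
```python
# A minimal generation sketch; the absence of a task prefix is an assumption.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "bogdancazan/t5-small-newsela-biendata-with-domain-adaptation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("The committee deliberated extensively before reaching a verdict.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```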
|
lucasbertola/ppo-LunarLander-v2
|
lucasbertola
| 2023-06-25T09:29:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T11:40:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 295.14 +/- 14.94
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; check the repository's file list.
checkpoint = load_from_hub("lucasbertola/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
sd-concepts-library/pokemon-raichu-sd-model
|
sd-concepts-library
| 2023-06-25T09:26:29Z | 0 | 0 | null |
[
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:mit",
"region:us"
] | null | 2023-06-25T09:26:28Z |
---
license: mit
base_model: stabilityai/stable-diffusion-2
---
### Pokemon Raichu - SD model on Stable Diffusion
This is the `<cat-toy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
tnvmadhav/food_classifier
|
tnvmadhav
| 2023-06-25T09:06:06Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T08:32:22Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tnvmadhav/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tnvmadhav/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4025
- Validation Loss: 0.3368
- Train Accuracy: 0.91
- Epoch: 4
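A minimal inference sketch follows; the TensorFlow framework flag matches the `tf` weights in this repository, and the image path is a placeholder.
```python
# A minimal image-classification sketch; the image path is a placeholder.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="tnvmadhav/food_classifier",
    framework="tf",  # the repository ships TensorFlow weights
)
print(classifier("some_food_photo.jpg"))
```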
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.8090 | 1.6205 | 0.817 | 0 |
| 1.2350 | 0.8021 | 0.879 | 1 |
| 0.7254 | 0.5466 | 0.899 | 2 |
| 0.5023 | 0.3927 | 0.914 | 3 |
| 0.4025 | 0.3368 | 0.91 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Shridipta-06/LunarLander-v2_unit8part1
|
Shridipta-06
| 2023-06-25T08:50:28Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T08:46:05Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -128.49 +/- 35.10
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Shridipta-06/LunarLander-v2_unit8part1'
'batch_size': 512
'minibatch_size': 128}
```
|
sang-kyung/ckpt
|
sang-kyung
| 2023-06-25T08:21:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-25T07:02:21Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - sang-kyung/ckpt
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained with the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
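A minimal sampling sketch with diffusers is shown below; the fp16 precision, CUDA device, and the prompt wording are assumptions.
```python
# A minimal sampling sketch; precision, device, and prompt wording are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("sang-kyung/ckpt", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```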
|
RoundtTble/dinov2_vits14_onnx
|
RoundtTble
| 2023-06-25T08:20:24Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2023-06-24T07:10:50Z |
# dinov2_vits14
## ONNX Model
Check this [PR](https://github.com/facebookresearch/dinov2/pull/129).
## Run
Run the Triton container:
```
make triton
```
```
docker logs dinov2_vits14_triton
=============================
== Triton Inference Server ==
=============================
NVIDIA Release 23.04 (build 58408265)
Triton Server Version 2.33.0
Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
WARNING: CUDA Minor Version Compatibility mode ENABLED.
Using driver version 525.105.17 which has support for CUDA 12.0. This container
was built with CUDA 12.1 and will be run in Minor Version Compatibility mode.
CUDA Forward Compatibility is preferred over Minor Version Compatibility for use
with this container but was unavailable:
[[Forward compatibility was attempted on non supported HW (CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE) cuInit()=804]]
See https://docs.nvidia.com/deploy/cuda-compatibility/ for details.
I0625 08:05:36.712010 1 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7f6c46000000' with size 268435456
I0625 08:05:36.712625 1 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0625 08:05:36.717785 1 model_lifecycle.cc:459] loading: dinov2_vits14:1
I0625 08:05:36.723707 1 onnxruntime.cc:2504] TRITONBACKEND_Initialize: onnxruntime
I0625 08:05:36.723725 1 onnxruntime.cc:2514] Triton TRITONBACKEND API version: 1.12
I0625 08:05:36.723731 1 onnxruntime.cc:2520] 'onnxruntime' TRITONBACKEND API version: 1.12
I0625 08:05:36.723735 1 onnxruntime.cc:2550] backend configuration:
{"cmdline":{"auto-complete-config":"true","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"6.000000","default-max-batch-size":"4"}}
I0625 08:05:36.770311 1 onnxruntime.cc:2608] TRITONBACKEND_ModelInitialize: dinov2_vits14 (version 1)
I0625 08:05:36.770781 1 onnxruntime.cc:666] skipping model configuration auto-complete for 'dinov2_vits14': inputs and outputs already specified
I0625 08:05:36.771205 1 onnxruntime.cc:2651] TRITONBACKEND_ModelInstanceInitialize: dinov2_vits14_0 (GPU device 0)
2023-06-25 08:05:37.157976034 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 465, index: 122, mask: {125, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.158142138 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 466, index: 123, mask: {62, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.158159030 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 467, index: 124, mask: {126, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.158174259 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 468, index: 125, mask: {63, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.165944431 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 344, index: 1, mask: {1, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.158230084 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 469, index: 126, mask: {127, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.169979079 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 347, index: 4, mask: {66, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.169927531 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 345, index: 2, mask: {65, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.169954703 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 346, index: 3, mask: {2, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.173982388 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 350, index: 7, mask: {4, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.173929448 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 348, index: 5, mask: {3, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.173954065 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 349, index: 6, mask: {67, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.181926759 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 351, index: 8, mask: {68, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.185932583 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 352, index: 9, mask: {5, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.189924821 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 353, index: 10, mask: {69, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.193940975 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 464, index: 121, mask: {61, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.194020786 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 357, index: 14, mask: {71, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.193940915 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 354, index: 11, mask: {6, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.193968147 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 355, index: 12, mask: {70, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.193992072 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 356, index: 13, mask: {7, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.197974211 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 360, index: 17, mask: {9, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.197928554 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 358, index: 15, mask: {8, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.197950686 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 359, index: 16, mask: {72, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.201924259 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 361, index: 18, mask: {73, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.205931957 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 362, index: 19, mask: {10, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.209926179 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 363, index: 20, mask: {74, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.213927705 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 364, index: 21, mask: {11, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.217799496 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 365, index: 22, mask: {75, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.217849460 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 366, index: 23, mask: {12, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.221966294 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 367, index: 24, mask: {76, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.221966304 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 463, index: 120, mask: {124, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.225931100 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 462, index: 119, mask: {60, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.225933645 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 368, index: 25, mask: {13, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.229929350 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 369, index: 26, mask: {77, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.233930445 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 370, index: 27, mask: {14, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.233930525 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 461, index: 118, mask: {123, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.237930518 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 371, index: 28, mask: {78, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.241927085 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 372, index: 29, mask: {15, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.245926977 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 373, index: 30, mask: {79, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.249931199 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 374, index: 31, mask: {16, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.253927515 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 375, index: 32, mask: {80, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.257925694 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 376, index: 33, mask: {17, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.261929715 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 377, index: 34, mask: {81, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.265966397 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 378, index: 35, mask: {18, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.269926725 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 379, index: 36, mask: {82, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.273931337 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 380, index: 37, mask: {19, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.281941021 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 381, index: 38, mask: {83, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.282017776 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 398, index: 55, mask: {28, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.282038465 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 382, index: 39, mask: {20, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.282090914 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 383, index: 40, mask: {84, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.286235010 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 385, index: 42, mask: {85, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.285955121 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 401, index: 58, mask: {93, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.282070957 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 399, index: 56, mask: {92, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.286082321 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 384, index: 41, mask: {21, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.285929422 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 400, index: 57, mask: {29, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.293926803 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 405, index: 62, mask: {95, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.289931018 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 402, index: 59, mask: {30, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.289956767 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 403, index: 60, mask: {94, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.301929004 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 388, index: 45, mask: {23, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.289975973 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 404, index: 61, mask: {31, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.294054945 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 406, index: 63, mask: {32, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.294078880 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 407, index: 64, mask: {96, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.314023441 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 409, index: 66, mask: {97, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.289931068 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 386, index: 43, mask: {22, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.318030297 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 411, index: 68, mask: {98, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.289956797 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 387, index: 44, mask: {86, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.301929014 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 408, index: 65, mask: {33, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.314096058 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 410, index: 67, mask: {34, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.334030890 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 414, index: 71, mask: {36, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.305931271 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 389, index: 46, mask: {87, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.321929038 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 390, index: 47, mask: {24, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.321948134 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 391, index: 48, mask: {88, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.321965006 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 392, index: 49, mask: {25, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.321981437 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 393, index: 50, mask: {89, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.321996396 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 394, index: 51, mask: {26, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322012065 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 395, index: 52, mask: {90, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322026713 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 396, index: 53, mask: {27, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322049907 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 397, index: 54, mask: {91, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322065276 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 460, index: 117, mask: {59, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322080735 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 425, index: 82, mask: {105, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322096315 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 426, index: 83, mask: {42, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322112155 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 427, index: 84, mask: {106, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322127053 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 428, index: 85, mask: {43, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322143324 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 429, index: 86, mask: {107, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322157170 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 430, index: 87, mask: {44, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322173340 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 431, index: 88, mask: {108, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322188569 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 432, index: 89, mask: {45, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322205311 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 433, index: 90, mask: {109, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322219938 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 434, index: 91, mask: {46, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322235177 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 435, index: 92, mask: {110, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322249955 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 436, index: 93, mask: {47, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322267158 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 437, index: 94, mask: {111, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322281345 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 438, index: 95, mask: {48, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322296904 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 439, index: 96, mask: {112, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322312113 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 440, index: 97, mask: {49, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322329005 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 441, index: 98, mask: {113, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322343652 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 442, index: 99, mask: {50, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322359492 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 443, index: 100, mask: {114, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322377907 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 444, index: 101, mask: {51, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322393366 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 445, index: 102, mask: {115, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322408725 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 446, index: 103, mask: {52, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322423233 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 447, index: 104, mask: {116, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322437289 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 448, index: 105, mask: {53, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322453440 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 449, index: 106, mask: {117, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322467697 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 450, index: 107, mask: {54, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322483076 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 451, index: 108, mask: {118, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322496812 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 452, index: 109, mask: {55, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.445929743 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 417, index: 74, mask: {101, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322511880 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 453, index: 110, mask: {119, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322525526 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 454, index: 111, mask: {56, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322541977 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 455, index: 112, mask: {120, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.454013818 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 422, index: 79, mask: {40, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322555663 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 456, index: 113, mask: {57, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.457932126 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 423, index: 80, mask: {104, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322571683 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 457, index: 114, mask: {121, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.322585920 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 458, index: 115, mask: {58, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.318158029 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 412, index: 69, mask: {35, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.334163851 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 415, index: 72, mask: {100, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.341919085 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 416, index: 73, mask: {37, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.323408365 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 413, index: 70, mask: {99, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.453923387 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 418, index: 75, mask: {38, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.453947493 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 419, index: 76, mask: {102, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.453965727 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 420, index: 77, mask: {39, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.453991656 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 421, index: 78, mask: {103, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.458087059 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 424, index: 81, mask: {41, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:37.585007204 [E:onnxruntime:log, env.cc:251 ThreadMain] pthread_setaffinity_np failed for thread: 459, index: 116, mask: {122, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2023-06-25 08:05:38.570069572 [W:onnxruntime:, session_state.cc:1136 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2023-06-25 08:05:38.570088387 [W:onnxruntime:, session_state.cc:1138 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
I0625 08:05:39.975559 1 model_lifecycle.cc:694] successfully loaded 'dinov2_vits14' version 1
I0625 08:05:39.975625 1 server.cc:583]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+
I0625 08:05:39.975662 1 server.cc:610]
+-------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Backend | Path | Config |
+-------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
| onnxruntime | /opt/tritonserver/backends/onnxruntime/libtriton_onnxruntime.so | {"cmdline":{"auto-complete-config":"true","backend-directory":"/opt/tritonserver/backends","min-compute-capability":"6.000000","default-max-batch-size":"4"}} |
+-------------+-----------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
I0625 08:05:39.975683 1 server.cc:653]
+---------------+---------+--------+
| Model | Version | Status |
+---------------+---------+--------+
| dinov2_vits14 | 1 | READY |
+---------------+---------+--------+
I0625 08:05:39.991510 1 metrics.cc:808] Collecting metrics for GPU 0: NVIDIA GeForce RTX 3090
I0625 08:05:39.992145 1 metrics.cc:701] Collecting CPU metrics
I0625 08:05:39.992360 1 tritonserver.cc:2387]
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Option | Value |
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| server_id | triton |
| server_version | 2.33.0 |
| server_extensions | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data parameters statistics trace logging |
| model_repository_path[0] | /models |
| model_control_mode | MODE_NONE |
| strict_model_config | 0 |
| rate_limit | OFF |
| pinned_memory_pool_byte_size | 268435456 |
| cuda_memory_pool_byte_size{0} | 67108864 |
| min_supported_compute_capability | 6.0 |
| strict_readiness | 1 |
| exit_timeout | 30 |
| cache_enabled | 0 |
+----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
I0625 08:05:39.993603 1 grpc_server.cc:2450] Started GRPCInferenceService at 0.0.0.0:8001
I0625 08:05:39.993771 1 http_server.cc:3555] Started HTTPService at 0.0.0.0:8000
I0625 08:05:40.034678 1 http_server.cc:185] Started Metrics Service at 0.0.0.0:8002
```
Run `perf_analyzer` against `dinov2_vits14`:
```
make perf
```
```
docker run --gpus all --rm -it --net host nvcr.io/nvidia/tritonserver:23.04-py3-sdk perf_analyzer -m dinov2_vits14 --percentile=95 -i grpc -u 0.0.0.0:8001 --concurrency-range 16:16 --shape input:3,280,280
=================================
== Triton Inference Server SDK ==
=================================
NVIDIA Release 23.04 (build 58408269)
Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
WARNING: CUDA Minor Version Compatibility mode ENABLED.
Using driver version 525.105.17 which has support for CUDA 12.0. This container
was built with CUDA 12.1 and will be run in Minor Version Compatibility mode.
CUDA Forward Compatibility is preferred over Minor Version Compatibility for use
with this container but was unavailable:
[[Forward compatibility was attempted on non supported HW (CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE) cuInit()=804]]
See https://docs.nvidia.com/deploy/cuda-compatibility/ for details.
*** Measurement Settings ***
Batch size: 1
Service Kind: Triton
Using "time_windows" mode for stabilization
Measurement window: 5000 msec
Latency limit: 0 msec
Concurrency limit: 16 concurrent requests
Using synchronous calls for inference
Stabilizing using p95 latency
Request concurrency: 16
Client:
Request count: 9403
Throughput: 522.33 infer/sec
p50 latency: 30482 usec
p90 latency: 32100 usec
p95 latency: 32564 usec
p99 latency: 34203 usec
Avg gRPC time: 30589 usec ((un)marshal request/response 93 usec + response wait 30496 usec)
Server:
Inference count: 9403
Execution count: 1177
Successful request count: 9403
Avg request latency: 24295 usec (overhead 220 usec + queue 9042 usec + compute input 1511 usec + compute infer 13485 usec + compute output 37 usec)
Inferences/Second vs. Client p95 Batch Latency
Concurrency: 16, throughput: 522.33 infer/sec, latency 32564 usec
```
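For programmatic access, a gRPC client call against the running server might look like the sketch below; the input name, FP32 dtype, and 3×280×280 shape follow the `perf_analyzer` invocation above, while preprocessing and batching details are assumptions.
```python
# A minimal tritonclient sketch; input name/dtype/shape follow the perf_analyzer call above.
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient("0.0.0.0:8001")
image = np.random.rand(3, 280, 280).astype(np.float32)  # stand-in for a preprocessed image
# If the model config enables batching, add a leading batch dimension here.

infer_input = grpcclient.InferInput("input", list(image.shape), "FP32")
infer_input.set_data_from_numpy(image)

result = client.infer("dinov2_vits14", inputs=[infer_input])
output_name = client.get_model_metadata("dinov2_vits14").outputs[0].name
print(result.as_numpy(output_name).shape)
```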
|
joohwan/whisper-small-gd
|
joohwan
| 2023-06-25T08:10:27Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-25T05:51:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-gd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-gd
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1180
- Wer: 14.2298
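A minimal transcription sketch with the ASR pipeline is shown below; the audio filename is a placeholder.
```python
# A minimal transcription sketch; the audio path is a placeholder.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="joohwan/whisper-small-gd")
print(asr("sample.wav")["text"])
```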
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0723 | 0.09 | 250 | 0.2013 | 22.6924 |
| 0.044 | 0.18 | 500 | 0.1826 | 27.3905 |
| 0.1209 | 0.27 | 750 | 0.1705 | 27.2700 |
| 0.0973 | 0.36 | 1000 | 0.1462 | 15.1182 |
| 0.0941 | 0.45 | 1250 | 0.1322 | 15.6603 |
| 0.076 | 0.54 | 1500 | 0.1258 | 18.3557 |
| 0.0967 | 0.63 | 1750 | 0.1203 | 14.8020 |
| 0.0757 | 0.72 | 2000 | 0.1180 | 14.2298 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Lajonbot/polish-alpaca7B-lora
|
Lajonbot
| 2023-06-25T07:41:13Z | 0 | 0 | null |
[
"tensorboard",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:openrail",
"region:us"
] | null | 2023-05-01T07:08:31Z |
---
license: openrail
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
language:
- pl
---
|
Lajonbot/falcon-7b-instruct-pl-lora
|
Lajonbot
| 2023-06-25T07:38:22Z | 0 | 0 | null |
[
"tensorboard",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:openrail",
"region:us"
] | null | 2023-06-12T06:13:24Z |
---
license: openrail
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
language:
- pl
---
|
Lajonbot/lamini-instruct-tuned-3b-pl-lora
|
Lajonbot
| 2023-06-25T07:37:46Z | 0 | 0 | null |
[
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:openrail",
"region:us"
] | null | 2023-06-15T06:08:17Z |
---
license: openrail
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
language:
- pl
---
|
Lajonbot/stablelm-base-alpha-3b-instruct-pl-lora
|
Lajonbot
| 2023-06-25T07:37:23Z | 0 | 0 | null |
[
"tensorboard",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:openrail",
"region:us"
] | null | 2023-06-15T06:13:44Z |
---
license: openrail
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
language:
- pl
---
|
Davlan/xlm-roberta-base-wikiann-ner
|
Davlan
| 2023-06-25T07:32:38Z | 158 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm-roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
language:
- ar
- as
- bn
- ca
- en
- es
- eu
- fr
- gu
- hi
- id
- ig
- mr
- pa
- pt
- sw
- ur
- vi
- yo
- zh
- multilingual
datasets:
- wikiann
---
# xlm-roberta-base-wikiann-ner
## Model description
**xlm-roberta-base-wikiann-ner** is the first **Named Entity Recognition** model for 20 languages (Arabic, Assamese, Bengali, Catalan, English, Spanish, Basque, French, Gujarati, Hindi, Indonesian, Igbo, Marathi, Punjabi, Portuguese, Swahili, Urdu, Vietnamese, Yoruba, and Chinese) based on a fine-tuned XLM-RoBERTa base model. It achieves state-of-the-art performance for the NER task. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on an aggregation of language datasets obtained from the [WikiANN](https://huggingface.co/datasets/wikiann) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-wikiann-ner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-base-wikiann-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Ìbọn ń ró kù kù gẹ́gẹ́ bí ọwọ́ ọ̀pọ̀ aráàlù ṣe tẹ ìbọn ní Kyiv láti dojú kọ Russia"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on 20 NER datasets (Arabic, Assamese, Bengali, Catalan, English, Spanish, Basque, French, Gujarati, Hindi, Indonesian, Igbo, Marathi, Punjabi, Portuguese, Swahili, Urdu, Vietnamese, Yoruba, and Chinese) from the [wikiann](https://huggingface.co/datasets/wikiann) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
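The B-/I- tags above can also be merged into whole entity spans by the same pipeline; a minimal sketch (the `aggregation_strategy="simple"` setting and the example sentence are assumptions, not prescribed by this card):
```python
from transformers import pipeline

# Group B-/I- sub-token predictions into complete PER/ORG/LOC spans.
nlp = pipeline(
    "ner",
    model="Davlan/xlm-roberta-base-wikiann-ner",
    aggregation_strategy="simple",
)

for entity in nlp("Nairobi is the capital of Kenya"):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```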
### BibTeX entry and citation info
```
|
Davlan/xlm-roberta-base-finetuned-swahili
|
Davlan
| 2023-06-25T07:31:57Z | 119 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: sw
datasets:
---
# xlm-roberta-base-finetuned-swahili
## Model description
**xlm-roberta-base-finetuned-swahili** is a **Swahili RoBERTa** model obtained by fine-tuning the **xlm-roberta-base** model on Swahili language texts. It provides **better performance** than XLM-RoBERTa on Swahili text classification and named entity recognition datasets.
Specifically, this model is a *xlm-roberta-base* model that was fine-tuned on Swahili corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/xlm-roberta-base-finetuned-swahili')
>>> unmasker("Jumatatu, Bwana Kagame alielezea shirika la France24 huko <mask> kwamba hakuna uhalifu ulitendwa")
[{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Ufaransa kwamba hakuna uhalifu ulitendwa',
'score': 0.5077782273292542,
'token': 190096,
'token_str': 'Ufaransa'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Paris kwamba hakuna uhalifu ulitendwa',
'score': 0.3657738268375397,
'token': 7270,
'token_str': 'Paris'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Gabon kwamba hakuna uhalifu ulitendwa',
'score': 0.01592041552066803,
'token': 176392,
'token_str': 'Gabon'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko France kwamba hakuna uhalifu ulitendwa',
'score': 0.010881908237934113,
'token': 9942,
'token_str': 'France'},
{'sequence': 'Jumatatu, Bwana Kagame alielezea shirika la France24 huko Marseille kwamba hakuna uhalifu ulitendwa',
'score': 0.009554869495332241,
'token': 185918,
'token_str': 'Marseille'}]
```
#### Limitations and bias
This model is limited by its training corpus of Swahili web text from a specific span of time, and may not generalize well to all use cases in different domains.
## Training data
This model was fine-tuned on [Swahili CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| XLM-R F1 | sw_roberta F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 87.55 | 89.46
### BibTeX entry and citation info
By David Adelani
```
```
|
boleshirish/Marathi_GPT2_Pretrained
|
boleshirish
| 2023-06-25T07:29:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"mr",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T06:29:25Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Marathi_GPT2_Pretrained
results: []
language:
- mr
metrics:
- accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Marathi_GPT2_Pretrained
It achieves the following results on the evaluation set:
- Loss: 1.8264
## Model description
More information needed
## Intended uses & limitations
More information needed
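As a hedged illustration only (the card does not document a prompt format), a minimal Marathi text-generation sketch:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="boleshirish/Marathi_GPT2_Pretrained")

# The Marathi prompt and sampling settings below are example choices, not recommendations from the card.
outputs = generator("भारत हा एक", max_new_tokens=40, do_sample=True, top_p=0.95)
print(outputs[0]["generated_text"])
```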
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8893 | 0.27 | 500 | 2.5366 |
| 2.3286 | 0.53 | 1000 | 2.1366 |
| 2.005 | 0.8 | 1500 | 1.8264 |
### Framework versions
- Transformers 4.18.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.12.1
|
Davlan/xlm-roberta-base-finetuned-xhosa
|
Davlan
| 2023-06-25T07:14:21Z | 171 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
|
Davlan/xlm-roberta-large-finetuned-igbo
|
Davlan
| 2023-06-25T07:13:52Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-25T18:59:29Z |
---
tags:
- generated_from_trainer
model-index:
- name: ibo_xlmr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ibo_xlmr
This model is a fine-tuned version of [models/ibo_xlmr/](https://huggingface.co/models/ibo_xlmr/) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9762
- eval_runtime: 31.9667
- eval_samples_per_second: 32.471
- eval_steps_per_second: 4.067
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.21.1
- Pytorch 1.7.1+cu110
- Datasets 1.16.1
- Tokenizers 0.12.1
|
Davlan/xlm-roberta-base-finetuned-english
|
Davlan
| 2023-06-25T07:13:11Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
---
|
Davlan/xlm-roberta-large-masakhaner
|
Davlan
| 2023-06-25T07:12:21Z | 135 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm-roberta",
"token-classification",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
language:
- amh
- hau
- ibo
- kin
- lug
- luo
- pcm
- swa
- wol
- yor
- multilingual
datasets:
- masakhaner
---
# xlm-roberta-large-masakhaner
## Model description
**xlm-roberta-large-masakhaner** is the first **Named Entity Recognition** model for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned XLM-RoBERTa large model. It achieves state-of-the-art performance for the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *xlm-roberta-large* model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-large-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-large-masakhaner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on 10 African NER datasets (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on a single NVIDIA V100 GPU with the recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811), which trained and evaluated the model on the MasakhaNER corpus.
## Eval results on Test set (F-score)
language|F1-score
-|-
amh |75.76
hau |91.75
ibo |86.26
kin |76.38
lug |84.64
luo |80.65
pcm |89.55
swa |89.48
wol |70.70
yor |82.05
### BibTeX entry and citation info
```
@article{adelani21tacl,
title = {Masakha{NER}: Named Entity Recognition for African Languages},
author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
month = {},
url = {https://arxiv.org/abs/2103.11811},
year = {2021}
}
```
|
psymon/QLoRa-polyglot-5.8b-translate
|
psymon
| 2023-06-25T06:53:47Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T02:54:27Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
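A minimal sketch of recreating this quantization setup when loading the adapter; the base model id `EleutherAI/polyglot-ko-5.8b` is an assumption inferred from the adapter name, not stated in this card:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the bitsandbytes settings listed above (4-bit NF4, double quantization, bfloat16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Base model id is an assumption inferred from the adapter name.
base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/polyglot-ko-5.8b",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "psymon/QLoRa-polyglot-5.8b-translate")
```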
### Framework versions
- PEFT 0.4.0.dev0
|
NasimB/gpt2-dp-mod-aochild-10chars
|
NasimB
| 2023-06-25T06:53:44Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T03:14:38Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-dp-mod-aochild-10chars
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dp-mod-aochild-10chars
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4173
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7077 | 0.27 | 500 | 5.6423 |
| 5.3468 | 0.54 | 1000 | 5.2154 |
| 5.0042 | 0.8 | 1500 | 4.9608 |
| 4.7637 | 1.07 | 2000 | 4.7969 |
| 4.5583 | 1.34 | 2500 | 4.6931 |
| 4.4721 | 1.61 | 3000 | 4.5939 |
| 4.3855 | 1.88 | 3500 | 4.5049 |
| 4.218 | 2.15 | 4000 | 4.4679 |
| 4.1202 | 2.41 | 4500 | 4.4175 |
| 4.105 | 2.68 | 5000 | 4.3697 |
| 4.0733 | 2.95 | 5500 | 4.3257 |
| 3.8601 | 3.22 | 6000 | 4.3344 |
| 3.8504 | 3.49 | 6500 | 4.3033 |
| 3.8507 | 3.76 | 7000 | 4.2759 |
| 3.8215 | 4.02 | 7500 | 4.2709 |
| 3.5828 | 4.29 | 8000 | 4.2887 |
| 3.6183 | 4.56 | 8500 | 4.2711 |
| 3.6264 | 4.83 | 9000 | 4.2489 |
| 3.5136 | 5.1 | 9500 | 4.2794 |
| 3.3547 | 5.36 | 10000 | 4.2895 |
| 3.383 | 5.63 | 10500 | 4.2727 |
| 3.3982 | 5.9 | 11000 | 4.2594 |
| 3.2002 | 6.17 | 11500 | 4.3133 |
| 3.1199 | 6.44 | 12000 | 4.3184 |
| 3.1483 | 6.71 | 12500 | 4.3123 |
| 3.1516 | 6.97 | 13000 | 4.3013 |
| 2.9083 | 7.24 | 13500 | 4.3587 |
| 2.9076 | 7.51 | 14000 | 4.3641 |
| 2.9176 | 7.78 | 14500 | 4.3616 |
| 2.8855 | 8.05 | 15000 | 4.3806 |
| 2.7292 | 8.32 | 15500 | 4.3978 |
| 2.7443 | 8.58 | 16000 | 4.4023 |
| 2.7445 | 8.85 | 16500 | 4.4046 |
| 2.702 | 9.12 | 17000 | 4.4125 |
| 2.6515 | 9.39 | 17500 | 4.4159 |
| 2.6552 | 9.66 | 18000 | 4.4170 |
| 2.6529 | 9.92 | 18500 | 4.4173 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
jameszhou02/michael-lora
|
jameszhou02
| 2023-06-25T06:52:42Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T06:52:40Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
teoha/openai-whisper-medium-PeftType.LORA-colab
|
teoha
| 2023-06-25T06:51:18Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T06:51:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
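A minimal loading sketch for this adapter; the `openai/whisper-medium` base checkpoint is inferred from the adapter name, and the 8-bit flag mirrors the config above:
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Base checkpoint inferred from the adapter name; load_in_8bit mirrors the config listed above.
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-medium",
    load_in_8bit=True,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "teoha/openai-whisper-medium-PeftType.LORA-colab")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
```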
### Framework versions
- PEFT 0.4.0.dev0
|
nolanaatama/mlycrsrvc750pchsvrs
|
nolanaatama
| 2023-06-25T05:19:58Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T04:47:27Z |
---
license: creativeml-openrail-m
---
|
blackmount8/open-llama-13b-open-instruct-ct2-int8_float16
|
blackmount8
| 2023-06-25T05:06:24Z | 1 | 0 |
transformers
|
[
"transformers",
"text-generation",
"en",
"dataset:VMware/open-instruct-v1-oasst-dolly-hhrlhf",
"license:cc",
"region:us"
] |
text-generation
| 2023-06-24T17:32:32Z |
---
inference: false
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# blackmount8/open-llama-13B-open-instruct-ct2-int8_float16
Int8_float16 version of [VMware/open-llama-13b-open-instruct](https://huggingface.co/VMware/open-llama-13b-open-instruct), quantized using CTranslate2.
## VMware/open-llama-13B-open-instruct
Instruction-tuned version of the fully trained Open LLama 13B model. The model is open for **COMMERCIAL USE**.
**NOTE**: The model was trained using the Alpaca prompt template (a sketch of that template follows the CTranslate2 example below).
**NOTE**: The fast tokenizer results in incorrect encoding, so set the `use_fast = False` parameter when instantiating the tokenizer.
## License
- **Commercially Viable**
- Instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf) is under cc-by-sa-3.0
- Language Model, ([openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)) is under apache-2.0
## Nomenclature
- Model : Open-llama
- Model Size: 13B parameters
- Dataset: Open-instruct-v1 (oasst, dolly, hhrlhf)
## Use in CTranslate2
```
import ctranslate2
from transformers import AutoTokenizer
model_name = "blackmount8/open-llama-13b-open-instruct-ct2-int8_float16"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, padding_side="left", truncation_side="left")
model = ctranslate2.Generator(model_name, device="auto", compute_type="int8_float16")
input_text = ["What is the meaning of stonehenge?", "Hello mate!"]
input_ids = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True).input_ids
input_tokens = [tokenizer.convert_ids_to_tokens(ele) for ele in input_ids]
outputs = model.generate_batch(input_tokens, max_length=128)
output_tokens = [
ele.sequences_ids[0] for ele in outputs
]
output = tokenizer.batch_decode(output_tokens)
print(output)
```
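Because the card notes Alpaca-style training, wrapping a raw question in that template usually matches the training format more closely. A hedged sketch using the standard Alpaca wording (assumed, not quoted from this card):
```python
# Standard Alpaca-style prompt wrapper (an assumption based on the card's note, not quoted from it).
def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:"
    )

# Wrap the raw questions, then feed them to the CTranslate2 example above unchanged.
input_text = [alpaca_prompt("What is the meaning of stonehenge?"), alpaca_prompt("Hello mate!")]
```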
|
Gayathri142214002/t5_qg_1
|
Gayathri142214002
| 2023-06-25T04:58:01Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T04:53:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5_qg_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_qg_1
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0323
## Model description
More information needed
## Intended uses & limitations
More information needed
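As a hedged illustration only (the expected input format for question generation is not documented here), a minimal text2text sketch:
```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="Gayathri142214002/t5_qg_1")

# The plain-context input below is a guess; adjust it to the format the model was actually trained on.
context = "The Nile is the longest river in Africa."
print(qg(context, max_new_tokens=32)[0]["generated_text"])
```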
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.658 | 0.69 | 10 | 1.9854 |
| 1.7442 | 1.38 | 20 | 1.6146 |
| 1.3456 | 2.07 | 30 | 1.3937 |
| 0.9931 | 2.76 | 40 | 1.2447 |
| 0.9253 | 3.45 | 50 | 1.1519 |
| 0.7154 | 4.14 | 60 | 1.0958 |
| 0.6624 | 4.83 | 70 | 1.0645 |
| 0.6384 | 5.52 | 80 | 1.0412 |
| 0.4889 | 6.21 | 90 | 1.0323 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Raizel123/pamelasafitrilora
|
Raizel123
| 2023-06-25T04:37:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T04:34:50Z |
---
license: creativeml-openrail-m
---
|
razaali/swin-tiny-patch4-window7-224-finetuned-eurosat
|
razaali
| 2023-06-25T04:00:02Z | 211 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T03:25:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.977037037037037
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0662
- Accuracy: 0.9770
## Model description
More information needed
## Intended uses & limitations
More information needed
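As a hedged illustration of intended use, a minimal image-classification sketch (the image path is only an example):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="razaali/swin-tiny-patch4-window7-224-finetuned-eurosat",
)

# "satellite_tile.png" is an example path; any RGB image readable by PIL will do.
for pred in classifier("satellite_tile.png", top_k=3):
    print(pred["label"], round(pred["score"], 3))
```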
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2501 | 1.0 | 190 | 0.1077 | 0.9626 |
| 0.1375 | 2.0 | 380 | 0.0892 | 0.9707 |
| 0.1324 | 3.0 | 570 | 0.0662 | 0.9770 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
andrewromitti/alzheimer_model_aug_deit5
|
andrewromitti
| 2023-06-25T03:58:45Z | 193 | 1 |
transformers
|
[
"transformers",
"pytorch",
"deit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T02:14:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: alzheimer_model_aug_deit5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9996875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# alzheimer_model_aug_deit5
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0012
- Accuracy: 0.9997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 1234
- gradient_accumulation_steps: 10
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5045 | 1.0 | 212 | 0.1414 | 0.9522 |
| 0.0779 | 2.0 | 424 | 0.0222 | 0.9961 |
| 0.0156 | 3.0 | 637 | 0.0164 | 0.9941 |
| 0.0032 | 4.0 | 849 | 0.0044 | 0.9983 |
| 0.0004 | 4.99 | 1060 | 0.0012 | 0.9997 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
CJacobnriia/spatnzRVC
|
CJacobnriia
| 2023-06-25T03:56:17Z | 0 | 0 | null |
[
"en",
"region:us"
] | null | 2023-06-25T01:52:32Z |
---
language:
- en
---
This is an RVC model of spatnz (https://www.youtube.com/channel/UCcNPbOeFo-qM0wpis8Lwdig)

|
ardhies/dev
|
ardhies
| 2023-06-25T03:55:45Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T03:51:43Z |
---
license: creativeml-openrail-m
---
|
blackmount8/open-llama-13b-open-instruct-ct2-float16
|
blackmount8
| 2023-06-25T03:48:21Z | 4 | 0 |
transformers
|
[
"transformers",
"text-generation",
"en",
"dataset:VMware/open-instruct-v1-oasst-dolly-hhrlhf",
"license:cc",
"region:us"
] |
text-generation
| 2023-06-24T16:44:56Z |
---
inference: false
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# blackmount8/open-llama-13B-open-instruct-ct2-float16
Float16 version of [VMware/open-llama-13b-open-instruct](https://huggingface.co/VMware/open-llama-13b-open-instruct), quantized using CTranslate2.
## VMware/open-llama-13B-open-instruct
Instruction-tuned version of the fully trained Open LLama 13B model. The model is open for **COMMERCIAL USE**.
**NOTE**: The model was trained using the Alpaca prompt template.
**NOTE**: The fast tokenizer results in incorrect encoding, so set the `use_fast = False` parameter when instantiating the tokenizer.
## License
- **Commercially Viable**
- Instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf) is under cc-by-sa-3.0
- Language Model, ([openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)) is under apache-2.0
## Nomenclature
- Model : Open-llama
- Model Size: 13B parameters
- Dataset: Open-instruct-v1 (oasst, dolly, hhrlhf)
## Use in CTranslate2
```
import ctranslate2
from transformers import AutoTokenizer
model_name = "blackmount8/open-llama-13b-open-instruct-ct2-float16"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, padding_side="left", truncation_side="left")
model = ctranslate2.Generator(model_name, device="auto", compute_type="float16")
input_text = ["What is the meaning of stonehenge?", "Hello mate!"]
input_ids = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True).input_ids
input_tokens = [tokenizer.convert_ids_to_tokens(ele) for ele in input_ids]
outputs = model.generate_batch(input_tokens, max_length=128)
output_tokens = [
ele.sequences_ids[0] for ele in outputs
]
output = tokenizer.batch_decode(output_tokens)
print(output)
```
|
duyhngoc/Wave2Vec2_OV_Vie
|
duyhngoc
| 2023-06-25T03:47:48Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"vivos",
"generated_from_trainer",
"dataset:vivos",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-21T10:58:36Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- vivos
- generated_from_trainer
datasets:
- vivos
metrics:
- wer
model-index:
- name: Wave2Vec2_OV_Vie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wave2Vec2_OV_Vie
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the VIVOS - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5894
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| No log | 0.27 | 100 | 3.9210 | 1.0 |
| No log | 0.55 | 200 | 3.4375 | 1.0 |
| No log | 0.82 | 300 | 3.4356 | 1.0 |
| No log | 1.1 | 400 | 3.4045 | 1.0 |
| 4.1866 | 1.37 | 500 | 3.4694 | 1.0 |
| 4.1866 | 1.65 | 600 | 3.6266 | 1.0 |
| 4.1866 | 1.92 | 700 | 3.5694 | 1.0 |
| 4.1866 | 2.19 | 800 | 3.5733 | 1.0 |
| 4.1866 | 2.47 | 900 | 3.6381 | 1.0 |
| 3.4376 | 2.74 | 1000 | 3.6604 | 1.0 |
| 3.4376 | 3.02 | 1100 | 3.5868 | 1.0 |
| 3.4376 | 3.29 | 1200 | 3.4988 | 1.0 |
| 3.4376 | 3.57 | 1300 | 3.5409 | 1.0 |
| 3.4376 | 3.84 | 1400 | 3.4883 | 1.0 |
| 3.4365 | 4.12 | 1500 | 3.6125 | 1.0 |
| 3.4365 | 4.39 | 1600 | 3.6123 | 1.0 |
| 3.4365 | 4.66 | 1700 | 3.5978 | 1.0 |
| 3.4365 | 4.94 | 1800 | 3.5693 | 1.0 |
| 3.4365 | 5.21 | 1900 | 3.5659 | 1.0 |
| 3.4339 | 5.49 | 2000 | 3.6234 | 1.0 |
| 3.4339 | 5.76 | 2100 | 3.5997 | 1.0 |
| 3.4339 | 6.04 | 2200 | 3.6529 | 1.0 |
| 3.4339 | 6.31 | 2300 | 3.5780 | 1.0 |
| 3.4339 | 6.58 | 2400 | 3.5844 | 1.0 |
| 3.4333 | 6.86 | 2500 | 3.5792 | 1.0 |
| 3.4333 | 7.13 | 2600 | 3.5468 | 1.0 |
| 3.4333 | 7.41 | 2700 | 3.5691 | 1.0 |
| 3.4333 | 7.68 | 2800 | 3.5408 | 1.0 |
| 3.4333 | 7.96 | 2900 | 3.5482 | 1.0 |
| 3.4294 | 8.23 | 3000 | 3.6070 | 1.0 |
| 3.4294 | 8.5 | 3100 | 3.5905 | 1.0 |
| 3.4294 | 8.78 | 3200 | 3.6018 | 1.0 |
| 3.4294 | 9.05 | 3300 | 3.6326 | 1.0 |
| 3.4294 | 9.33 | 3400 | 3.6214 | 1.0 |
| 3.4293 | 9.6 | 3500 | 3.6372 | 1.0 |
| 3.4293 | 9.88 | 3600 | 3.6215 | 1.0 |
| 3.4293 | 10.15 | 3700 | 3.5106 | 1.0 |
| 3.4293 | 10.43 | 3800 | 3.5066 | 1.0 |
| 3.4293 | 10.7 | 3900 | 3.5352 | 1.0 |
| 3.4295 | 10.97 | 4000 | 3.5129 | 1.0 |
| 3.4295 | 11.25 | 4100 | 3.6384 | 1.0 |
| 3.4295 | 11.52 | 4200 | 3.6019 | 1.0 |
| 3.4295 | 11.8 | 4300 | 3.5876 | 1.0 |
| 3.4295 | 12.07 | 4400 | 3.6207 | 1.0 |
| 3.4252 | 12.35 | 4500 | 3.5998 | 1.0 |
| 3.4252 | 12.62 | 4600 | 3.6216 | 1.0 |
| 3.4252 | 12.89 | 4700 | 3.6073 | 1.0 |
| 3.4252 | 13.17 | 4800 | 3.5567 | 1.0 |
| 3.4252 | 13.44 | 4900 | 3.5745 | 1.0 |
| 3.4274 | 13.72 | 5000 | 3.5738 | 1.0 |
| 3.4274 | 13.99 | 5100 | 3.5914 | 1.0 |
| 3.4274 | 14.27 | 5200 | 3.6004 | 1.0 |
| 3.4274 | 14.54 | 5300 | 3.5968 | 1.0 |
| 3.4274 | 14.81 | 5400 | 3.5908 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
dxyy/monteCarlo-cartpolev1
|
dxyy
| 2023-06-25T03:34:19Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T03:21:22Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: monteCarlo-cartpolev1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 470.60 +/- 18.14
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
akera/whisper-medium-acholi
|
akera
| 2023-06-25T03:21:54Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-25T00:20:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-medium-acholi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-acholi
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7946
- eval_wer: 100.0
- eval_runtime: 77.0072
- eval_samples_per_second: 3.169
- eval_steps_per_second: 0.208
- epoch: 2.0
- step: 276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.2
|
zinccat/santacoder-ggml-quantized
|
zinccat
| 2023-06-25T02:20:50Z | 0 | 2 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-06-25T02:16:55Z |
---
license: bigcode-openrail-m
---
|
nbiish/learning-taxi-v3
|
nbiish
| 2023-06-25T02:14:07Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T02:14:05Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: learning-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="nbiish/learning-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
nbiish/learning-FrozenLake-v1-4x4-noSlip
|
nbiish
| 2023-06-25T02:12:15Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T01:54:56Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: learning-FrozenLake-v1-4x4-noSlip
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="nbiish/learning-FrozenLake-v1-4x4-noSlip", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|