| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-02 12:32:32) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 534 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-02 12:31:20) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
TirathP/Classifier
|
TirathP
| 2023-08-10T11:44:52Z | 65 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-10T11:42:51Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: TirathP/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TirathP/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6822
- Validation Loss: 0.6966
- Train Accuracy: 1.0
- Epoch: 4
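A minimal usage sketch (not part of the original card), assuming the Transformers image-classification pipeline; the repo ships TensorFlow weights, so a TensorFlow install is assumed:
```python
from transformers import pipeline

# Hypothetical example: classify a local image with the fine-tuned ViT model.
classifier = pipeline("image-classification", model="TirathP/Classifier")
print(classifier("example.jpg"))  # path to an input image (placeholder)
```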
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.0773 | 0.9665 | 1.0 | 0 |
| 0.9585 | 0.8375 | 1.0 | 1 |
| 0.8571 | 0.7712 | 1.0 | 2 |
| 0.7833 | 0.7278 | 1.0 | 3 |
| 0.6822 | 0.6966 | 1.0 | 4 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Pietro995/bloomz-560m_PROMPT_TUNING_CAUSAL_LMPROVA
|
Pietro995
| 2023-08-10T11:43:45Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-10T11:43:42Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
ryand1234/llama2-testing
|
ryand1234
| 2023-08-10T11:19:06Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-10T11:19:00Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
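For reference, a sketch (not from the original card) of how the configuration listed above maps onto a `BitsAndBytesConfig` in Transformers:
```python
import torch
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization settings listed above (assumed mapping).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```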
### Framework versions
- PEFT 0.5.0.dev0
|
esantiago/llama2-qlora-finetunned-french
|
esantiago
| 2023-08-10T11:08:45Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-10T11:08:38Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
yezune/distilbert-base-uncased-distilled-clinc
|
yezune
| 2023-08-10T11:04:08Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-10T11:00:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9490322580645161
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2988
- Accuracy: 0.9490
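A minimal usage sketch (assumed, not part of the original card) for intent classification with the pipeline API:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="yezune/distilbert-base-uncased-distilled-clinc")
# clinc_oos covers banking/travel/etc. intents; the query below is illustrative.
print(classifier("transfer 100 dollars from checking to savings"))
```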
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.0983 | 1.0 | 318 | 2.2883 | 0.7423 |
| 1.7658 | 2.0 | 636 | 1.1722 | 0.8590 |
| 0.9156 | 3.0 | 954 | 0.6499 | 0.9177 |
| 0.5211 | 4.0 | 1272 | 0.4488 | 0.9326 |
| 0.3488 | 5.0 | 1590 | 0.3661 | 0.9455 |
| 0.267 | 6.0 | 1908 | 0.3309 | 0.9481 |
| 0.226 | 7.0 | 2226 | 0.3132 | 0.9487 |
| 0.2024 | 8.0 | 2544 | 0.3046 | 0.9487 |
| 0.191 | 9.0 | 2862 | 0.3014 | 0.9487 |
| 0.1853 | 10.0 | 3180 | 0.2988 | 0.9490 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
morell23/kaelakovalskia
|
morell23
| 2023-08-10T11:02:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-10T11:01:48Z |
---
license: creativeml-openrail-m
---
|
skshreyas714/lora-trained-xl-colab
|
skshreyas714
| 2023-08-10T11:01:49Z | 0 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-10T08:58:39Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - skshreyas714/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
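A loading sketch (assumed, not part of the original card) using the diffusers LoRA API:
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base pipeline, then attach the LoRA weights from this repo.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("skshreyas714/lora-trained-xl-colab")
image = pipe("a photo of sks dog").images[0]  # instance prompt from the card
```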
|
shajahan123/my-pet-cat
|
shajahan123
| 2023-08-10T10:52:52Z | 0 | 0 | null |
[
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-10T10:49:39Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat Dreambooth model trained by shajahan123 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: VJCET91
Sample pictures of this concept:
|
MrD05/otherhalf-pt
|
MrD05
| 2023-08-10T10:47:56Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"text generation",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-10T10:20:35Z |
---
license: creativeml-openrail-m
language:
- en
thumbnail: null
tags:
- text generation
---
|
Ian-14/model_test
|
Ian-14
| 2023-08-10T10:46:45Z | 156 | 0 |
transformers
|
[
"transformers",
"pytorch",
"chatglm",
"text-generation",
"custom_code",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-10T01:03:03Z |
---
pipeline_tag: text-generation
license: apache-2.0
language:
- zh
widget:
- text: "你好啊,O(∩_∩)O哈哈~"
example_title: "Sentiment analysis"
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had ..."
example_title: "Vectorization"
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book ..."
example_title: "Logic puzzles"
- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night ..."
example_title: "Reading comprehension"
---
### How to use
```python
from transformers import AutoTokenizer, AutoModel

# Load the ChatGLM2-6B-int4 tokenizer and model; trust_remote_code is required
# because this architecture ships custom modeling code.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b-int4", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b-int4", trust_remote_code=True).half().cuda()
model = model.eval()

# Run a single-turn chat query.
text = "你好"
response, history = model.chat(tokenizer, text, history=[])
print(response)
```
|
oussama/layoutlmv3-finetuned-invoice
|
oussama
| 2023-08-10T10:40:12Z | 107 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:sroie",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-23T21:29:16Z |
---
tags:
- generated_from_trainer
datasets:
- sroie
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-invoice
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: sroie
type: sroie
args: sroie
metrics:
- name: Precision
type: precision
value: 1.0
- name: Recall
type: recall
value: 1.0
- name: F1
type: f1
value: 1.0
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-invoice
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the sroie dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0018
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 2.0 | 100 | 0.0967 | 0.958 | 0.9716 | 0.9648 | 0.9956 |
| No log | 4.0 | 200 | 0.0222 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| No log | 6.0 | 300 | 0.0171 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| No log | 8.0 | 400 | 0.0136 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| 0.1307 | 10.0 | 500 | 0.0117 | 0.964 | 0.9777 | 0.9708 | 0.9962 |
| 0.1307 | 12.0 | 600 | 0.0099 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| 0.1307 | 14.0 | 700 | 0.0094 | 0.972 | 0.9858 | 0.9789 | 0.9971 |
| 0.1307 | 16.0 | 800 | 0.0071 | 0.9918 | 0.9838 | 0.9878 | 0.9983 |
| 0.1307 | 18.0 | 900 | 0.0026 | 0.9980 | 0.9980 | 0.9980 | 0.9998 |
| 0.0089 | 20.0 | 1000 | 0.0018 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0089 | 22.0 | 1100 | 0.0016 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0089 | 24.0 | 1200 | 0.0015 | 1.0 | 0.9980 | 0.9990 | 0.9998 |
| 0.0089 | 26.0 | 1300 | 0.0015 | 0.9980 | 0.9980 | 0.9980 | 0.9998 |
| 0.0089 | 28.0 | 1400 | 0.0014 | 0.9980 | 0.9980 | 0.9980 | 0.9998 |
| 0.0025 | 30.0 | 1500 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0025 | 32.0 | 1600 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0025 | 34.0 | 1700 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0025 | 36.0 | 1800 | 0.0010 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0025 | 38.0 | 1900 | 0.0010 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0019 | 40.0 | 2000 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
morell23/ghblistloff
|
morell23
| 2023-08-10T10:38:02Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-10T10:35:45Z |
---
license: creativeml-openrail-m
---
|
pssubitha/llama2-qlora-finetune-QA
|
pssubitha
| 2023-08-10T10:34:30Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-10T10:34:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
jules654/a2c-PandaReachDense-v3
|
jules654
| 2023-08-10T10:33:48Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T21:03:00Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Checkpoint filename is assumed (standard SB3 Hub naming), not stated in the card.
model = A2C.load(load_from_hub("jules654/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip"))
```
|
aura-tfn/q-FrozenLake-v1-4x4-noSlippery
|
aura-tfn
| 2023-08-10T10:27:11Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-10T10:27:09Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` is assumed to be the pickle-loading helper from the Hugging Face Deep RL course.
model = load_from_hub(repo_id="aura-tfn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tamiti1610001/bert-finetuned-squad
|
tamiti1610001
| 2023-08-10T10:19:39Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad_bn",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-10T07:22:35Z |
---
tags:
- generated_from_trainer
datasets:
- squad_bn
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [csebuetnlp/banglabert](https://huggingface.co/csebuetnlp/banglabert) on the squad_bn dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
abin-regi/my-pet-dog-xzk
|
abin-regi
| 2023-08-10T10:18:50Z | 19 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-10T10:14:54Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-xzk Dreambooth model trained by abin-regi following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: VJCET421
Sample pictures of this concept:


|
yezune/distilbert-base-uncased-finetuned-clinc
|
yezune
| 2023-08-10T10:16:42Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-10T10:14:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9141935483870968
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7816
- Accuracy: 0.9142
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2905 | 1.0 | 318 | 3.2788 | 0.7281 |
| 2.6269 | 2.0 | 636 | 1.8736 | 0.8297 |
| 1.5485 | 3.0 | 954 | 1.1619 | 0.8913 |
| 1.0177 | 4.0 | 1272 | 0.8662 | 0.9061 |
| 0.8035 | 5.0 | 1590 | 0.7816 | 0.9142 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
pknayak/whisper-small-dv
|
pknayak
| 2023-08-10T10:06:47Z | 74 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-09T14:31:45Z |
---
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - pkn
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: 'Common Voice 13 - pkn '
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.290677052543728
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - pkn
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 - pkn dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1689
- Wer Ortho: 62.8317
- Wer: 13.2907
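A minimal usage sketch (assumed, not part of the original card) with the Transformers ASR pipeline:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="pknayak/whisper-small-dv")
print(asr("sample.wav"))  # path to a Dhivehi audio clip (placeholder)
```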
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1252 | 1.63 | 500 | 0.1689 | 62.8317 | 13.2907 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
yezune/xlm-roberta-base-finetuned-panx-all
|
yezune
| 2023-08-10T10:05:00Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-10T09:59:43Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1732
- F1: 0.8570
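A usage sketch (assumed, not part of the original card) with the token-classification pipeline:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="yezune/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Angela Merkel besuchte Paris im Juli."))
```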
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3046 | 1.0 | 835 | 0.1930 | 0.8058 |
| 0.157 | 2.0 | 1670 | 0.1765 | 0.8397 |
| 0.1022 | 3.0 | 2505 | 0.1732 | 0.8570 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
norkart/mt5-large-no-info-extraction-3000
|
norkart
| 2023-08-10T09:56:11Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"no",
"nb",
"en",
"dataset:squad_v2",
"dataset:alexandrainst/scandi-qa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-10T08:34:22Z |
---
license: apache-2.0
datasets:
- squad_v2
- alexandrainst/scandi-qa
language:
- 'no'
- nb
- en
widget:
- text: >-
question: In what country is Normandy located? context: The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse ('Norman' comes from 'Norseman') raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries.
- text: >-
question: Hvem er best i Foosball av Team AI og Team Arealplaner? context: question: Hvem er best i Foosball av Team AI og Team Arealplaner? context: Team AI har vist seg å være en bemerkelsesverdig sterk kandidat i Foosball, og flere faktorer bidrar til deres overlegenhet over Team Arealplaner. Den mest fremtredende faktoren er den avanserte evnen til maskinlæring og kunstig intelligens som Team AI benytter seg av. Dette gjør Team AI best. Denne evnen gjør at de kan analysere motstanderens spillmønstre i sanntid, tilpasse sin egen strategi og ta raske beslutninger for å maksimere sjansene for suksess. Dette gir dem en betydelig fordel når det gjelder å forutsi og reagere på motstanderens trekk, noe som resulterer i mer presise skudd og bedre forsvar. I tillegg har Team AI den utrolige evnen til å samhandle sømløst og koordinere handlingene sine. Deres nøyaktige timing og perfekte synkronisering i spillet gjør dem i stand til å utnytte hver mulighet til det fulle, uansett om de angriper eller forsvarer. Denne konsistente samhandlingen mellom spillerne deres gir dem en ekstra dimensjon av effektivitet og nøyaktighet, noe som er avgjørende i et høyhastighetsspill som Foosball. Videre har Team AI den fordelen av å kunne analysere og tilpasse seg ulike motstanderstiler. Uansett om Team Arealplaner bruker en defensiv eller offensiv tilnærming, er Team AI i stand til å tilpasse seg raskt og utnytte svakheter i motstanderens strategi. Dette gjør dem til en allsidig og krevende motstander å stå overfor, da de kan tilpasse seg og overvinne ulike utfordringer som Team Arealplaner kan presentere. I sum viser Team AI en imponerende kombinasjon av avansert teknologi, nøyaktig samhandling og tilpasningsevne som gir dem en tydelig fordel over Team Arealplaner i Foosball. Deres evne til å forutsi, tilpasse seg og koordinere gir dem en uovertruffen effektivitet og suksessrate, noe som gjør dem til et overlegent lag i denne spennende sporten.
---
This model is based on the `norkart/mt5-large-no` checkpoint, trained for another 2000 steps on the squad_v2 dataset and then for 1000 steps on the Norwegian split of alexandrainst/scandi-qa.
Given a question and a context, the model finds the answer in the context. The answer does not need to be stated verbatim in the context.
Format: `question: 'your question' context: 'context to the question'`
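A minimal sketch (assumed, not part of the original card) of the documented prompt format with the text2text-generation pipeline:
```python
from transformers import pipeline

qa = pipeline("text2text-generation", model="norkart/mt5-large-no-info-extraction-3000")
prompt = (
    "question: In what country is Normandy located? "
    "context: The Normans were the people who in the 10th and 11th centuries "
    "gave their name to Normandy, a region in France."
)
print(qa(prompt, max_new_tokens=32))
```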
|
iliyaML/t5-small-billsum
|
iliyaML
| 2023-08-10T09:52:25Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-10T09:42:37Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: t5-small-billsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1528
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-billsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5246
- Rouge1: 0.1528
- Rouge2: 0.0586
- Rougel: 0.1291
- Rougelsum: 0.1292
- Gen Len: 19.0
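A usage sketch (assumed, not part of the original card) with the summarization pipeline:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="iliyaML/t5-small-billsum")
bill_text = "The people of the State of California do enact as follows: ..."  # placeholder input
print(summarizer(bill_text, max_length=60))
```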
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8551 | 0.1284 | 0.0348 | 0.1081 | 0.1085 | 19.0 |
| No log | 2.0 | 124 | 2.6404 | 0.1373 | 0.0453 | 0.1147 | 0.1147 | 19.0 |
| No log | 3.0 | 186 | 2.5665 | 0.1423 | 0.0494 | 0.1195 | 0.1192 | 19.0 |
| No log | 4.0 | 248 | 2.5342 | 0.149 | 0.055 | 0.1259 | 0.1257 | 19.0 |
| No log | 5.0 | 310 | 2.5246 | 0.1528 | 0.0586 | 0.1291 | 0.1292 | 19.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
yezune/xlm-roberta-base-finetuned-panx-de-fr
|
yezune
| 2023-08-10T09:49:34Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-10T09:44:57Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1627
- F1: 0.8586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.291 | 1.0 | 715 | 0.1809 | 0.8299 |
| 0.1468 | 2.0 | 1430 | 0.1512 | 0.8516 |
| 0.0936 | 3.0 | 2145 | 0.1627 | 0.8586 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
rossevine/wav2vec2_Indonesia_4
|
rossevine
| 2023-08-10T09:29:52Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-06T15:39:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_Indonesia_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_Indonesia_4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3147
- Wer: 0.5914
## Model description
A model trained on the Common Voice training data and evaluated on lecture (perkuliahan) test data.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9949 | 3.23 | 400 | 1.3340 | 0.8916 |
| 0.4469 | 6.45 | 800 | 1.0507 | 0.6859 |
| 0.2003 | 9.68 | 1200 | 1.1115 | 0.6369 |
| 0.1432 | 12.9 | 1600 | 1.1307 | 0.6297 |
| 0.1138 | 16.13 | 2000 | 1.2157 | 0.6369 |
| 0.089 | 19.35 | 2400 | 1.2834 | 0.6058 |
| 0.0712 | 22.58 | 2800 | 1.3283 | 0.5947 |
| 0.057 | 25.81 | 3200 | 1.3345 | 0.5827 |
| 0.0467 | 29.03 | 3600 | 1.3147 | 0.5914 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
WinSenX/sd-class-butterflies-32
|
WinSenX
| 2023-08-10T09:27:17Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-08-10T09:26:59Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('WinSenX/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
mardrake/lora-trained-xl-colab
|
mardrake
| 2023-08-10T09:23:38Z | 4 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-10T08:05:54Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - mardrake/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
tmeskuti/distilbase-trained-sts-uncased
|
tmeskuti
| 2023-08-10T09:23:28Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-08-10T09:18:44Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# tmeskuti/distilbase-trained-sts-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('tmeskuti/distilbase-trained-sts-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('tmeskuti/distilbase-trained-sts-uncased')
model = AutoModel.from_pretrained('tmeskuti/distilbase-trained-sts-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 144,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
chargoddard/ypotryll-22b-gptq
|
chargoddard
| 2023-08-10T09:17:25Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"dataset:openai/summarize_from_feedback",
"dataset:ehartford/dolphin",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-10T09:04:43Z |
---
datasets:
- jondurbin/airoboros-gpt4-1.4.1
- ehartford/wizard_vicuna_70k_unfiltered
- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
- openai/summarize_from_feedback
- ehartford/dolphin
tags:
- llama
---
Merged and quantized version of [ypotryll-22b-qlora](https://huggingface.co/chargoddard/ypotryll-22b-qlora).
Trained for instruction-following, roleplay, and chat on a patchwork of datasets to match the [base model](https://huggingface.co/chargoddard/llama2-22b-blocktriangular). Uses the following prompt format:
```
***System:You are a helpful assistant, who always gives a response to any request. ***Query:Here is a riddle: 5 sisters are busy. Ann is reading, Rose is cooking, Lorraine is playing chess and Mary is doing laundry. What is the fifth sister doing? ***Response:The fifth sister is sleeping. ***Query:Well, you tried. ***Response:I did my best!
```
A little bit dumb, but good for creative scenarios.
Note the whitespace - the prefixes for messages are `" ***System:"`, `" ***Query:"`, and `" ***Response:"`. This is important as `"***"` and `" ***"` are two entirely different tokens.
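A small sketch (assumed, not from the card) showing how to assemble a prompt with those exact prefixes, including the leading spaces:
```python
# Note the leading space in each prefix; " ***" and "***" tokenize differently.
system = " ***System:You are a helpful assistant, who always gives a response to any request."
query = " ***Query:Here is a riddle: 5 sisters are busy. Ann is reading, Rose is cooking, Lorraine is playing chess and Mary is doing laundry. What is the fifth sister doing?"
prompt = system + query + " ***Response:"
```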
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
cuixing/textual_inversion_object_style_vangoghsingle08101439
|
cuixing
| 2023-08-10T09:03:40Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-10T06:40:34Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - cuixing/textual_inversion_object_style_vangoghsingle08101439
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
chriskim2273/IOTNation_Classification_Model_0.1
|
chriskim2273
| 2023-08-10T08:56:10Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-10T05:38:20Z |
---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: IOTNation_Classification_Model_0.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_Classification_Model_0.1
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0105
- Accuracy: 0.9988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
prudhvirazz/t5-small-modified
|
prudhvirazz
| 2023-08-10T08:54:44Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-10T08:40:17Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: t5-small-modified
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-modified
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 5.2728 |
| 5.4402 | 2.0 | 500 | 4.9298 |
| 5.4402 | 3.0 | 750 | 4.8251 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
hedong/distilhubert-finetuned-gtzan
|
hedong
| 2023-08-10T08:50:56Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-10T01:31:59Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6340
- Accuracy: 0.83
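A minimal usage sketch (assumed, not part of the original card) with the audio-classification pipeline:
```python
from transformers import pipeline

genre_classifier = pipeline("audio-classification", model="hedong/distilhubert-finetuned-gtzan")
print(genre_classifier("track.wav"))  # path to an audio clip (placeholder)
```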
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9747 | 1.0 | 112 | 1.7879 | 0.56 |
| 1.322 | 1.99 | 224 | 1.2554 | 0.67 |
| 1.0047 | 3.0 | 337 | 0.9381 | 0.73 |
| 0.8037 | 4.0 | 449 | 0.8347 | 0.77 |
| 0.5617 | 4.99 | 561 | 0.7889 | 0.76 |
| 0.4773 | 6.0 | 674 | 0.6480 | 0.84 |
| 0.2749 | 6.99 | 786 | 0.6533 | 0.79 |
| 0.1649 | 8.0 | 899 | 0.6974 | 0.79 |
| 0.1132 | 9.0 | 1011 | 0.6771 | 0.81 |
| 0.1243 | 9.97 | 1120 | 0.6340 | 0.83 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dvs/videomae-base-finetuned-kinetics-finetuned-movienet
|
dvs
| 2023-08-10T08:50:52Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base-finetuned-kinetics",
"base_model:finetune:MCG-NJU/videomae-base-finetuned-kinetics",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-08-10T06:13:15Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base-finetuned-kinetics
tags:
- generated_from_trainer
model-index:
- name: videomae-base-finetuned-kinetics-finetuned-movienet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-kinetics-finetuned-movienet
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.8737
- eval_accuracy: 0.7865
- eval_runtime: 124.9385
- eval_samples_per_second: 1.537
- eval_steps_per_second: 0.192
- epoch: 4.1
- step: 930
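A usage sketch (assumed, not part of the original card) with the video-classification pipeline:
```python
from transformers import pipeline

clf = pipeline("video-classification", model="dvs/videomae-base-finetuned-kinetics-finetuned-movienet")
print(clf("clip.mp4"))  # path to a short video (placeholder); video decoding typically needs the decord package
```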
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- training_steps: 1850
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
smangrul/peft-lora-starcoderbase3b-personal-copilot-A100-40GB-colab
|
smangrul
| 2023-08-10T08:35:19Z | 15 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:bigcode/starcoderbase-3b",
"base_model:adapter:bigcode/starcoderbase-3b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-08-09T20:17:40Z |
---
license: bigcode-openrail-m
base_model: bigcode/starcoderbase-3b
tags:
- generated_from_trainer
model-index:
- name: peft-lora-starcoderbase3b-personal-copilot-A100-40GB-colab
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-lora-starcoderbase3b-personal-copilot-A100-40GB-colab
This model is a fine-tuned version of [bigcode/starcoderbase-3b](https://huggingface.co/bigcode/starcoderbase-3b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5038
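A loading sketch (assumed, not part of the original card) that attaches this LoRA adapter to the base model with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("bigcode/starcoderbase-3b")
model = PeftModel.from_pretrained(
    base, "smangrul/peft-lora-starcoderbase3b-personal-copilot-A100-40GB-colab"
)
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoderbase-3b")
```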
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 30
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8168 | 0.05 | 100 | 0.7807 |
| 0.7961 | 0.1 | 200 | 0.7197 |
| 0.7837 | 0.15 | 300 | 0.6603 |
| 0.7053 | 0.2 | 400 | 0.6371 |
| 0.6132 | 0.25 | 500 | 0.6282 |
| 0.6584 | 0.3 | 600 | 0.6107 |
| 0.621 | 0.35 | 700 | 0.5934 |
| 0.6961 | 0.4 | 800 | 0.5877 |
| 0.592 | 0.45 | 900 | 0.5833 |
| 0.6967 | 0.5 | 1000 | 0.5746 |
| 0.6382 | 0.55 | 1100 | 0.5563 |
| 0.6815 | 0.6 | 1200 | 0.5436 |
| 0.5483 | 0.65 | 1300 | 0.5439 |
| 0.7172 | 0.7 | 1400 | 0.5401 |
| 0.5479 | 0.75 | 1500 | 0.5390 |
| 0.9422 | 0.8 | 1600 | 0.5357 |
| 0.5503 | 0.85 | 1700 | 0.5303 |
| 0.5928 | 0.9 | 1800 | 0.5322 |
| 0.5513 | 0.95 | 1900 | 0.5176 |
| 0.6314 | 1.0 | 2000 | 0.5038 |
### Framework versions
- PEFT 0.5.0.dev0
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Hemanth-thunder/stable_diffusion_lora
|
Hemanth-thunder
| 2023-08-10T08:21:14Z | 3 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"autotrain",
"base_model:SG161222/Realistic_Vision_V1.4",
"base_model:finetune:SG161222/Realistic_Vision_V1.4",
"region:us"
] |
text-to-image
| 2023-08-06T05:52:45Z |
---
base_model: SG161222/Realistic_Vision_V1.4
instance_prompt: hmat
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
The text encoder was trained.
|
MochaPixel/Lia
|
MochaPixel
| 2023-08-10T08:19:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-18T11:55:18Z |
---
license: creativeml-openrail-m
---
|
TheTravellingEngineer/llama2-7b-chat-hf-v4
|
TheTravellingEngineer
| 2023-08-10T08:18:44Z | 1,547 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-10T07:28:43Z |
The base model is Meta's Llama-2-7b-chat-hf. It was fine-tuned with SFT on the openassistant/oasst1 dataset, and the model prompt is similar to the original Guanaco model.
This repo contains the merged fp16 model.
**Legal Disclaimer: This model is bound by the usage restrictions of the original Llama-2 model and comes with no warranty or guarantees of any kind.**
---
- license: llama2
- datasets: openassistant/oasst1
- language: en
- reference: https://gist.github.com/younesbelkada/9f7f75c94bdc1981c8ca5cc937d4a4da
---
|
ThuyNT03/distilbert-base-uncased-multil-cls-legal
|
ThuyNT03
| 2023-08-10T08:05:47Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-10T00:09:04Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-multil-cls-legal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-multil-cls-legal
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5448
- Accuracy: 0.9022
- F1: 0.9015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 2.67 | 1.0 | 396 | 1.9327 | 0.5209 | 0.4806 |
| 1.5362 | 2.0 | 792 | 1.0998 | 0.7061 | 0.6869 |
| 0.8991 | 3.0 | 1188 | 0.7546 | 0.8013 | 0.7975 |
| 0.5899 | 4.0 | 1584 | 0.6136 | 0.8403 | 0.8392 |
| 0.4082 | 5.0 | 1980 | 0.5527 | 0.8601 | 0.8589 |
| 0.2874 | 6.0 | 2376 | 0.5200 | 0.8736 | 0.8731 |
| 0.2136 | 7.0 | 2772 | 0.4991 | 0.8831 | 0.8815 |
| 0.1564 | 8.0 | 3168 | 0.4946 | 0.8853 | 0.8843 |
| 0.1123 | 9.0 | 3564 | 0.4814 | 0.8928 | 0.8920 |
| 0.0866 | 10.0 | 3960 | 0.4959 | 0.8912 | 0.8908 |
| 0.0685 | 11.0 | 4356 | 0.5060 | 0.8928 | 0.8923 |
| 0.0508 | 12.0 | 4752 | 0.5114 | 0.8997 | 0.8989 |
| 0.037 | 13.0 | 5148 | 0.5199 | 0.8978 | 0.8971 |
| 0.0316 | 14.0 | 5544 | 0.5236 | 0.9003 | 0.8993 |
| 0.0243 | 15.0 | 5940 | 0.5253 | 0.9022 | 0.9015 |
| 0.021 | 16.0 | 6336 | 0.5385 | 0.9025 | 0.9019 |
| 0.0177 | 17.0 | 6732 | 0.5396 | 0.9038 | 0.9032 |
| 0.014 | 18.0 | 7128 | 0.5449 | 0.9025 | 0.9018 |
| 0.014 | 19.0 | 7524 | 0.5467 | 0.9010 | 0.9002 |
| 0.0103 | 20.0 | 7920 | 0.5448 | 0.9022 | 0.9015 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jubanbhura/lora-trained-xl-colab
|
jubanbhura
| 2023-08-10T08:02:11Z | 2 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-10T06:14:27Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: digital badge designs
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - jubanbhura/lora-trained-xl-colab
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on digital badge designs using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
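A minimal loading sketch with 🧨 Diffusers (the prompt and settings are illustrative assumptions):
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

# Base SDXL model with the fp16-fix VAE used during training, plus the LoRA weights.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16, variant="fp16"
)
pipe.load_lora_weights("jubanbhura/lora-trained-xl-colab")
pipe.to("cuda")

image = pipe(prompt="digital badge designs").images[0]  # instance prompt from training
image.save("badge.png")
```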
|
Geotrend/distilbert-base-en-es-zh-cased
|
Geotrend
| 2023-08-10T08:02:08Z | 142 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"multilingual",
"en",
"es",
"zh",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language:
- multilingual
- en
- es
- zh
datasets: wikipedia
license: apache-2.0
---
# distilbert-base-en-es-zh-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions give exactly the same representations as the original model, thus preserving the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-es-zh-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-es-zh-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermdistilbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
Rida06/bert-finetuned-ner
|
Rida06
| 2023-08-10T07:57:30Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-08T08:29:16Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: Rida06/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rida06/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1762
- Validation Loss: 0.0705
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.1}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1762 | 0.0705 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.13.0
- Datasets 2.14.2
- Tokenizers 0.11.0
|
perion/ai-avatar
|
perion
| 2023-08-10T07:55:27Z | 11 | 5 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-02-22T16:05:50Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
Test prompt: Portrait of perion man as thomas shelby in peaky blinders, highly detailed digital painting, artstation, concept art, smooth, sharp focus, illustration
Sample images:

|
thisiskeithkwan/cantomed-base
|
thisiskeithkwan
| 2023-08-10T07:53:41Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"yue",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-10T04:18:46Z |
---
language:
- yue
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper medium 12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper medium 12
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3270
- Cer: 42.0122
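A minimal transcription sketch with the `transformers` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Transcribe a local Cantonese audio clip with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="thisiskeithkwan/cantomed-base")
print(asr("sample.wav"))  # placeholder path to a local audio file
```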
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 8000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.8931 | 1.52 | 1000 | 1.0926 | 48.9439 |
| 0.3041 | 3.03 | 2000 | 1.1069 | 49.5474 |
| 0.1319 | 4.55 | 3000 | 1.1925 | 45.4016 |
| 0.0324 | 6.06 | 4000 | 1.2592 | 44.3186 |
| 0.0245 | 7.58 | 5000 | 1.3014 | 44.2359 |
| 0.0061 | 9.09 | 6000 | 1.3185 | 43.3472 |
| 0.0031 | 10.61 | 7000 | 1.3266 | 42.2767 |
| 0.0007 | 12.12 | 8000 | 1.3270 | 42.0122 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jakezou/rl_course_vizdoom_health_gathering_supreme
|
jakezou
| 2023-08-10T07:41:19Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-10T07:41:13Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.63 +/- 5.23
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r jakezou/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m .usr.local.lib.python3.10.dist-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
newronai/llama-2-7b-Chat-QLoRA-Trial1
|
newronai
| 2023-08-10T07:32:04Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-10T07:31:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
rossevine/wav2vec2_indonesia_6
|
rossevine
| 2023-08-10T07:27:07Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-10T05:34:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_Indonesia_6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_Indonesia_6
This model is a fine-tuned version of [facebook/wav2vec2-base-100h](https://huggingface.co/facebook/wav2vec2-base-100h) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7559
- Wer: 1.0232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1807 | 3.23 | 400 | 1.3655 | 1.0052 |
| 0.5608 | 6.45 | 800 | 1.3604 | 1.0312 |
| 0.3302 | 9.68 | 1200 | 1.3724 | 1.0355 |
| 0.2405 | 12.9 | 1600 | 1.4350 | 1.0142 |
| 0.1883 | 16.13 | 2000 | 1.5079 | 1.0213 |
| 0.1535 | 19.35 | 2400 | 1.5038 | 1.0251 |
| 0.1307 | 22.58 | 2800 | 1.7026 | 1.0189 |
| 0.1104 | 25.81 | 3200 | 1.7072 | 1.0090 |
| 0.0921 | 29.03 | 3600 | 1.7559 | 1.0232 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
hashu/my-pet-cat-xyz
|
hashu
| 2023-08-10T07:12:37Z | 0 | 0 | null |
[
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-10T07:09:43Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-xyz Dreambooth model trained by hashu following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: VJCET527
Sample pictures of this concept:
.jpg)
|
yyyy1992/my_awesome_wnut_model
|
yyyy1992
| 2023-08-10T06:58:22Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-08-10T06:51:33Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_awesome_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5096660808435852
- name: Recall
type: recall
value: 0.26876737720111216
- name: F1
type: f1
value: 0.35194174757281554
- name: Accuracy
type: accuracy
value: 0.9392501389423282
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0772
- Precision: 0.5097
- Recall: 0.2688
- F1: 0.3519
- Accuracy: 0.9393
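A minimal inference sketch with the `transformers` pipeline (the example sentence is a placeholder):
```python
from transformers import pipeline

# Extract named entities from a sample sentence.
ner = pipeline("token-classification", model="yyyy1992/my_awesome_wnut_model", aggregation_strategy="simple")
print(ner("The Golden State Warriors are an American professional basketball team based in San Francisco."))
```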
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.0816 | 0.4192 | 0.1779 | 0.2498 | 0.9351 |
| No log | 2.0 | 426 | 0.0772 | 0.5097 | 0.2688 | 0.3519 | 0.9393 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.11.0
- Tokenizers 0.13.3
|
weiren119/traditional_chinese_qlora_llama2_13b_adapter
|
weiren119
| 2023-08-10T06:57:43Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-10T06:56:58Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
cuixing/textual_inversion_object_style_vangogh08101212-newstyle
|
cuixing
| 2023-08-10T06:51:27Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-10T04:12:51Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - cuixing/textual_inversion_object_style_vangogh08101212-newstyle
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
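A minimal loading sketch with 🧨 Diffusers (the prompt is illustrative; replace the placeholder token with the actual learned token stored in this repo's embedding file):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then attach the learned textual-inversion embedding.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.load_textual_inversion("cuixing/textual_inversion_object_style_vangogh08101212-newstyle")
pipe.to("cuda")

# "<new-style>" is a placeholder; use the token name defined by the learned embedding.
image = pipe("a painting of a house in <new-style>").images[0]
image.save("out.png")
```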
|
tanviraumi/q-FrozenLake-v1-4x4-noSlippery
|
tanviraumi
| 2023-08-10T06:40:11Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-10T06:40:08Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="tanviraumi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dminhk/dog-example-sdxl-lora
|
dminhk
| 2023-08-10T06:35:42Z | 5 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-10T05:43:30Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - dminhk/dog-example-sdxl-lora
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
Data set: https://huggingface.co/datasets/diffusers/dog-example
Example images:




|
deepvk/bert-base-uncased
|
deepvk
| 2023-08-10T06:23:07Z | 756 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"ru",
"en",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-02-07T14:51:11Z |
---
license: apache-2.0
language:
- ru
- en
library_name: transformers
pipeline_tag: feature-extraction
---
# BERT-base
<!-- Provide a quick summary of what the model is/does. -->
Pretrained bidirectional encoder for the Russian language.
The model was trained using the standard MLM objective on large text corpora including open social data.
See `Training Details` section for more information.
⚠️ This model contains only the encoder part without any pretrained head.
- **Developed by:** [deepvk](https://vk.com/deepvk)
- **Model type:** BERT
- **Languages:** Mostly Russian and a small fraction of other languages
- **License:** Apache 2.0
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("deepvk/bert-base-uncased")
model = AutoModel.from_pretrained("deepvk/bert-base-uncased")
text = "Привет, мир!"
inputs = tokenizer(text, return_tensors='pt')
predictions = model(**inputs)
```
## Training Details
The model was trained using the NVIDIA source code. See the [pretraining documentation](https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/README.md#training-process) for details.
### Training Data
250 GB of filtered texts in total.
A mix of the following data: Wikipedia, Books and Social corpus.
### Architecture details
| Argument | Value |
|-------------------------|----------------|
|Encoder layers | 12 |
|Encoder attention heads | 12 |
|Encoder embed dim | 768 |
|Encoder ffn embed dim | 3,072 |
|Activation function | GeLU |
|Attention dropout | 0.1 |
|Dropout | 0.1 |
|Max positions | 512 |
|Vocab size | 36000 |
|Tokenizer type | BertTokenizer |
## Evaluation
We evaluated the model on [Russian Super Glue](https://russiansuperglue.com/) dev set.
The best result in each task is marked in bold.
All models have the same size except the distilled version of DeBERTa.
| Model | RCB | PARus | MuSeRC | TERRa | RUSSE | RWSD | DaNetQA | Score |
|------------------------------------------------------------------------|-----------|--------|---------|-------|---------|---------|---------|-----------|
| [vk-deberta-distill](https://huggingface.co/deepvk/deberta-v1-distill) | 0.433 | 0.56 | 0.625 | 0.59 | 0.943 | 0.569 | 0.726 | 0.635 |
| [vk-roberta-base](https://huggingface.co/deepvk/roberta-base) | 0.46 | 0.56 | 0.679 | 0.769 | 0.960 | 0.569 | 0.658 | 0.665 |
| [vk-deberta-base](https://huggingface.co/deepvk/deberta-v1-base) | 0.450 |**0.61**|**0.722**| 0.704 | 0.948 | 0.578 |**0.76** |**0.682** |
| [vk-bert-base](https://huggingface.co/deepvk/bert-base-uncased) | 0.467 | 0.57 | 0.587 | 0.704 | 0.953 |**0.583**| 0.737 | 0.657 |
| [sber-bert-base](https://huggingface.co/ai-forever/ruBert-base) | **0.491** |**0.61**| 0.663 | 0.769 |**0.962**| 0.574 | 0.678 | 0.678 |
|
nullday/immersiveL-exp
|
nullday
| 2023-08-10T06:21:53Z | 64 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-generation",
"translation",
"gpt-style",
"chinese",
"english",
"zh",
"en",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-07T07:30:25Z |
---
language:
- zh
- en
tags:
- translation
- gpt-style
- chinese
- english
license: "bigscience-bloom-rail-1.0"
---
## English:
### ImmersiveL Model on Hugging Face
This model, available on Hugging Face under `funstoryai/immersiveL-exp`, is a GPT-like model designed specifically for English-Chinese and Chinese-English translations.
**Recommended Prompts:**
For English to Chinese:
```
下面是一段英文文本,请将它翻译成中文。
{terms}
#英文文本:
{input}
#中文翻译:
```
For Chinese to English:
```
下面是一段中文文本,请将它翻译成英文。
{terms}
#中文文本:
{input}
#英文翻译:
```
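A minimal generation sketch with `transformers`, filling the English-to-Chinese template above (the `{terms}` slot is left empty and the generation settings are illustrative assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nullday/immersiveL-exp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Recommended English-to-Chinese prompt with the {terms} slot left empty.
prompt = "下面是一段英文文本,请将它翻译成中文。\n\n#英文文本:\nThe weather is nice today.\n#中文翻译:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)  # illustrative setting
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```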
For the corresponding GitHub project, please visit: [ImmersiveL on GitHub](https://github.com/immersive-translate/ImmersiveL).
<https://github.com/immersive-translate/ImmersiveL>
---
## 中文:
### Hugging Face 上的 ImmersiveL 模型
此模型在 Hugging Face 的 `funstoryai/immersiveL-exp` 下可用,是专为英汉和汉英翻译设计的类GPT模型。
**推荐提示词:**
英译中:
```
下面是一段英文文本,请将它翻译成中文。
{terms}
#英文文本:
{input}
#中文翻译:
```
中译英:
```
下面是一段中文文本,请将它翻译成英文。
{terms}
#中文文本:
{input}
#英文翻译:
```
对应的 GitHub 项目地址为: [ImmersiveL on GitHub](https://github.com/immersive-translate/ImmersiveL).
<https://github.com/immersive-translate/ImmersiveL>
|
HG7/ReQLoRA_QKVO8
|
HG7
| 2023-08-10T06:01:24Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-10T06:01:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
deepvk/deberta-v1-distill
|
deepvk
| 2023-08-10T05:57:02Z | 4,361 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta",
"feature-extraction",
"ru",
"en",
"arxiv:1910.01108",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-03-17T11:20:51Z |
---
license: apache-2.0
language:
- ru
- en
library_name: transformers
pipeline_tag: feature-extraction
---
# DeBERTa-distill
<!-- Provide a quick summary of what the model is/does. -->
Pretrained bidirectional encoder for the Russian language.
The model was trained using the standard MLM objective on large text corpora including open social data.
See `Training Details` section for more information.
⚠️ This model contains only the encoder part without any pretrained head.
- **Developed by:** [deepvk](https://vk.com/deepvk)
- **Model type:** DeBERTa
- **Languages:** Mostly Russian and a small fraction of other languages
- **License:** Apache 2.0
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("deepvk/deberta-v1-distill")
model = AutoModel.from_pretrained("deepvk/deberta-v1-distill")
text = "Привет, мир!"
inputs = tokenizer(text, return_tensors='pt')
predictions = model(**inputs)
```
## Training Details
### Training Data
400 GB of filtered and deduplicated texts in total.
A mix of the following data: Wikipedia, Books, Twitter comments, Pikabu, Proza.ru, Film subtitles, News websites, and Social corpus.
#### Deduplication procedure
1. Calculate shingles with size of 5
2. Calculate MinHash with 100 seeds → every sample (text) gets a hash of size 100
3. Split every hash into 10 buckets → every bucket, which contains (100 / 10) = 10 numbers, gets hashed into 1 hash → we have 10 hashes for every sample
4. For each bucket find duplicates: find samples which have the same hash → calculate pair-wise Jaccard similarity → if the similarity is >0.7 then it's a duplicate
5. Gather duplicates from all the buckets and filter
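A minimal sketch of this shingling/MinHash/LSH procedure in plain Python (the hash functions, toy sentences, and banding details are simplified assumptions for illustration):
```python
import hashlib
from itertools import combinations

def shingles(text, size=5):
    # Step 1: word shingles of size 5.
    tokens = text.split()
    return {" ".join(tokens[i:i + size]) for i in range(len(tokens) - size + 1)}

def minhash(shingle_set, num_seeds=100):
    # Step 2: one min-hash value per seed -> a signature of 100 numbers per sample.
    return [
        min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16) for s in shingle_set)
        for seed in range(num_seeds)
    ]

def buckets(signature, num_buckets=10):
    # Step 3: split the signature into 10 bands of 10 numbers and hash each band.
    band = len(signature) // num_buckets
    return [hash(tuple(signature[i * band:(i + 1) * band])) for i in range(num_buckets)]

def jaccard(a, b):
    return len(a & b) / len(a | b)

docs = [
    "the quick brown fox jumps over the lazy dog near the river",
    "the quick brown fox jumps over the lazy dog near the river",  # identical on purpose, for a deterministic demo
    "a completely different sentence about training language models on social data",
]
sh = [shingles(d) for d in docs]
sig = [minhash(s) for s in sh]

# Steps 4-5: pairs sharing at least one bucket are candidates; confirm with Jaccard similarity > 0.7.
for i, j in combinations(range(len(docs)), 2):
    if set(buckets(sig[i])) & set(buckets(sig[j])) and jaccard(sh[i], sh[j]) > 0.7:
        print(f"documents {i} and {j} are duplicates")
```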
### Training Hyperparameters
| Argument | Value |
|--------------------|----------------------|
| Training regime | fp16 mixed precision |
| Optimizer | AdamW |
| Adam betas | 0.9,0.98 |
| Adam eps | 1e-6 |
| Weight decay | 1e-2 |
| Batch size | 3840 |
| Num training steps | 100k |
| Num warm-up steps | 5k |
| LR scheduler | Cosine |
| LR | 5e-4 |
| Gradient norm | 1.0 |
The model was trained on a machine with 8xA100 for approximately 15 days.
### Architecture details
| Argument | Value |
|-------------------------|----------------|
|Encoder layers | 6 |
|Encoder attention heads | 12 |
|Encoder embed dim | 768 |
|Encoder ffn embed dim | 3,072 |
|Activation function | GeLU |
|Attention dropout | 0.1 |
|Dropout | 0.1 |
|Max positions | 512 |
|Vocab size | 50266 |
|Tokenizer type | Byte-level BPE |
### Distillation
In our distillation procedure, we follow [Sanh et al.](https://arxiv.org/abs/1910.01108). The student is initialized from the [teacher](https://huggingface.co/deepvk/deberta-v1-base) by taking only every second layer. We use the MLM loss and CE loss with coefficients of 0.5.
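A schematic sketch of how such a combined objective can be written in PyTorch (the temperature and the exact form of the teacher-matching term are assumptions, not documented training settings):
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, temperature=2.0):
    """0.5 * masked-LM loss + 0.5 * cross-entropy against the teacher's soft targets."""
    # Masked-LM cross-entropy against the true tokens (positions with label -100 are ignored).
    mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1), ignore_index=-100
    )
    # Cross-entropy of the student distribution against the teacher distribution.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    ce_teacher = -(soft_targets * log_probs).sum(dim=-1).mean()
    return alpha * mlm + (1.0 - alpha) * ce_teacher
```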
## Evaluation
We evaluated the model on [Russian Super Glue](https://russiansuperglue.com/) dev set.
The best result in each task is marked in bold.
All models have the same size except the distilled version of DeBERTa.
| Model | RCB | PARus | MuSeRC | TERRa | RUSSE | RWSD | DaNetQA | Score |
|------------------------------------------------------------------------|-----------|--------|---------|-------|---------|---------|---------|-----------|
| [vk-deberta-distill](https://huggingface.co/deepvk/deberta-v1-distill) | 0.433 | 0.56 | 0.625 | 0.59 | 0.943 | 0.569 | 0.726 | 0.635 |
| [vk-roberta-base](https://huggingface.co/deepvk/roberta-base) | 0.46 | 0.56 | 0.679 | 0.769 | 0.960 | 0.569 | 0.658 | 0.665 |
| [vk-deberta-base](https://huggingface.co/deepvk/deberta-v1-base) | 0.450 |**0.61**|**0.722**| 0.704 | 0.948 | 0.578 |**0.76** |**0.682** |
| [vk-bert-base](https://huggingface.co/deepvk/bert-base-uncased) | 0.467 | 0.57 | 0.587 | 0.704 | 0.953 |**0.583**| 0.737 | 0.657 |
| [sber-bert-base](https://huggingface.co/ai-forever/ruBert-base) | **0.491** |**0.61**| 0.663 | 0.769 |**0.962**| 0.574 | 0.678 | 0.678 |
|
Bastian1111/dqn-SpaceInvadersNoFrameskip-v4
|
Bastian1111
| 2023-08-10T05:52:53Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-06T04:19:13Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 762.50 +/- 300.08
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bastian1111 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bastian1111 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Bastian1111
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
DAMO-NLP-MT/polylm-13b
|
DAMO-NLP-MT
| 2023-08-10T05:50:39Z | 1,615 | 53 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"custom_code",
"zh",
"en",
"es",
"fr",
"pt",
"ru",
"de",
"it",
"ar",
"ja",
"ko",
"th",
"vi",
"id",
"nl",
"pl",
"tr",
"he",
"arxiv:2307.06018",
"arxiv:2104.09864",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-13T13:48:44Z |
---
language:
- zh
- en
- es
- fr
- pt
- ru
- de
- it
- ar
- ja
- ko
- th
- vi
- id
- nl
- pl
- tr
- he
tags:
- text-generation
license: apache-2.0
---
# Model Card for PolyLM (a polyglot large language model)
## Table of Contents
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Next Steps](#next-steps)
6. [Citation](#citation)
# Model Details
## Abstract
> Large language models (LLMs) demonstrate remarkable ability to comprehend, reason, and generate following natural language instructions. However, the development of LLMs has been primarily focused on high-resource languages, such as English, thereby limiting their applicability and research in other languages. Consequently, we present PolyLM, a multilingual LLM trained on 640 billion (B) tokens, available in two model sizes: 1.7B and 13B. To enhance its multilingual capabilities, we 1) integrate bilingual data into training data; and 2) adopt a curriculum learning strategy that increases the proportion of non-English data from 30% in the first stage to 60% in the final stage during pre-training. Further, we propose a multilingual self-instruct method which automatically generates 132.7K diverse multilingual instructions for model fine-tuning. To assess the model's performance, we collect several existing multilingual tasks, including multilingual understanding, question answering, generation, and translation. Extensive experiments show that PolyLM surpasses other open-source models such as LLaMA and BLOOM on multilingual tasks while maintaining comparable performance in English.
## Model Description
- **Model type:** Decoder-only Language model
- **Language(s) (NLP):** Chinese, English, Spanish, German, French, Portuguese, Russian, Italian, Arabic, Japanese, Korean, Thai, Vietnamese, Indonesian, Polish, Turkish, Dutch, Hebrew
- **License:** Apache 2.0
- **Original Checkpoints:** [Modelscope DAMO PolyLM-13B](https://www.modelscope.cn/models/damo/nlp_polylm_13b_text_generation/summary)
- **Link to paper:** [here](https://arxiv.org/pdf/2307.06018.pdf)
- **Number format:** bf16
- **Total seen tokens:** 640 billion tokens
- **Version:** Version 1.0 / 12 July 2023
# Usage
Find below some example scripts on how to use the model in `transformers`:
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("DAMO-NLP-MT/polylm-13b", legacy=False, use_fast=False)
model = AutoModelForCausalLM.from_pretrained("DAMO-NLP-MT/polylm-13b", device_map="auto", trust_remote_code=True)
model.eval()
input_doc = f"Beijing is the capital of China.\nTranslate this sentence from English to Chinese."
inputs = tokenizer(input_doc, return_tensors="pt")
generate_ids = model.generate(
inputs.input_ids,
attention_mask=inputs.attention_mask,
do_sample=False,
num_beams=4,
max_length=128,
early_stopping=True
)
decoded = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(f">>> {decoded}")
### results
### Beijing is the capital of China.\nTranslate this sentence from English to Chinese.\\n北京是中华人民共和国的首都。\n ...
```
</details>
# Uses
## Direct Use and Downstream Use
> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models
See the [research paper](https://arxiv.org/pdf/2307.06018.pdf) for further details.
## Out-of-Scope Use
More information needed.
# Bias, Risks, and Limitations
The information in this section is copied from the model's [official model card](https://arxiv.org/pdf/2307.06018.pdf):
> Our contributions are fully methodological: adding the support of multilingualism to LLM during training and SFT phases. It is unavoidable that PolyLM might exhibit several common deficiencies of language models, e.g. hallucination and toxicity. PolyLM should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
# Next Steps
We are continuously enhancing the capabilities of PolyLM by focusing on the following aspects:
1. Replacement of absolute position embeddings with RoPE, as outlined in the research paper [here](https://arxiv.org/abs/2104.09864).
2. Expansion of window size to more than 10,000.
3. Verification of lightweight techniques to quickly enhance multilingual quality, especially for low-resource languages.
# Citation
**BibTeX:**
```bibtex
@misc{wei2023polylm,
title={PolyLM: An Open Source Polyglot Large Language Model},
author={Xiangpeng Wei and Haoran Wei and Huan Lin and Tianhao Li and Pei Zhang and Xingzhang Ren and Mei Li and Yu Wan and Zhiwei Cao and Binbin Xie and Tianxiang Hu and Shangjie Li and Binyuan Hui and Bowen Yu and Dayiheng Liu and Baosong Yang and Fei Huang and Jun Xie},
year={2023},
eprint={2307.06018},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mchablani/Llama-2-7b-chat-hf-mini-lawyer-chat
|
mchablani
| 2023-08-10T05:36:12Z | 2 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-08-05T03:54:19Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
JacobAshwin/donut-base-slips
|
JacobAshwin
| 2023-08-10T05:26:01Z | 49 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-07-25T22:15:27Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-slips
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-slips
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jonalkw/Reinforce-pixelcopter
|
jonalkw
| 2023-08-10T05:25:14Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-10T05:25:11Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 9.60 +/- 12.56
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nicbull/DialoGPT-medium-leric
|
nicbull
| 2023-08-10T04:37:18Z | 150 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"chat",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-10T04:25:26Z |
---
language:
- en
pipeline_tag: conversational
tags:
- chat
---
|
Pixel390/NEWKAYV2
|
Pixel390
| 2023-08-10T04:29:35Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:Meina/MeinaMix_V10",
"base_model:adapter:Meina/MeinaMix_V10",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-10T04:09:28Z |
---
license: creativeml-openrail-m
base_model: Meina/MeinaMix_V10
instance_prompt: a uxz girl
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Pixel390/NEWKAYV2
These are LoRA adaption weights for Meina/MeinaMix_V10. The weights were trained on a uxz girl using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: True.
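A minimal loading sketch with 🧨 Diffusers (settings are illustrative assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model used for training and attach the LoRA weights.
pipe = StableDiffusionPipeline.from_pretrained("Meina/MeinaMix_V10", torch_dtype=torch.float16)
pipe.load_lora_weights("Pixel390/NEWKAYV2")
pipe.to("cuda")

image = pipe("a uxz girl").images[0]  # instance prompt used during training
image.save("uxz.png")
```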
|
chunwoolee0/keti-air-ke-t5-base-en-to-ko
|
chunwoolee0
| 2023-08-10T04:00:42Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:KETI-AIR/ke-t5-base",
"base_model:finetune:KETI-AIR/ke-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-08-10T03:27:30Z |
---
license: apache-2.0
base_model: KETI-AIR/ke-t5-base
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: keti-air-ke-t5-base-en-to-ko
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# keti-air-ke-t5-base-en-to-ko
This model is a fine-tuned version of [KETI-AIR/ke-t5-base](https://huggingface.co/KETI-AIR/ke-t5-base) on the kde4 dataset.
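A minimal inference sketch with the `transformers` pipeline (the example sentence is a placeholder):
```python
from transformers import pipeline

# Translate an English sentence to Korean with the fine-tuned checkpoint.
translator = pipeline("translation", model="chunwoolee0/keti-air-ke-t5-base-en-to-ko")
print(translator("The weather is nice today."))
```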
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
yasndr/dqn-SpaceInvadersNoFrameskip-v4
|
yasndr
| 2023-08-10T03:54:03Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-10T03:53:19Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 550.50 +/- 135.14
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yasndr -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yasndr -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga yasndr
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
IHaveNoClueAndIMustPost/orca_mini_v3_13b-GGML
|
IHaveNoClueAndIMustPost
| 2023-08-10T03:52:59Z | 0 | 1 |
transformers
|
[
"transformers",
"en",
"endpoints_compatible",
"region:us"
] | null | 2023-08-10T02:39:13Z |
---
language:
- en
library_name: transformers
---
orca_mini_v3_13b by [psmathur](https://huggingface.co/psmathur) in a couple of GGML formats. Please see the original model card [here](https://huggingface.co/psmathur/orca_mini_v3_13b) for more information.
|
debjxt/tlx-bzx-btz
|
debjxt
| 2023-08-10T03:45:14Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-10T03:32:22Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### tlx_bzx_btz Dreambooth model trained by debjxt with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
dangkhoadl/AudioResNet
|
dangkhoadl
| 2023-08-10T03:21:17Z | 38 | 0 |
transformers
|
[
"transformers",
"pytorch",
"resnet",
"endpoints_compatible",
"region:us"
] | null | 2023-08-08T01:50:29Z |
# Input tensor shape
[batch_size, Cin, num_feats, num_frames]
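A minimal sketch of building an input with this layout (the dimension sizes are placeholder assumptions):
```python
import torch

# [batch_size, Cin, num_feats, num_frames]: e.g. a batch of 8 single-channel
# 80-bin feature maps spanning 200 frames.
x = torch.randn(8, 1, 80, 200)
print(x.shape)  # torch.Size([8, 1, 80, 200])
```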
|
jordyvl/dit-base-finetuned-rvlcdip-tiny_rvl_cdip-NK1000_kd
|
jordyvl
| 2023-08-10T03:06:31Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-27T12:16:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dit-base-finetuned-rvlcdip-tiny_rvl_cdip-NK1000_kd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dit-base-finetuned-rvlcdip-tiny_rvl_cdip-NK1000_kd
This model is a fine-tuned version of [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5815
- Accuracy: 0.8055
- Brier Loss: 0.2836
- Nll: 1.6135
- F1 Micro: 0.8055
- F1 Macro: 0.8061
- Ece: 0.0597
- Aurc: 0.0526
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 125 | 1.2844 | 0.5403 | 0.5889 | 3.0582 | 0.5403 | 0.5275 | 0.0742 | 0.2209 |
| No log | 2.0 | 250 | 0.9687 | 0.655 | 0.4587 | 2.4358 | 0.655 | 0.6414 | 0.0559 | 0.1296 |
| No log | 3.0 | 375 | 0.8401 | 0.7063 | 0.4019 | 2.2308 | 0.7063 | 0.7008 | 0.0588 | 0.0990 |
| 1.234 | 4.0 | 500 | 0.8080 | 0.7145 | 0.3874 | 2.1628 | 0.7145 | 0.7163 | 0.0487 | 0.0951 |
| 1.234 | 5.0 | 625 | 0.7772 | 0.7238 | 0.3755 | 2.0380 | 0.7237 | 0.7167 | 0.0421 | 0.0914 |
| 1.234 | 6.0 | 750 | 0.7530 | 0.7498 | 0.3484 | 2.1346 | 0.7498 | 0.7464 | 0.0477 | 0.0774 |
| 1.234 | 7.0 | 875 | 0.7034 | 0.7652 | 0.3267 | 2.0596 | 0.7652 | 0.7664 | 0.0467 | 0.0678 |
| 0.3976 | 8.0 | 1000 | 0.7390 | 0.7715 | 0.3350 | 2.0568 | 0.7715 | 0.7704 | 0.0448 | 0.0763 |
| 0.3976 | 9.0 | 1125 | 0.7019 | 0.7762 | 0.3209 | 2.0168 | 0.7762 | 0.7768 | 0.0556 | 0.0769 |
| 0.3976 | 10.0 | 1250 | 0.7318 | 0.7668 | 0.3346 | 2.1148 | 0.7668 | 0.7699 | 0.0529 | 0.0792 |
| 0.3976 | 11.0 | 1375 | 0.7083 | 0.7782 | 0.3213 | 2.0671 | 0.7782 | 0.7775 | 0.0452 | 0.0756 |
| 0.1591 | 12.0 | 1500 | 0.7535 | 0.7668 | 0.3424 | 2.1407 | 0.7668 | 0.7636 | 0.0564 | 0.0845 |
| 0.1591 | 13.0 | 1625 | 0.7117 | 0.775 | 0.3288 | 2.0935 | 0.775 | 0.7766 | 0.0525 | 0.0785 |
| 0.1591 | 14.0 | 1750 | 0.6421 | 0.785 | 0.3039 | 1.9939 | 0.785 | 0.7860 | 0.0512 | 0.0643 |
| 0.1591 | 15.0 | 1875 | 0.6475 | 0.7865 | 0.3050 | 1.9301 | 0.7865 | 0.7867 | 0.0552 | 0.0636 |
| 0.1125 | 16.0 | 2000 | 0.6477 | 0.7893 | 0.3064 | 1.9442 | 0.7893 | 0.7920 | 0.0556 | 0.0684 |
| 0.1125 | 17.0 | 2125 | 0.6509 | 0.7883 | 0.3113 | 1.8957 | 0.7883 | 0.7907 | 0.0498 | 0.0710 |
| 0.1125 | 18.0 | 2250 | 0.6291 | 0.7925 | 0.3038 | 1.8697 | 0.7925 | 0.7963 | 0.0512 | 0.0677 |
| 0.1125 | 19.0 | 2375 | 0.6279 | 0.7963 | 0.2992 | 1.8155 | 0.7963 | 0.7950 | 0.0478 | 0.0647 |
| 0.095 | 20.0 | 2500 | 0.6246 | 0.7937 | 0.3008 | 1.7925 | 0.7937 | 0.7946 | 0.0595 | 0.0659 |
| 0.095 | 21.0 | 2625 | 0.6149 | 0.7953 | 0.2962 | 1.8237 | 0.7953 | 0.7951 | 0.0547 | 0.0590 |
| 0.095 | 22.0 | 2750 | 0.6196 | 0.7953 | 0.3000 | 1.8031 | 0.7953 | 0.7969 | 0.0567 | 0.0643 |
| 0.095 | 23.0 | 2875 | 0.6023 | 0.798 | 0.2932 | 1.7663 | 0.798 | 0.7983 | 0.0497 | 0.0616 |
| 0.0829 | 24.0 | 3000 | 0.6107 | 0.7943 | 0.2951 | 1.7755 | 0.7943 | 0.7958 | 0.0564 | 0.0581 |
| 0.0829 | 25.0 | 3125 | 0.5986 | 0.8015 | 0.2930 | 1.7243 | 0.8015 | 0.8027 | 0.0565 | 0.0574 |
| 0.0829 | 26.0 | 3250 | 0.5899 | 0.8005 | 0.2886 | 1.7304 | 0.8005 | 0.8021 | 0.0546 | 0.0560 |
| 0.0829 | 27.0 | 3375 | 0.5836 | 0.8023 | 0.2846 | 1.6865 | 0.8023 | 0.8024 | 0.0479 | 0.0561 |
| 0.074 | 28.0 | 3500 | 0.5824 | 0.8047 | 0.2850 | 1.6817 | 0.8047 | 0.8060 | 0.0524 | 0.0559 |
| 0.074 | 29.0 | 3625 | 0.5760 | 0.8063 | 0.2822 | 1.6505 | 0.8062 | 0.8065 | 0.0500 | 0.0546 |
| 0.074 | 30.0 | 3750 | 0.5819 | 0.8065 | 0.2843 | 1.6667 | 0.8065 | 0.8079 | 0.0563 | 0.0544 |
| 0.074 | 31.0 | 3875 | 0.5800 | 0.8045 | 0.2841 | 1.6658 | 0.8045 | 0.8059 | 0.0511 | 0.0548 |
| 0.0668 | 32.0 | 4000 | 0.5828 | 0.8053 | 0.2841 | 1.6883 | 0.8053 | 0.8054 | 0.0559 | 0.0547 |
| 0.0668 | 33.0 | 4125 | 0.5802 | 0.8037 | 0.2838 | 1.6669 | 0.8037 | 0.8038 | 0.0572 | 0.0545 |
| 0.0668 | 34.0 | 4250 | 0.5772 | 0.8067 | 0.2821 | 1.6588 | 0.8067 | 0.8083 | 0.0520 | 0.0525 |
| 0.0668 | 35.0 | 4375 | 0.5745 | 0.807 | 0.2812 | 1.6524 | 0.807 | 0.8072 | 0.0528 | 0.0528 |
| 0.0631 | 36.0 | 4500 | 0.5770 | 0.8063 | 0.2826 | 1.6433 | 0.8062 | 0.8071 | 0.0559 | 0.0528 |
| 0.0631 | 37.0 | 4625 | 0.5782 | 0.8007 | 0.2837 | 1.5953 | 0.8007 | 0.8021 | 0.0581 | 0.0541 |
| 0.0631 | 38.0 | 4750 | 0.5780 | 0.8047 | 0.2829 | 1.6275 | 0.8047 | 0.8052 | 0.0540 | 0.0521 |
| 0.0631 | 39.0 | 4875 | 0.5759 | 0.8055 | 0.2817 | 1.6162 | 0.8055 | 0.8065 | 0.0528 | 0.0529 |
| 0.0612 | 40.0 | 5000 | 0.5770 | 0.8047 | 0.2825 | 1.6131 | 0.8047 | 0.8051 | 0.0575 | 0.0524 |
| 0.0612 | 41.0 | 5125 | 0.5771 | 0.8043 | 0.2819 | 1.6015 | 0.8043 | 0.8048 | 0.0562 | 0.0519 |
| 0.0612 | 42.0 | 5250 | 0.5776 | 0.8043 | 0.2825 | 1.6152 | 0.8043 | 0.8047 | 0.0566 | 0.0527 |
| 0.0612 | 43.0 | 5375 | 0.5793 | 0.8057 | 0.2830 | 1.6196 | 0.8057 | 0.8065 | 0.0538 | 0.0527 |
| 0.06 | 44.0 | 5500 | 0.5801 | 0.8053 | 0.2835 | 1.6183 | 0.8053 | 0.8060 | 0.0618 | 0.0527 |
| 0.06 | 45.0 | 5625 | 0.5800 | 0.805 | 0.2831 | 1.6057 | 0.805 | 0.8055 | 0.0568 | 0.0530 |
| 0.06 | 46.0 | 5750 | 0.5812 | 0.805 | 0.2836 | 1.6034 | 0.805 | 0.8056 | 0.0577 | 0.0529 |
| 0.06 | 47.0 | 5875 | 0.5809 | 0.805 | 0.2834 | 1.6164 | 0.805 | 0.8056 | 0.0580 | 0.0526 |
| 0.0593 | 48.0 | 6000 | 0.5810 | 0.8057 | 0.2834 | 1.6108 | 0.8057 | 0.8064 | 0.0617 | 0.0525 |
| 0.0593 | 49.0 | 6125 | 0.5812 | 0.8053 | 0.2836 | 1.6140 | 0.8053 | 0.8058 | 0.0570 | 0.0527 |
| 0.0593 | 50.0 | 6250 | 0.5815 | 0.8055 | 0.2836 | 1.6135 | 0.8055 | 0.8061 | 0.0597 | 0.0526 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
sshh12/sdxl-lora-pokemon
|
sshh12
| 2023-08-10T03:05:05Z | 2 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-07T02:37:13Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
dataset: lambdalabs/pokemon-blip-captions
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: false
---
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
| | | | |
| ------------------------------------------------------- | ---------------------------------------------------------------------------------- | ----------------------------------------------------- | ------------------------------------------------------------- |
|  |  |  |  |
## 🧨 Diffusers Usage
```py
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16, variant="fp16", use_safetensors=True)
pipe.load_lora_weights("sshh12/sdxl-lora-pokemon")
pipe.to("cuda")
prompt = "..."
image = pipe(prompt=prompt).images[0]
image
```
## Training
```py
MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
DATASET_NAME="lambdalabs/pokemon-blip-captions"
!accelerate launch train_text_to_image_lora_sdxl.py \
--pretrained_model_name_or_path="$MODEL_NAME" \
--pretrained_vae_model_name_or_path="madebyollin/sdxl-vae-fp16-fix" \
--dataset_name="$DATASET_NAME" \
--caption_column="text" \
--resolution=1024 \
--random_flip \
--mixed_precision="fp16" \
--use_8bit_adam \
--train_batch_size=1 \
--gradient_accumulation_steps=8 \
--num_train_epochs=200 \
--checkpointing_steps=500 \
--learning_rate=1e-04 \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--seed=0 \
--validation_prompt="cute dragon creature" \
--enable_xformers_memory_efficient_attention \
--report_to="wandb"
```
|
rriverar75/vit-model
|
rriverar75
| 2023-08-10T02:34:32Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-10T02:08:37Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
widget:
- src: >-
https://huggingface.co/rriverar75/vit-model/resolve/main/healthy.jpeg
example_title: Healthy
- src: >-
https://huggingface.co/rriverar75/vit-model/resolve/main/bean_rust.jpeg
example_title: Bean Rust
model-index:
- name: vit-model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0189
- Accuracy: 1.0
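As a quick way to try the checkpoint, here is a minimal inference sketch (illustrative only, not part of the original card); it uses the standard `transformers` image-classification pipeline and one of the widget images listed above:
```python
from transformers import pipeline

# Illustrative usage sketch; the image URL is the "Healthy" widget example from this card.
classifier = pipeline("image-classification", model="rriverar75/vit-model")
preds = classifier("https://huggingface.co/rriverar75/vit-model/resolve/main/healthy.jpeg")
print(preds)  # list of {label, score} dicts for the bean-leaf classes
```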
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1527 | 3.85 | 500 | 0.0189 | 1.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Pixel390/NEWKAY
|
Pixel390
| 2023-08-10T02:29:53Z | 0 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:Meina/MeinaMix_V10",
"base_model:adapter:Meina/MeinaMix_V10",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-10T02:12:50Z |
---
license: creativeml-openrail-m
base_model: Meina/MeinaMix_V10
instance_prompt: a uxz green haired girl
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Pixel390/NEWKAY
These are LoRA adaptation weights for Meina/MeinaMix_V10. The weights were trained on a uxz green haired girl using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: True.
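A minimal loading sketch with 🧨 Diffusers (illustrative only; it assumes the base model is available as a Diffusers pipeline on the Hub, and the prompt simply reuses the instance prompt above):
```python
import torch
from diffusers import DiffusionPipeline

# Illustrative sketch: load the base model, then apply these LoRA weights.
pipe = DiffusionPipeline.from_pretrained("Meina/MeinaMix_V10", torch_dtype=torch.float16)
pipe.load_lora_weights("Pixel390/NEWKAY")
pipe.to("cuda")

image = pipe(prompt="a uxz green haired girl").images[0]
image.save("uxz_girl.png")
```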
|
dtthanh/llama-2-7b-und-lora-2.7
|
dtthanh
| 2023-08-10T02:20:10Z | 3 | 1 |
peft
|
[
"peft",
"vi",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2023-08-06T10:41:24Z |
---
library_name: peft
license: cc-by-sa-4.0
language:
- vi
---
### Adapter info
This is a LoRA adapter trained on a dataset of only 360 Vietnamese sentences, using the "text" column in a format like:
```
<s>[INST] "Bạn bè có phúc cùng chia."[/INST] Bạn bè có phúc cùng chia. Có họa trốn sạch chạy đi phương nào? Tay trắng làm nên… mấy chục ngàn bạc nợ. </s>
```
or
```
<s>[INST] Ai bảo chăn trâu là khổ. [/INST] Ai bảo chăn trâu là khổ. Tôi chăn chồng còn khổ hơn trâu. Trâu đi trâu biêt đường về. Chồng đi không biết dường về như trâu. </s>
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Usage
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaTokenizer, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer
model_name = "NousResearch/llama-2-7b-chat-hf"
adapters_name = "dtthanh/llama-2-7b-und-lora-2.7"
print(f"Starting to load the model {model_name} into memory")
m = AutoModelForCausalLM.from_pretrained(
model_name,
# base_model_name_or_path # NousResearch/llama-2-7b-chat-hf
#load_in_4bit=True,
torch_dtype=torch.bfloat16,
device_map={"": 0}
)
m = PeftModel.from_pretrained(m, adapters_name)
m = m.merge_and_unload()
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token_id = 18610 # _***
print(f"Successfully loaded the model {model_name} into memory")
```
### Framework versions
- PEFT 0.4.0
|
ScottShao/llama2-7b-200steps-finetunined-sxl
|
ScottShao
| 2023-08-10T02:11:23Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-10T02:11:06Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
wangxso/q-FrozenLake-v1-4x4-noSlippery
|
wangxso
| 2023-08-10T02:01:58Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-10T02:01:55Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on the course version you follow

# `load_from_hub` is the helper defined in the Deep RL course notebook
# (it downloads the pickled Q-table dict from the Hub and loads it).
model = load_from_hub(repo_id="wangxso/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
TheTravellingEngineer/bloom-1b1-RLHF-v2
|
TheTravellingEngineer
| 2023-08-10T01:39:33Z | 1,662 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-10T01:30:21Z |
The base model is bigscience/bloom-1b1. It was fine-tuned using RLHF; the dataset and the model prompt are similar to those of the original model.
This repo contains the merged fp16 model.
**Legal Disclaimer: This model is bound by the usage restrictions of the original BLOOM model and comes with no warranty or guarantees of any kind.**
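A minimal text-generation sketch with `transformers` (illustrative only; the prompt and generation settings are arbitrary):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheTravellingEngineer/bloom-1b1-RLHF-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("What are the three primary colors?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```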
---
- license:
- bigscience-bloom-rail-1.0 <br>
- datasets:
- Anthropic/hh-rlhf <br>
- language:
- en <br>
- reference: https://github.com/hiyouga/LLaMA-Efficient-Tuning/tree/main
---
|
tianpf/llama2-qlora-finetunined-law
|
tianpf
| 2023-08-10T01:38:54Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-10T01:38:51Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
jaykei/Zuko
|
jaykei
| 2023-08-10T01:17:21Z | 0 | 1 | null |
[
"en",
"license:openrail",
"region:us"
] | null | 2023-07-05T05:16:36Z |
---
license: openrail
language:
- en
---
|
dana11235/ppo-Huggy
|
dana11235
| 2023-08-10T01:16:01Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-10T01:15:51Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: dana11235/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
toastedshibe/lora-trained-xl-colab
|
toastedshibe
| 2023-08-10T01:04:48Z | 5 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-09T23:49:50Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - toastedshibe/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
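A minimal Diffusers loading sketch (illustrative only; it reuses the base model, instance prompt, and VAE named above):
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Illustrative sketch: load SDXL with the fp16-fix VAE, then apply these LoRA weights.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16, variant="fp16"
)
pipe.load_lora_weights("toastedshibe/lora-trained-xl-colab")
pipe.to("cuda")

image = pipe(prompt="a photo of sks dog").images[0]
image.save("sks_dog.png")
```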
|
junikeda/Reinforce-PolicyGradient-CartPole-v1
|
junikeda
| 2023-08-10T01:01:08Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-10T01:00:57Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PolicyGradient-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Jiuzhouh/flan-t5-xxl-lora-t2g-webnlg
|
Jiuzhouh
| 2023-08-10T00:35:01Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-10T00:34:49Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
Caiquejajaja/Sla
|
Caiquejajaja
| 2023-08-10T00:28:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-10T00:27:45Z |
```bash
git lfs install
git clone https://huggingface.co/facebook/bart-large-mnli
```
|
allenbc/q-FrozenLake-v1-4x4-noSlippery
|
allenbc
| 2023-08-10T00:26:14Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-10T00:26:09Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on the course version you follow

# `load_from_hub` is the helper defined in the Deep RL course notebook
# (it downloads the pickled Q-table dict from the Hub and loads it).
model = load_from_hub(repo_id="allenbc/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
asenella/mhd_config_1_MMVAE_beta_5_scale_True_seed_1
|
asenella
| 2023-08-10T00:03:06Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-10T00:02:56Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
Pixel390/GIRLKAY
|
Pixel390
| 2023-08-09T23:53:42Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:Meina/MeinaMix_V10",
"base_model:adapter:Meina/MeinaMix_V10",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-09T23:09:34Z |
---
license: creativeml-openrail-m
base_model: Meina/MeinaMix_V10
instance_prompt: a uxz girl
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Pixel390/GIRLKAY
These are LoRA adaptation weights for Meina/MeinaMix_V10. The weights were trained on a uxz girl using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: True.
|
tingchih/pretrain_doc_concat
|
tingchih
| 2023-08-09T23:38:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-31T05:04:43Z |
This is a pre-trained baseline model for summarization. The input is built by concatenating all articles in one cluster.
The file `example.json` shows an example result.
Pipeline:
input -> summarization tokenizer -> perceiver -> summarization model -> summary
|
good-gaming/distilbert-base-uncased-finetuned-emotion
|
good-gaming
| 2023-08-09T23:21:58Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T22:48:26Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9272353554627635
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2133
- Accuracy: 0.927
- F1: 0.9272
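A minimal inference sketch (illustrative only; the labels follow the `emotion` dataset the model was fine-tuned on):
```python
from transformers import pipeline

# Illustrative usage sketch for the fine-tuned emotion classifier.
classifier = pipeline(
    "text-classification",
    model="good-gaming/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled the package finally arrived!"))
```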
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8118 | 1.0 | 250 | 0.3108 | 0.905 | 0.9056 |
| 0.2485 | 2.0 | 500 | 0.2133 | 0.927 | 0.9272 |
### Framework versions
- Transformers 4.31.0
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.13.3
|
omersen/omer_trained_model
|
omersen
| 2023-08-09T23:16:45Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-09T22:51:44Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - omersen/omer_trained_model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of person using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
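Since this is a full DreamBooth fine-tune rather than a LoRA, it can be loaded like any Stable Diffusion checkpoint; a minimal, illustrative sketch using the instance prompt above:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("omersen/omer_trained_model", torch_dtype=torch.float16)
pipe.to("cuda")

image = pipe(prompt="a photo of person").images[0]
image.save("person.png")
```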
|
knvarad/t5
|
knvarad
| 2023-08-09T22:41:08Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-08T23:29:01Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model-varad1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model-varad1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8679
- Validation Loss: 3.5523
- Train Rougel: tf.Tensor(0.11994212, shape=(), dtype=float32)
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rougel | Epoch |
|:----------:|:---------------:|:----------------------------------------------:|:-----:|
| 3.8679 | 3.5523 | tf.Tensor(0.11994212, shape=(), dtype=float32) | 0 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.10.1
- Datasets 2.13.1
- Tokenizers 0.12.1
|
theojolliffe/flan-recipes
|
theojolliffe
| 2023-08-09T22:39:32Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-09T22:03:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-recipes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-recipes
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 71.0741
- Rouge2: 34.937
- Rougel: 71.129
- Rougelsum: 71.0758
- Gen Len: 4.0103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 873 | nan | 71.0741 | 34.937 | 71.129 | 71.0758 | 4.0103 |
| 0.0 | 2.0 | 1746 | nan | 71.0741 | 34.937 | 71.129 | 71.0758 | 4.0103 |
| 0.0 | 3.0 | 2619 | nan | 71.0741 | 34.937 | 71.129 | 71.0758 | 4.0103 |
| 0.0 | 4.0 | 3492 | nan | 71.0741 | 34.937 | 71.129 | 71.0758 | 4.0103 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
gang21/llama2-icd10-peft
|
gang21
| 2023-08-09T22:33:20Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T22:05:35Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
jucaro/donut-base-sroie
|
jucaro
| 2023-08-09T22:19:48Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-08-09T19:07:50Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
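A minimal inference sketch (illustrative only; `receipt.png` and the task start token are placeholders, since the card does not document the fine-tuning prompt):
```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("jucaro/donut-base-sroie")
model = VisionEncoderDecoderModel.from_pretrained("jucaro/donut-base-sroie")

image = Image.open("receipt.png").convert("RGB")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s>"  # placeholder; replace with the task start token used during fine-tuning
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs)[0])
```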
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
agustinl/ppo-SnowballTarget
|
agustinl
| 2023-08-09T22:18:40Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-09T22:18:36Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: agustinl/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sergeindamix/anciano_pendejo
|
sergeindamix
| 2023-08-09T22:11:22Z | 2 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-08-09T22:11:17Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sks person
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
rizquuula/RoBERTa-IndoSQuADv2_1691592486-16-2e-05-0.01-5
|
rizquuula
| 2023-08-09T22:04:20Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-09T14:51:09Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: RoBERTa-IndoSQuADv2_1691592486-16-2e-05-0.01-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-IndoSQuADv2_1691592486-16-2e-05-0.01-5
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1516
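A minimal extractive question-answering sketch (illustrative only; the example question/context pair is not from the training data):
```python
from transformers import pipeline

# Illustrative usage sketch for the Indonesian SQuAD-style QA checkpoint.
qa = pipeline(
    "question-answering",
    model="rizquuula/RoBERTa-IndoSQuADv2_1691592486-16-2e-05-0.01-5",
)
result = qa(
    question="Siapa presiden pertama Indonesia?",
    context="Presiden pertama Indonesia adalah Soekarno, yang menjabat dari tahun 1945 hingga 1967.",
)
print(result)
```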
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2457 | 1.0 | 8145 | 2.1159 |
| 1.7442 | 2.0 | 16290 | 2.0275 |
| 1.4963 | 3.0 | 24435 | 2.0147 |
| 1.301 | 4.0 | 32580 | 2.0607 |
| 1.1569 | 5.0 | 40725 | 2.1516 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
ittailup/lallama-13b-alpha
|
ittailup
| 2023-08-09T21:56:02Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:finetune:meta-llama/Llama-2-13b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-07T21:10:18Z |
---
base_model: meta-llama/Llama-2-13b-hf
tags:
- generated_from_trainer
model-index:
- name: lallama-13b-alpha
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lallama-13b-alpha
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on the None dataset.
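A minimal text-generation sketch (illustrative only; the prompt format is a guess, since the training data is not documented here):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ittailup/lallama-13b-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Hello, how are you today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```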
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
|
muhtasham/bert_uncased_L-2_H-128_A-2-finetuned-emotion-finetuned-tweet
|
muhtasham
| 2023-08-09T21:39:47Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-16T16:28:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: bert_uncased_L-2_H-128_A-2-finetuned-emotion-finetuned-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87168
- name: F1
type: f1
value: 0.8716747437975058
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-2_H-128_A-2-finetuned-emotion-finetuned-tweet
This model is a fine-tuned version of [muhtasham/bert_uncased_L-2_H-128_A-2-finetuned-emotion](https://huggingface.co/muhtasham/bert_uncased_L-2_H-128_A-2-finetuned-emotion) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4004
- Accuracy: 0.8717
- F1: 0.8717
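A minimal inference sketch (illustrative only; depending on the saved config, the output labels may appear as generic LABEL_0/LABEL_1 ids rather than names):
```python
from transformers import pipeline

# Illustrative usage sketch for the small BERT checkpoint fine-tuned on IMDB.
classifier = pipeline(
    "text-classification",
    model="muhtasham/bert_uncased_L-2_H-128_A-2-finetuned-emotion-finetuned-tweet",
)
print(classifier("A surprisingly heartfelt and well-acted film."))
```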
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4751 | 1.28 | 500 | 0.3880 | 0.828 | 0.8277 |
| 0.3453 | 2.56 | 1000 | 0.3282 | 0.8608 | 0.8607 |
| 0.2973 | 3.84 | 1500 | 0.3140 | 0.8695 | 0.8695 |
| 0.26 | 5.12 | 2000 | 0.3154 | 0.8736 | 0.8735 |
| 0.2218 | 6.39 | 2500 | 0.3144 | 0.8756 | 0.8756 |
| 0.1977 | 7.67 | 3000 | 0.3197 | 0.876 | 0.8760 |
| 0.1656 | 8.95 | 3500 | 0.3526 | 0.8737 | 0.8735 |
| 0.1404 | 10.23 | 4000 | 0.3865 | 0.8691 | 0.8689 |
| 0.121 | 11.51 | 4500 | 0.4004 | 0.8717 | 0.8717 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
mandeepbagga/llama-2-13b-infyGPT
|
mandeepbagga
| 2023-08-09T21:38:55Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T21:38:32Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
omersen/path-to-save-model
|
omersen
| 2023-08-09T21:29:59Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-09T20:58:14Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - omersen/path-to-save-model
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|