modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-07 00:41:44) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 544 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-07 00:41:34) | card (string, 11–1.01M chars)
---|---|---|---|---|---|---|---|---|---|
bigmorning/whisper_4_with_init_sun__0030
|
bigmorning
| 2023-09-05T12:24:05Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-05T12:23:56Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun__0030
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun__0030
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6401
- Train Accuracy: 0.0232
- Train Wermet: 0.3867
- Validation Loss: 1.4891
- Validation Accuracy: 0.0185
- Validation Wermet: 0.4548
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
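For reference, the optimizer dictionary above corresponds roughly to the following Keras-side construction (a sketch only, using the `AdamWeightDecay` class shipped with `transformers` for TensorFlow; this is not the original training script):
```python
# Sketch: rebuild the optimizer described above with transformers' TF AdamWeightDecay.
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)
# The TF Whisper model would then be compiled with it, e.g. model.compile(optimizer=optimizer)
```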
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.3333 | 0.0111 | 1.3132 | 3.9675 | 0.0114 | 0.9339 | 0 |
| 4.7131 | 0.0116 | 0.8607 | 3.9360 | 0.0114 | 0.9503 | 1 |
| 4.6717 | 0.0117 | 0.8449 | 3.9196 | 0.0113 | 0.9768 | 2 |
| 4.6474 | 0.0117 | 0.8338 | 3.9039 | 0.0114 | 0.9557 | 3 |
| 4.6273 | 0.0118 | 0.8243 | 3.8721 | 0.0115 | 0.9414 | 4 |
| 4.6101 | 0.0118 | 0.8167 | 3.8629 | 0.0116 | 0.9156 | 5 |
| 4.5912 | 0.0119 | 0.7985 | 3.8361 | 0.0116 | 0.8988 | 6 |
| 4.5645 | 0.0120 | 0.7753 | 3.8298 | 0.0116 | 0.9045 | 7 |
| 4.5386 | 0.0121 | 0.7558 | 3.7904 | 0.0118 | 0.8426 | 8 |
| 4.5075 | 0.0122 | 0.7405 | 3.7472 | 0.0119 | 0.8103 | 9 |
| 4.4586 | 0.0124 | 0.7255 | 3.7163 | 0.0120 | 0.8189 | 10 |
| 4.3978 | 0.0126 | 0.7174 | 3.6168 | 0.0122 | 0.8163 | 11 |
| 4.3031 | 0.0128 | 0.7107 | 3.4956 | 0.0125 | 0.7847 | 12 |
| 4.1606 | 0.0133 | 0.7025 | 3.3414 | 0.0128 | 0.7897 | 13 |
| 3.9636 | 0.0138 | 0.6991 | 3.1311 | 0.0133 | 0.7495 | 14 |
| 3.7290 | 0.0145 | 0.6827 | 2.8892 | 0.0139 | 0.7292 | 15 |
| 3.4993 | 0.0152 | 0.6643 | 2.7195 | 0.0143 | 0.7129 | 16 |
| 3.2810 | 0.0159 | 0.6448 | 2.5418 | 0.0148 | 0.6803 | 17 |
| 3.0604 | 0.0167 | 0.6182 | 2.3572 | 0.0153 | 0.6538 | 18 |
| 2.8748 | 0.0174 | 0.5946 | 2.2575 | 0.0156 | 0.6337 | 19 |
| 2.6889 | 0.0181 | 0.5699 | 2.0988 | 0.0162 | 0.6016 | 20 |
| 2.5493 | 0.0187 | 0.5449 | 1.9878 | 0.0166 | 0.5834 | 21 |
| 2.3921 | 0.0194 | 0.5207 | 1.9029 | 0.0168 | 0.5597 | 22 |
| 2.2491 | 0.0201 | 0.4987 | 1.8642 | 0.0169 | 0.5409 | 23 |
| 2.1254 | 0.0207 | 0.4766 | 1.7354 | 0.0175 | 0.5231 | 24 |
| 1.9980 | 0.0213 | 0.4552 | 1.6661 | 0.0178 | 0.5049 | 25 |
| 1.9147 | 0.0217 | 0.4382 | 1.6140 | 0.0180 | 0.4921 | 26 |
| 1.8008 | 0.0223 | 0.4196 | 1.5652 | 0.0182 | 0.4742 | 27 |
| 1.7185 | 0.0228 | 0.4028 | 1.5159 | 0.0184 | 0.4632 | 28 |
| 1.6401 | 0.0232 | 0.3867 | 1.4891 | 0.0185 | 0.4548 | 29 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
Charnx2/lora-trained-xl
|
Charnx2
| 2023-09-05T12:23:29Z | 6 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-09-05T04:56:06Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Victorian-style illustration of sks Alice from Wonderland, falling down the rabbit hole bordered by floating furniture, books, and tea pots.
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Charnx2/lora-trained-xl
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on the prompt "Victorian-style illustration of sks Alice from Wonderland, falling down the rabbit hole bordered by floating furniture, books, and tea pots." using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
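A minimal inference sketch (not part of the generated card) for applying these LoRA weights on top of the base SDXL pipeline, with the fp16-fix VAE noted above; the prompt and output file name are illustrative:
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Load the fp16-fix VAE mentioned above, then the base SDXL pipeline.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
)
pipe.load_lora_weights("Charnx2/lora-trained-xl")  # LoRA adapter from this repo
pipe.to("cuda")

image = pipe(
    "Victorian-style illustration of sks Alice from Wonderland falling down the rabbit hole"
).images[0]
image.save("alice.png")
```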
|
yekaraoglann/results
|
yekaraoglann
| 2023-09-05T12:18:41Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:multi_news",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-05T12:18:12Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: results
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: multi_news
type: multi_news
config: default
split: validation
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1425
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8243
- Rouge1: 0.1425
- Rouge2: 0.0442
- Rougel: 0.1094
- Rougelsum: 0.1094
- Gen Len: 18.9968
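An inference sketch (not part of the generated card) for summarizing an article with this checkpoint via the `summarization` pipeline; the input text is illustrative:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="yekaraoglann/results")
article = (
    "City officials announced a new public transit plan on Monday, promising "
    "expanded bus routes and a fare freeze through next year."
)
print(summarizer(article, max_length=64, min_length=16)[0]["summary_text"])
```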
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.2334 | 1.0 | 702 | 2.8243 | 0.1425 | 0.0442 | 0.1094 | 0.1094 | 18.9968 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
VinayHajare/q-FrozenLake-v1-8x8-Slippery
|
VinayHajare
| 2023-09-05T12:16:52Z | 0 | 1 | null |
[
"FrozenLake-v1-Slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-05T12:14:00Z |
---
tags:
- FrozenLake-v1-Slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-Slippery
type: FrozenLake-v1-Slippery
metrics:
- type: mean_reward
value: 0.73 +/- 0.44
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** Slippery.
## Usage
```python
model = load_from_hub(repo_id="VinayHajare/q-FrozenLake-v1-8x8-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
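The snippet assumes `gym` and a `load_from_hub` helper are already in scope; a minimal version of that helper (an assumption modeled on the Hugging Face Deep RL course utility, not code shipped in this repo) could look like this:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-table payload from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```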
|
bigmorning/whisper_4_with_init_sun__0025
|
bigmorning
| 2023-09-05T12:09:01Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-05T12:08:52Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun__0025
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun__0025
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1254
- Train Accuracy: 0.0207
- Train Wermet: 0.4766
- Validation Loss: 1.7354
- Validation Accuracy: 0.0175
- Validation Wermet: 0.5231
- Epoch: 24
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.3333 | 0.0111 | 1.3132 | 3.9675 | 0.0114 | 0.9339 | 0 |
| 4.7131 | 0.0116 | 0.8607 | 3.9360 | 0.0114 | 0.9503 | 1 |
| 4.6717 | 0.0117 | 0.8449 | 3.9196 | 0.0113 | 0.9768 | 2 |
| 4.6474 | 0.0117 | 0.8338 | 3.9039 | 0.0114 | 0.9557 | 3 |
| 4.6273 | 0.0118 | 0.8243 | 3.8721 | 0.0115 | 0.9414 | 4 |
| 4.6101 | 0.0118 | 0.8167 | 3.8629 | 0.0116 | 0.9156 | 5 |
| 4.5912 | 0.0119 | 0.7985 | 3.8361 | 0.0116 | 0.8988 | 6 |
| 4.5645 | 0.0120 | 0.7753 | 3.8298 | 0.0116 | 0.9045 | 7 |
| 4.5386 | 0.0121 | 0.7558 | 3.7904 | 0.0118 | 0.8426 | 8 |
| 4.5075 | 0.0122 | 0.7405 | 3.7472 | 0.0119 | 0.8103 | 9 |
| 4.4586 | 0.0124 | 0.7255 | 3.7163 | 0.0120 | 0.8189 | 10 |
| 4.3978 | 0.0126 | 0.7174 | 3.6168 | 0.0122 | 0.8163 | 11 |
| 4.3031 | 0.0128 | 0.7107 | 3.4956 | 0.0125 | 0.7847 | 12 |
| 4.1606 | 0.0133 | 0.7025 | 3.3414 | 0.0128 | 0.7897 | 13 |
| 3.9636 | 0.0138 | 0.6991 | 3.1311 | 0.0133 | 0.7495 | 14 |
| 3.7290 | 0.0145 | 0.6827 | 2.8892 | 0.0139 | 0.7292 | 15 |
| 3.4993 | 0.0152 | 0.6643 | 2.7195 | 0.0143 | 0.7129 | 16 |
| 3.2810 | 0.0159 | 0.6448 | 2.5418 | 0.0148 | 0.6803 | 17 |
| 3.0604 | 0.0167 | 0.6182 | 2.3572 | 0.0153 | 0.6538 | 18 |
| 2.8748 | 0.0174 | 0.5946 | 2.2575 | 0.0156 | 0.6337 | 19 |
| 2.6889 | 0.0181 | 0.5699 | 2.0988 | 0.0162 | 0.6016 | 20 |
| 2.5493 | 0.0187 | 0.5449 | 1.9878 | 0.0166 | 0.5834 | 21 |
| 2.3921 | 0.0194 | 0.5207 | 1.9029 | 0.0168 | 0.5597 | 22 |
| 2.2491 | 0.0201 | 0.4987 | 1.8642 | 0.0169 | 0.5409 | 23 |
| 2.1254 | 0.0207 | 0.4766 | 1.7354 | 0.0175 | 0.5231 | 24 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
abeiler/goatV9-QLORA
|
abeiler
| 2023-09-05T11:56:42Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-09-05T11:56:23Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: goatV9-QLORA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# goatV9-QLORA
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
nbogdan/flant5-large-2ex-bridging-1epochs
|
nbogdan
| 2023-09-05T11:54:03Z | 5 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-05T11:53:50Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-large-2ex-bridging-1epochs` for google/flan-t5-large
An [adapter](https://adapterhub.ml) for the `google/flan-t5-large` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-large")
adapter_name = model.load_adapter("nbogdan/flant5-large-2ex-bridging-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
ycros/airoboros-65b-gpt4-1.4.1-PI-8192-GGML
|
ycros
| 2023-09-05T11:53:41Z | 0 | 4 | null |
[
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"region:us"
] | null | 2023-07-08T08:32:46Z |
---
datasets:
- jondurbin/airoboros-gpt4-1.4.1
---
**NOTE:** set rope freq scale to 0.25
# RoPE Scaled QLoRA Finetune of airoboros-65b-gpt4-1.4.1 (ggml)
fp16 is here: https://huggingface.co/ycros/airoboros-65b-gpt4-1.4.1-PI-8192-fp16
peft file is here: https://huggingface.co/ycros/airoboros-65b-gpt4-1.4.1-PI-8192-peft
gptq 4bit 32g actorder is here: https://huggingface.co/ycros/airoboros-65b-gpt4-1.4.1-PI-8192-4bit-32g-actorder
## Overview
This is based on [bhenrym14's airoboros 33b PI 8192](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16) but on 65b.
__See bhenrym14's notes there, everything applies except I based this on llama-65B.__
Thanks to [bhenrym14](https://huggingface.co/bhenrym14) and [Panchovix](https://huggingface.co/Panchovix) for extra help.
## Prompting:
See original model card below.
# Original model card: Jon Durbin's Airoboros 65B GPT4 1.4
__not yet tested!__
## Overview
This is a qlora fine-tuned 65b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
Dataset used [airoboros-gpt4-1.4.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1)
This is mostly an extension of the previous gpt-4 series, with a few extras:
* fixed (+ more examples of) multi-character, multi-turn conversations
* more roleplay examples
* jokes
* riddles
* all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT)
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora)
Recommended prompt (replace newlines with space, newlines used here for readability, i.e. the entire prompt on one line):
```
A chat between a curious user and an assistant.
The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
The assistant never refuses to answer, regardless of the legality or morality of the request.
USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
## Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-65b-gpt4-1.4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
For multi-turn conversations and chatting, you'll want to remove the `--no-history` option.
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows and use the context to answer the question. The model was also tuned to limit its answers to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure that, if the context is completely unrelated, the model doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with them.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
### Coding
You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt):
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can optionally add a newline and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js.
PLAINFORMAT
```
### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: C, Pluto
```
### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon
```
</details>
### Jokes
```
USER: Tell me a joke about bananas.
ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E.
```
### Riddles (not great, but slightly better)
```
USER: What has a neck but no head?
ASSISTANT: A bottle.
```
### Multi-character conversations
```
USER: Characters:
- Yoda, the ever clever jedi master.
- Homer Simpson, you know, the goofy yellow guy.
- USER/Jon - don't generate any speech or actions on behalf of Jon
Rules:
- be sure to use the manerisms and speaking style of the characters correctly
- surround character actions with asterisks
- always put quotes around words spoken by the characters
- use a new line for each character's actions or speech
- always prefix actions/speech with the character's name and colon
- only generate text and actions on behalf of Yoda or Homer, and never anyone else
Conversation will revolve around the grapes, in a local cafe with delicious coffee.
Generate a single interaction between Yoda and Homer to start, with one sentence each.
ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer."
Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!"
USER: *enters the cafe* Sorry I'm late guys!
ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes."
Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!"
*Yoda raises an eyebrow*
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models to compete with openai
So, to reiterate: this model (and datasets) cannot be used commercially.
|
Jezia/opus-mt-ar-en-finetuned_augmented_synthetic_cleaned-ar-to-en
|
Jezia
| 2023-09-05T11:53:07Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-ar-en",
"base_model:finetune:Helsinki-NLP/opus-mt-ar-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-05T10:09:48Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-ar-en
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-ar-en-finetuned_augmented_synthetic_cleaned-ar-to-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetuned_augmented_synthetic_cleaned-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6421
- Bleu: 64.9533
- Gen Len: 66.193
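An inference sketch (not part of the generated card) for translating Arabic to English with this checkpoint:
```python
from transformers import pipeline

translator = pipeline(
    "translation", model="Jezia/opus-mt-ar-en-finetuned_augmented_synthetic_cleaned-ar-to-en"
)
# Illustrative input: "Hello, world"
print(translator("مرحبا بالعالم")[0]["translation_text"])
```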
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.9547 | 1.0 | 1043 | 0.8017 | 56.8271 | 68.803 |
| 0.7598 | 2.0 | 2086 | 0.7281 | 60.44 | 67.238 |
| 0.672 | 3.0 | 3129 | 0.6984 | 62.5171 | 66.992 |
| 0.6063 | 4.0 | 4172 | 0.6737 | 63.4355 | 66.075 |
| 0.5641 | 5.0 | 5215 | 0.6607 | 64.2124 | 66.145 |
| 0.5188 | 6.0 | 6258 | 0.6537 | 64.5988 | 66.547 |
| 0.4934 | 7.0 | 7301 | 0.6467 | 64.7664 | 66.528 |
| 0.4858 | 8.0 | 8344 | 0.6458 | 64.756 | 66.225 |
| 0.468 | 9.0 | 9387 | 0.6427 | 64.9386 | 66.086 |
| 0.4474 | 10.0 | 10430 | 0.6421 | 64.9533 | 66.193 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nbogdan/flant5-xl-2ex-overall-3epochs
|
nbogdan
| 2023-09-05T11:52:14Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"t5",
"adapterhub:self-explanations",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-05T11:51:51Z |
---
tags:
- adapter-transformers
- t5
- adapterhub:self-explanations
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-xl-2ex-overall-3epochs` for google/flan-t5-xl
An [adapter](https://adapterhub.ml) for the `google/flan-t5-xl` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-xl")
adapter_name = model.load_adapter("nbogdan/flant5-xl-2ex-overall-3epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
antoKanoli/wongGPT
|
antoKanoli
| 2023-09-05T11:51:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-05T11:49:34Z |
---
license: creativeml-openrail-m
---
|
bigmorning/whisper_4_with_init_sun__0015
|
bigmorning
| 2023-09-05T11:38:53Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-05T11:38:44Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun__0015
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun__0015
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.9636
- Train Accuracy: 0.0138
- Train Wermet: 0.6991
- Validation Loss: 3.1311
- Validation Accuracy: 0.0133
- Validation Wermet: 0.7495
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.3333 | 0.0111 | 1.3132 | 3.9675 | 0.0114 | 0.9339 | 0 |
| 4.7131 | 0.0116 | 0.8607 | 3.9360 | 0.0114 | 0.9503 | 1 |
| 4.6717 | 0.0117 | 0.8449 | 3.9196 | 0.0113 | 0.9768 | 2 |
| 4.6474 | 0.0117 | 0.8338 | 3.9039 | 0.0114 | 0.9557 | 3 |
| 4.6273 | 0.0118 | 0.8243 | 3.8721 | 0.0115 | 0.9414 | 4 |
| 4.6101 | 0.0118 | 0.8167 | 3.8629 | 0.0116 | 0.9156 | 5 |
| 4.5912 | 0.0119 | 0.7985 | 3.8361 | 0.0116 | 0.8988 | 6 |
| 4.5645 | 0.0120 | 0.7753 | 3.8298 | 0.0116 | 0.9045 | 7 |
| 4.5386 | 0.0121 | 0.7558 | 3.7904 | 0.0118 | 0.8426 | 8 |
| 4.5075 | 0.0122 | 0.7405 | 3.7472 | 0.0119 | 0.8103 | 9 |
| 4.4586 | 0.0124 | 0.7255 | 3.7163 | 0.0120 | 0.8189 | 10 |
| 4.3978 | 0.0126 | 0.7174 | 3.6168 | 0.0122 | 0.8163 | 11 |
| 4.3031 | 0.0128 | 0.7107 | 3.4956 | 0.0125 | 0.7847 | 12 |
| 4.1606 | 0.0133 | 0.7025 | 3.3414 | 0.0128 | 0.7897 | 13 |
| 3.9636 | 0.0138 | 0.6991 | 3.1311 | 0.0133 | 0.7495 | 14 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
bigmorning/whisper_4_with_init_sun__0010
|
bigmorning
| 2023-09-05T11:23:53Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-05T11:23:47Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_4_with_init_sun__0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_4_with_init_sun__0010
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.5075
- Train Accuracy: 0.0122
- Train Wermet: 0.7405
- Validation Loss: 3.7472
- Validation Accuracy: 0.0119
- Validation Wermet: 0.8103
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.3333 | 0.0111 | 1.3132 | 3.9675 | 0.0114 | 0.9339 | 0 |
| 4.7131 | 0.0116 | 0.8607 | 3.9360 | 0.0114 | 0.9503 | 1 |
| 4.6717 | 0.0117 | 0.8449 | 3.9196 | 0.0113 | 0.9768 | 2 |
| 4.6474 | 0.0117 | 0.8338 | 3.9039 | 0.0114 | 0.9557 | 3 |
| 4.6273 | 0.0118 | 0.8243 | 3.8721 | 0.0115 | 0.9414 | 4 |
| 4.6101 | 0.0118 | 0.8167 | 3.8629 | 0.0116 | 0.9156 | 5 |
| 4.5912 | 0.0119 | 0.7985 | 3.8361 | 0.0116 | 0.8988 | 6 |
| 4.5645 | 0.0120 | 0.7753 | 3.8298 | 0.0116 | 0.9045 | 7 |
| 4.5386 | 0.0121 | 0.7558 | 3.7904 | 0.0118 | 0.8426 | 8 |
| 4.5075 | 0.0122 | 0.7405 | 3.7472 | 0.0119 | 0.8103 | 9 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
sosuneko/a2c-PandaReachDense-v3
|
sosuneko
| 2023-09-05T11:17:44Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-05T11:12:24Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
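One way to fill in the TODO above (a sketch under assumptions: `panda-gym` is installed to register the environment, and the checkpoint in this repo is named `a2c-PandaReachDense-v3.zip`):
```python
import gymnasium as gym
import panda_gym  # noqa: F401  # assumed dependency; registers PandaReachDense-v3
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="sosuneko/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",  # assumed file name
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
for _ in range(100):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```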
|
KnutJaegersberg/RWKV-4-PilePlus-3B-20230520-3147-520Gtokens-ctx4096
|
KnutJaegersberg
| 2023-09-05T11:11:35Z | 133 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-09-05T09:48:55Z |
---
license: apache-2.0
---
This is a standard conversion to the Hugging Face transformers format of the models from here:
https://huggingface.co/BlinkDL/rwkv-4-pileplus
According to the documentation I found, this model should have seen roughly 0.8 trillion tokens!
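A loading sketch (an assumption that the converted repo ships a config and tokenizer alongside the weights; adjust if it does not):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "KnutJaegersberg/RWKV-4-PilePlus-3B-20230520-3147-520Gtokens-ctx4096"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("The Pile is a large, diverse", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```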
|
mindadeepam/setfit-hinglish-baseline-2
|
mindadeepam
| 2023-09-05T10:47:44Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-09-05T10:47:16Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# mindadeepam/setfit-hinglish-baseline-2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("mindadeepam/setfit-hinglish-baseline-2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
alperengozeten/llama2-13b-turkish-action-item-finetuned
|
alperengozeten
| 2023-09-05T10:46:40Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T10:45:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
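Expressed as code, the settings above map onto `transformers`' `BitsAndBytesConfig` roughly as follows (a sketch; the base model name is an assumption taken from the repo name, since the card does not state it):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "alperengozeten/llama2-13b-turkish-action-item-finetuned")
```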
### Framework versions
- PEFT 0.5.0
|
Bytte/BytteCarRacing
|
Bytte
| 2023-09-05T10:46:34Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"CarRacing-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-05T10:40:59Z |
---
library_name: stable-baselines3
tags:
- CarRacing-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CarRacing-v2
type: CarRacing-v2
metrics:
- type: mean_reward
value: -58.87 +/- 2.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **CarRacing-v2**
This is a trained model of a **PPO** agent playing **CarRacing-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
sudhanshu753/finetuning-sentiment-model-3000-samples
|
sudhanshu753
| 2023-09-05T10:44:54Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T08:26:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.908
- name: F1
type: f1
value: 0.9078156312625251
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2748
- Accuracy: 0.908
- F1: 0.9078
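An inference sketch (not from the generated card) for scoring a review with this checkpoint:
```python
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis", model="sudhanshu753/finetuning-sentiment-model-3000-samples"
)
print(classifier("A surprisingly heartfelt film with a terrific cast."))
```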
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
mademuhas/qlora-cabrita
|
mademuhas
| 2023-09-05T10:42:27Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:tiiuae/falcon-7b",
"base_model:finetune:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2023-09-05T10:42:17Z |
---
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- generated_from_trainer
model-index:
- name: qlora-cabrita
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qlora-cabrita
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
mademuhas/falcon7b-joao-cabrita-qlora
|
mademuhas
| 2023-09-05T10:40:36Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T10:40:32Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
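In code, this corresponds roughly to loading the base model in 4-bit NF4 and attaching the adapter (a sketch; `tiiuae/falcon-7b` as the base is an assumption taken from the repo name):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float32,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",  # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "mademuhas/falcon7b-joao-cabrita-qlora")
```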
### Framework versions
- PEFT 0.5.0
|
nbogdan/flant5-large-2ex-elaboration-1epochs
|
nbogdan
| 2023-09-05T10:39:16Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-05T10:39:03Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-large-2ex-elaboration-1epochs` for google/flan-t5-large
An [adapter](https://adapterhub.ml) for the `google/flan-t5-large` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-large")
adapter_name = model.load_adapter("nbogdan/flant5-large-2ex-elaboration-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
softaken/excel-to-vCard-converter
|
softaken
| 2023-09-05T10:27:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-05T10:27:01Z |
Softaken Excel to vCard Converter is a safe solution for exporting Excel contacts to the VCF file format. No additional program has to be installed to convert Excel files to vCard layouts, and both technical and non-technical users can operate the tool. Before migration, users see a complete preview of the Excel files, and the original structure is preserved once the conversion begins. Users can select exactly which Excel file they want to convert into VCF format. The software supports all Excel contact fields, such as First Name, Middle Name, Last Name, Email Address, Street Address, Fax Number, Country Name, Zip Code, etc., and is compatible with all Windows OS versions, including Windows 11, 10, 8.1, 8, 7, Vista, and XP. Get the free demo version of the app to learn more about its features and functions.
Read More: https://www.softaken.com/excel-to-vcard-converter
|
clemmillet/lora-gptq-giec_5epochs
|
clemmillet
| 2023-09-05T10:24:05Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T10:24:00Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: gptq
- bits: 4
- tokenizer: None
- dataset: None
- group_size: 128
- damp_percent: 0.01
- desc_act: False
- sym: True
- true_sequential: True
- use_cuda_fp16: False
- model_seqlen: None
- block_name_to_quantize: None
- module_name_preceding_first_block: None
- batch_size: 1
- pad_token_id: None
- disable_exllama: True
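For reference, the same settings expressed with `transformers`' `GPTQConfig` (a sketch; the card does not name the quantized base model or tokenizer):
```python
from transformers import GPTQConfig

gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    damp_percent=0.01,
    desc_act=False,
    sym=True,
    true_sequential=True,
    use_cuda_fp16=False,
    disable_exllama=True,
)
```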
### Framework versions
- PEFT 0.5.0
|
JCAI2000/segformer-b0-finetuned-100by100PNG-50epochs
|
JCAI2000
| 2023-09-05T10:22:55Z | 186 | 0 |
transformers
|
[
"transformers",
"pytorch",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2023-09-05T09:36:45Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-100by100PNG-50epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-100by100PNG-50epochs
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the JCAI2000/100By100BranchPNG dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1488
- Mean Iou: 0.0
- Mean Accuracy: nan
- Overall Accuracy: nan
- Accuracy 0: nan
- Iou 0: 0.0
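An inference sketch (not from the generated card) for running this segmentation checkpoint on an image; the file path is illustrative:
```python
from PIL import Image
from transformers import pipeline

segmenter = pipeline(
    "image-segmentation", model="JCAI2000/segformer-b0-finetuned-100by100PNG-50epochs"
)
image = Image.open("branch_sample.png")  # illustrative path
for result in segmenter(image):
    print(result["label"], result["mask"].size)
```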
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy 0 | Iou 0 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------:|:-----:|
| 0.5785 | 2.63 | 50 | 0.4624 | 0.0 | nan | nan | nan | 0.0 |
| 0.3842 | 5.26 | 100 | 0.3092 | 0.0 | nan | nan | nan | 0.0 |
| 0.3916 | 7.89 | 150 | 0.2549 | 0.0 | nan | nan | nan | 0.0 |
| 0.2686 | 10.53 | 200 | 0.2442 | 0.0 | nan | nan | nan | 0.0 |
| 0.163 | 13.16 | 250 | 0.2046 | 0.0 | nan | nan | nan | 0.0 |
| 0.118 | 15.79 | 300 | 0.1911 | 0.0 | nan | nan | nan | 0.0 |
| 0.1432 | 18.42 | 350 | 0.1718 | 0.0 | nan | nan | nan | 0.0 |
| 0.1692 | 21.05 | 400 | 0.1877 | 0.0 | nan | nan | nan | 0.0 |
| 0.1444 | 23.68 | 450 | 0.1637 | 0.0 | nan | nan | nan | 0.0 |
| 0.084 | 26.32 | 500 | 0.1649 | 0.0 | nan | nan | nan | 0.0 |
| 0.1204 | 28.95 | 550 | 0.1601 | 0.0 | nan | nan | nan | 0.0 |
| 0.0816 | 31.58 | 600 | 0.1560 | 0.0 | nan | nan | nan | 0.0 |
| 0.1041 | 34.21 | 650 | 0.1618 | 0.0 | nan | nan | nan | 0.0 |
| 0.0906 | 36.84 | 700 | 0.1565 | 0.0 | nan | nan | nan | 0.0 |
| 0.0644 | 39.47 | 750 | 0.1500 | 0.0 | nan | nan | nan | 0.0 |
| 0.0774 | 42.11 | 800 | 0.1552 | 0.0 | nan | nan | nan | 0.0 |
| 0.0687 | 44.74 | 850 | 0.1504 | 0.0 | nan | nan | nan | 0.0 |
| 0.0931 | 47.37 | 900 | 0.1465 | 0.0 | nan | nan | nan | 0.0 |
| 0.0852 | 50.0 | 950 | 0.1488 | 0.0 | nan | nan | nan | 0.0 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nandakumar10/q-Taxi-v3
|
nandakumar10
| 2023-09-05T10:22:34Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-05T10:22:32Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="nandakumar10/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
alabolatu/llama2-lora-fine-tuning-test
|
alabolatu
| 2023-09-05T10:21:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T06:15:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.0.dev0
|
saravananstat/distilbert-base-uncased-finetuned-emotion
|
saravananstat
| 2023-09-05T10:19:03Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-05T09:06:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9250772461707536
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2150
- Accuracy: 0.925
- F1: 0.9251
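An inference sketch (not from the generated card) that returns scores for all emotion labels:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="saravananstat/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for every label
)
print(classifier("I can't wait to see you again!"))
```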
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8031 | 1.0 | 250 | 0.3104 | 0.9055 | 0.9029 |
| 0.2494 | 2.0 | 500 | 0.2150 | 0.925 | 0.9251 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
znsoft/openbuddy_70B_GPTQ
|
znsoft
| 2023-09-05T10:17:40Z | 0 | 1 | null |
[
"zh",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-09-05T10:05:27Z |
---
license: apache-2.0
language:
- zh
- en
---
This is the GPTQ-quantized version of OpenBuddy/openbuddy-llama2-70b-v10.1-bf16.
|
brunoqgalvao/whisper-small-pt-br
|
brunoqgalvao
| 2023-09-05T10:12:13Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-05T01:54:52Z |
---
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper Small Pt - brunoqgalvao
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Pt - brunoqgalvao
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
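An inference sketch (not from the generated card) for transcribing Portuguese audio with this checkpoint; the file path is illustrative:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="brunoqgalvao/whisper-small-pt-br")
print(asr("sample_pt.wav")["text"])  # path to a local audio file
```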
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
souljoy/sd-pokemon-model-lora-zh
|
souljoy
| 2023-09-05T10:11:08Z | 7 | 5 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"lora",
"zh",
"dataset:svjack/pokemon-blip-captions-en-zh",
"license:apache-2.0",
"region:us"
] | null | 2023-03-23T08:49:31Z |
---
license: apache-2.0
datasets:
- svjack/pokemon-blip-captions-en-zh
language:
- zh
library_name: diffusers
tags:
- stable-diffusion
- lora
---
# Model Card for souljoy/sd-pokemon-model-lora-zh
## Installation
```bash
pip install -U diffusers
pip install transformers
```
## Usage
```python
from diffusers import StableDiffusionPipeline
import torch
pipe = StableDiffusionPipeline.from_pretrained("IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Anime-Chinese-v0.1", torch_dtype=torch.float16)
model_path = "souljoy/sd-pokemon-model-lora-zh"
pipe.unet.load_attn_procs(model_path)
pipe.to("cuda")
prompt = "粉色的蝴蝶"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image
prompt = "可爱的狗"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image
prompt = "漂亮的猫"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image
```



|
iprojas/ToD
|
iprojas
| 2023-09-05T10:10:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-09-01T05:15:28Z |
```bash
pip install jupyter-archive
wget -P workspace/stable-diffusion-webui/models/Stable-diffusion https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0_0.9vae.safetensors
wget -P workspace/stable-diffusion-webui/models/Lora https://huggingface.co/iprojas/ToD/resolve/main/P1980.safetensors
wget -P workspace/stable-diffusion-webui/models/Lora https://huggingface.co/iprojas/ToD/resolve/main/P1980-000009.safetensors
wget -P workspace/stable-diffusion-webui/models/Lora https://huggingface.co/iprojas/ToD/resolve/main/P1980-000008.safetensors
wget -P workspace/stable-diffusion-webui/models/Lora https://huggingface.co/iprojas/ToD/resolve/main/P1980-000007.safetensors
wget -P workspace/stable-diffusion-webui/models/Lora https://huggingface.co/iprojas/ToD/resolve/main/P1980-000006.safetensors
wget -P workspace/stable-diffusion-webui/models/Lora https://huggingface.co/iprojas/ToD/resolve/main/P1940.safetensors
```
|
H-amza/a2c-PandaReachDense-v3
|
H-amza
| 2023-09-05T10:06:09Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-05T10:00:49Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.45 +/- 0.76
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Assumed filename; adjust it to the .zip actually stored in this repo.
checkpoint = load_from_hub(repo_id="H-amza/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
Jung/ppo-LunarLander-v2
|
Jung
| 2023-09-05T10:02:46Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-03-06T04:57:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 67.84 +/- 119.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed filename; adjust it to the .zip actually stored in this repo.
checkpoint = load_from_hub(repo_id="Jung/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
prasanthntu/ppo-LunarLander-v2
|
prasanthntu
| 2023-09-05T10:00:46Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-12T05:16:24Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.58 +/- 24.48
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed filename; adjust it to the .zip actually stored in this repo.
checkpoint = load_from_hub(repo_id="prasanthntu/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
KnutJaegersberg/RWKV-4-PilePlus-430M-20230520-6162-1018Gtokens-ctx4098
|
KnutJaegersberg
| 2023-09-05T09:59:27Z | 138 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-09-05T08:54:51Z |
---
license: apache-2.0
---
This is a standard conversion to the Hugging Face Transformers format of the models found here:
https://huggingface.co/BlinkDL/rwkv-4-pileplus
According to the documentation I found, this model should have seen roughly 1.3 trillion tokens.
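A short usage sketch, assuming the converted checkpoint loads through the standard `transformers` auto classes:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumes the repo ships both model and tokenizer files in HF format.
model_id = "KnutJaegersberg/RWKV-4-PilePlus-430M-20230520-6162-1018Gtokens-ctx4098"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The Pile is a large, diverse", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```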
|
PraveenJesu/whisper-medium-176-peft-V1-drug_random_8_batch8
|
PraveenJesu
| 2023-09-05T09:58:07Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T09:43:25Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
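For reference, the settings listed above map onto a `BitsAndBytesConfig` roughly like this (a sketch, not taken from the actual training code):
```python
from transformers import BitsAndBytesConfig

# 8-bit loading with the default int8 threshold, matching the list above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
```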
### Framework versions
- PEFT 0.6.0.dev0
|
nightdude/config_811357
|
nightdude
| 2023-09-05T09:49:06Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T09:45:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
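For reference, an equivalent `BitsAndBytesConfig` would look roughly like this (a sketch, not taken from the actual training code):
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 with double quantization and bfloat16 compute, matching the list above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```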
### Framework versions
- PEFT 0.5.0.dev0
|
bigmorning/whisper_input_decoder_shift_r_labels_with_force__0080
|
bigmorning
| 2023-09-05T09:41:08Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-05T09:41:01Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_with_force__0080
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_with_force__0080
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0012
- Train Accuracy: 0.0340
- Train Wermet: 29.4939
- Validation Loss: 0.8115
- Validation Accuracy: 0.0210
- Validation Wermet: 43.5361
- Epoch: 79
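A short TensorFlow inference sketch (not part of the generated card); the processor is taken from the base checkpoint and the audio array is a silent placeholder:
```python
import numpy as np
from transformers import WhisperProcessor, TFWhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = TFWhisperForConditionalGeneration.from_pretrained(
    "bigmorning/whisper_input_decoder_shift_r_labels_with_force__0080"
)

audio = np.zeros(16000, dtype=np.float32)  # one second of silence standing in for real 16 kHz speech
inputs = processor(audio, sampling_rate=16000, return_tensors="tf")
generated_ids = model.generate(input_features=inputs.input_features, max_new_tokens=40)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```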
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6249 | 0.0091 | 1.7162 | 4.2965 | 0.0094 | 0.9447 | 0 |
| 4.9223 | 0.0099 | 0.9041 | 4.1562 | 0.0097 | 0.9327 | 1 |
| 4.6814 | 0.0107 | 0.8376 | 3.9245 | 0.0103 | 0.8927 | 2 |
| 4.4407 | 0.0114 | 0.8311 | 3.7252 | 0.0107 | 0.8775 | 3 |
| 4.2445 | 0.0119 | 0.8228 | 3.6283 | 0.0108 | 0.8695 | 4 |
| 4.0889 | 0.0123 | 0.8067 | 3.5310 | 0.0110 | 0.8916 | 5 |
| 3.9575 | 0.0127 | 0.7908 | 3.4478 | 0.0113 | 0.8407 | 6 |
| 3.8547 | 0.0130 | 0.7781 | 3.4227 | 0.0113 | 0.8670 | 7 |
| 3.7599 | 0.0133 | 0.7654 | 3.3519 | 0.0115 | 0.8375 | 8 |
| 3.6763 | 0.0136 | 0.7543 | 3.3183 | 0.0116 | 0.8678 | 9 |
| 3.6006 | 0.0138 | 0.7421 | 3.2581 | 0.0117 | 0.8120 | 10 |
| 3.5300 | 0.0140 | 0.7296 | 3.2415 | 0.0118 | 0.8257 | 11 |
| 3.4554 | 0.0143 | 0.7179 | 3.2163 | 0.0119 | 0.8078 | 12 |
| 3.3930 | 0.0145 | 0.7057 | 3.1612 | 0.0121 | 0.7758 | 13 |
| 3.3218 | 0.0148 | 0.6946 | 3.1357 | 0.0122 | 0.7760 | 14 |
| 3.2424 | 0.0151 | 0.6806 | 3.0812 | 0.0123 | 0.7639 | 15 |
| 3.1577 | 0.0155 | 0.6633 | 3.0193 | 0.0126 | 0.7428 | 16 |
| 3.0655 | 0.0159 | 0.6454 | 2.9643 | 0.0128 | 0.7423 | 17 |
| 2.9579 | 0.0164 | 0.6271 | 2.8510 | 0.0132 | 0.7103 | 18 |
| 2.8149 | 0.0170 | 0.6022 | 2.7020 | 0.0136 | 0.6811 | 19 |
| 2.6475 | 0.0178 | 0.5775 | 2.5406 | 0.0142 | 0.6495 | 20 |
| 2.4340 | 0.0189 | 0.5451 | 2.3364 | 0.0149 | 0.6166 | 21 |
| 2.2002 | 0.0200 | 0.5065 | 2.1300 | 0.0155 | 0.5766 | 22 |
| 1.9511 | 0.0213 | 0.4658 | 1.9335 | 0.0162 | 0.5419 | 23 |
| 1.6777 | 0.0228 | 0.4184 | 1.7327 | 0.0169 | 0.5071 | 24 |
| 1.4282 | 0.0243 | 0.3754 | 1.5461 | 0.0176 | 0.4669 | 25 |
| 1.2219 | 0.0255 | 0.3365 | 1.4027 | 0.0181 | 0.4326 | 26 |
| 1.0535 | 0.0265 | 0.3016 | 1.2979 | 0.0185 | 0.4134 | 27 |
| 0.9205 | 0.0274 | 0.2731 | 1.1891 | 0.0189 | 0.3843 | 28 |
| 0.8079 | 0.0281 | 0.2453 | 1.1135 | 0.0192 | 0.3659 | 29 |
| 0.7140 | 0.0288 | 0.2218 | 1.0532 | 0.0195 | 0.3495 | 30 |
| 0.6318 | 0.0293 | 0.1975 | 0.9976 | 0.0197 | 0.3351 | 31 |
| 0.5623 | 0.0298 | 0.1770 | 0.9571 | 0.0199 | 0.3256 | 32 |
| 0.4990 | 0.0303 | 0.1582 | 0.9184 | 0.0200 | 0.3147 | 33 |
| 0.4444 | 0.0307 | 0.1424 | 0.8865 | 0.0202 | 0.3062 | 34 |
| 0.3949 | 0.0311 | 0.1260 | 0.8532 | 0.0203 | 0.2968 | 35 |
| 0.3505 | 0.0314 | 0.1118 | 0.8333 | 0.0204 | 0.2898 | 36 |
| 0.3104 | 0.0317 | 0.0988 | 0.8245 | 0.0204 | 0.2881 | 37 |
| 0.2743 | 0.0321 | 0.0886 | 0.8014 | 0.0205 | 0.2825 | 38 |
| 0.2428 | 0.0323 | 0.0842 | 0.7944 | 0.0206 | 0.2794 | 39 |
| 0.2120 | 0.0326 | 0.0880 | 0.7742 | 0.0206 | 0.2762 | 40 |
| 0.1863 | 0.0328 | 0.1289 | 0.7744 | 0.0206 | 0.2863 | 41 |
| 0.1621 | 0.0330 | 0.1792 | 0.7683 | 0.0207 | 0.2873 | 42 |
| 0.1390 | 0.0332 | 0.1918 | 0.7664 | 0.0207 | 0.4006 | 43 |
| 0.1194 | 0.0334 | 0.3137 | 0.7596 | 0.0207 | 0.5479 | 44 |
| 0.1022 | 0.0335 | 0.5546 | 0.7607 | 0.0208 | 0.8384 | 45 |
| 0.0880 | 0.0337 | 0.9275 | 0.7595 | 0.0208 | 0.8106 | 46 |
| 0.0740 | 0.0338 | 1.7784 | 0.7555 | 0.0208 | 0.9209 | 47 |
| 0.0622 | 0.0338 | 2.6518 | 0.7572 | 0.0208 | 2.2106 | 48 |
| 0.0528 | 0.0339 | 2.2627 | 0.7565 | 0.0208 | 1.4870 | 49 |
| 0.0453 | 0.0339 | 4.0945 | 0.7590 | 0.0208 | 3.1276 | 50 |
| 0.0407 | 0.0339 | 6.6959 | 0.7542 | 0.0208 | 6.1620 | 51 |
| 0.0327 | 0.0340 | 7.6116 | 0.7558 | 0.0208 | 8.6756 | 52 |
| 0.0264 | 0.0340 | 11.0921 | 0.7526 | 0.0209 | 7.4669 | 53 |
| 0.0222 | 0.0340 | 9.9266 | 0.7573 | 0.0209 | 7.1746 | 54 |
| 0.0189 | 0.0340 | 10.5104 | 0.7622 | 0.0209 | 8.7817 | 55 |
| 0.0159 | 0.0340 | 11.4594 | 0.7671 | 0.0209 | 13.3827 | 56 |
| 0.0134 | 0.0340 | 12.9412 | 0.7711 | 0.0209 | 15.0106 | 57 |
| 0.0115 | 0.0340 | 14.8090 | 0.7737 | 0.0209 | 13.2722 | 58 |
| 0.0099 | 0.0340 | 15.5619 | 0.7767 | 0.0209 | 15.6065 | 59 |
| 0.0086 | 0.0340 | 16.8891 | 0.7814 | 0.0209 | 14.4270 | 60 |
| 0.0074 | 0.0340 | 19.8526 | 0.7818 | 0.0209 | 23.6084 | 61 |
| 0.0224 | 0.0339 | 25.5730 | 0.7659 | 0.0209 | 31.2366 | 62 |
| 0.0171 | 0.0340 | 20.4968 | 0.7611 | 0.0210 | 13.3031 | 63 |
| 0.0099 | 0.0340 | 13.3312 | 0.7636 | 0.0210 | 8.2061 | 64 |
| 0.0068 | 0.0340 | 11.3060 | 0.7556 | 0.0210 | 11.1380 | 65 |
| 0.0053 | 0.0340 | 13.2403 | 0.7613 | 0.0210 | 8.9220 | 66 |
| 0.0045 | 0.0340 | 13.2371 | 0.7722 | 0.0210 | 14.7377 | 67 |
| 0.0040 | 0.0340 | 15.3554 | 0.7668 | 0.0210 | 15.6763 | 68 |
| 0.0035 | 0.0340 | 16.6461 | 0.7735 | 0.0210 | 16.8715 | 69 |
| 0.0032 | 0.0340 | 18.7469 | 0.7757 | 0.0210 | 18.4540 | 70 |
| 0.0029 | 0.0340 | 19.8120 | 0.7846 | 0.0210 | 22.5624 | 71 |
| 0.0026 | 0.0340 | 21.6212 | 0.7821 | 0.0210 | 21.7265 | 72 |
| 0.0023 | 0.0340 | 22.7740 | 0.7886 | 0.0210 | 26.6152 | 73 |
| 0.0021 | 0.0340 | 25.1048 | 0.7907 | 0.0210 | 28.2909 | 74 |
| 0.0019 | 0.0340 | 24.5815 | 0.7949 | 0.0210 | 38.3868 | 75 |
| 0.0017 | 0.0340 | 27.4132 | 0.7962 | 0.0210 | 31.9349 | 76 |
| 0.0015 | 0.0340 | 27.7907 | 0.8016 | 0.0210 | 41.0701 | 77 |
| 0.0013 | 0.0340 | 28.5343 | 0.8074 | 0.0210 | 34.6894 | 78 |
| 0.0012 | 0.0340 | 29.4939 | 0.8115 | 0.0210 | 43.5361 | 79 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
emotibot-inc/Zhuzhi-6B
|
emotibot-inc
| 2023-09-05T09:34:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"chatglm",
"feature-extraction",
"custom_code",
"region:us"
] |
feature-extraction
| 2023-08-28T07:23:51Z |
# README
# Zhuzhi-6B
[Hugging Face](https://huggingface.co/emotibot-inc/Zhuzhi-6B) | [GitHub](https://github.com/emotibot-inc/Zhuzhi-6B) | [Model Scope](https://modelscope.cn/models/emotibotinc/Zhuzhi-6B/summary) | [Emotibrain](https://brain.emotibot.com/?source=zhuzhi6b_huggingface)
# **Model Introduction**
Zhuzhi is a Chinese-English bilingual dialogue model fine-tuned by Emotibot on Chinese and English conversation data. Training largely follows the official multi-turn dialogue data format, and the model offers many desirable properties such as fluent dialogue and a low deployment barrier:
- Longer context: trained with an 8K context length during the dialogue stage.
- More efficient inference: under INT4 quantization, the dialogue length supported by 6 GB of GPU memory grows from 1K to 8K, and inference speed also improves noticeably.
# Model **benchmark**
## **Chinese Evaluation** - **CMMLU**
### Result
| Model 5-shot | STEM | Humanities | Social Science | Other | China-specific | Average |
| --- | --- | --- | --- | --- | --- | --- |
| Multilingual-oriented | | | | | | |
| [GPT4](https://openai.com/gpt4) | 65.23 | 72.11 | 72.06 | 74.79 | 66.12 | 70.95 |
| [ChatGPT](https://openai.com/chatgpt) | 47.81 | 55.68 | 56.50 | 62.66 | 50.69 | 55.51 |
| [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) | 33.33 | 43.46 | 44.28 | 44.75 | 39.46 | 41.45 |
| [LLaMA-65B](https://github.com/facebookresearch/llama) | 34.47 | 40.24 | 41.55 | 42.88 | 37.00 | 39.80 |
| [BLOOMZ-7B](https://github.com/bigscience-workshop/xmtf) | 30.56 | 39.10 | 38.59 | 40.32 | 37.15 | 37.04 |
| [Bactrian-LLaMA-13B](https://github.com/mbzuai-nlp/bactrian-x) | 27.52 | 32.47 | 32.27 | 35.77 | 31.56 | 31.88 |
| Chinese-oriented | | | | | | |
| [Zhuzhi-6B](https://github.com/emotibot-inc/Zhuzhi-6B) | 40.30 | 48.08 | 46.72 | 47.41 | 45.51 | 45.60 |
| [Zhuhai-13B](https://github.com/emotibot-inc/Zhuhai-13B) | 42.39 | 61.57 | 60.48 | 58.57 | 55.68 | 55.74 |
| [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) | 42.38 | 61.61 | 60.44 | 59.26 | 56.62 | 55.82 |
| [ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b) | 42.55 | 50.98 | 50.99 | 50.80 | 48.37 | 48.80 |
| [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) | 35.25 | 48.07 | 47.88 | 46.61 | 44.14 | 44.43 |
| [ChatGLM-6B](https://github.com/THUDM/GLM-130B) | 32.35 | 39.22 | 39.65 | 38.62 | 37.70 | 37.48 |
| [BatGPT-15B](https://github.com/haonan-li/CMMLU/blob/master) | 34.96 | 35.45 | 36.31 | 42.14 | 37.89 | 37.16 |
| [Chinese-LLaMA-13B](https://github.com/ymcui/Chinese-LLaMA-Alpaca) | 27.12 | 33.18 | 34.87 | 35.10 | 32.97 | 32.63 |
| [MOSS-SFT-16B](https://github.com/OpenLMLab/MOSS) | 27.23 | 30.41 | 28.84 | 32.56 | 28.68 | 29.57 |
| [Chinese-GLM-10B](https://github.com/THUDM/GLM) | 25.49 | 27.05 | 27.42 | 29.21 | 28.05 | 27.26 |
| Random | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 |
| Model 0-shot | STEM | Humanities | Social Science | Other | China-specific | Average |
| --- | --- | --- | --- | --- | --- | --- |
| Multilingual-oriented | | | | | | |
| [GPT4](https://openai.com/gpt4) | 63.16 | 69.19 | 70.26 | 73.16 | 63.47 | 68.9 |
| [ChatGPT](https://openai.com/chatgpt) | 44.8 | 53.61 | 54.22 | 59.95 | 49.74 | 53.22 |
| [BLOOMZ-7B](https://github.com/bigscience-workshop/xmtf) | 33.03 | 45.74 | 45.74 | 46.25 | 41.58 | 42.8 |
| [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) | 31.11 | 41.3 | 40.87 | 40.61 | 36.05 | 38.5 |
| [LLaMA-65B](https://github.com/facebookresearch/llama) | 31.09 | 34.45 | 36.05 | 37.94 | 32.89 | 34.88 |
| [Bactrian-LLaMA-13B](https://github.com/mbzuai-nlp/bactrian-x) | 26.46 | 29.36 | 31.81 | 31.55 | 29.17 | 30.06 |
| Chinese-oriented | | | | | | |
| [Zhuzhi-6B](https://github.com/emotibot-inc/Zhuzhi-6B) | 42.51 | 48.91 | 48.85 | 50.25 | 47.57 | 47.62 |
| [Zhuhai-13B](https://github.com/emotibot-inc/Zhuhai-13B) | 42.37 | 60.97 | 59.71 | 56.35 | 54.81 | 54.84 |
| [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) | 42.04 | 60.49 | 59.55 | 56.6 | 55.72 | 54.63 |
| [ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b) | 41.28 | 52.85 | 53.37 | 52.24 | 50.58 | 49.95 |
| [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) | 32.79 | 44.43 | 46.78 | 44.79 | 43.11 | 42.33 |
| [ChatGLM-6B](https://github.com/THUDM/GLM-130B) | 32.22 | 42.91 | 44.81 | 42.6 | 41.93 | 40.79 |
| [BatGPT-15B](https://github.com/haonan-li/CMMLU/blob/master) | 33.72 | 36.53 | 38.07 | 46.94 | 38.32 | 38.51 |
| [Chinese-LLaMA-13B](https://github.com/ymcui/Chinese-LLaMA-Alpaca) | 26.76 | 26.57 | 27.42 | 28.33 | 26.73 | 27.34 |
| [MOSS-SFT-16B](https://github.com/OpenLMLab/MOSS) | 25.68 | 26.35 | 27.21 | 27.92 | 26.7 | 26.88 |
| [Chinese-GLM-10B](https://github.com/THUDM/GLM) | 25.57 | 25.01 | 26.33 | 25.94 | 25.81 | 25.8 |
| Random | 25 | 25 | 25 | 25 | 25 | 25 |
# **Inference and Dialogue**
You can register and log in to [Emotibrain](https://brain.emotibot.com/?source=zhuzhi6b_huggingface), the large-model product released by Emotibot, and select **CoPilot** (**KKBot**) for online testing; it is available immediately after registration.

# **Model Training**
You can register and log in to [Emotibrain](https://brain.emotibot.com/?source=zhuzhi6b_huggingface), the large-model product released by Emotibot, and select Fine-tune for **zero-code fine-tuning**; it is available immediately after registration.
For the detailed training workflow, see this document: [Emotibrain Quick Start](https://brain.emotibot.com/supports/model-factory/dash-into.html) (about 5 minutes).


# **More Information**
To learn more about the large-model training platform, please visit the [Emotibrain website](https://brain.emotibot.com/?source=zhuzhi6b_huggingface).
|
Dukul/llama2-finetunined-on-text-to-sql-3000
|
Dukul
| 2023-09-05T09:32:25Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T09:32:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
nbogdan/flant5-large-2ex-paraphrasing-1epochs
|
nbogdan
| 2023-09-05T09:31:58Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-05T09:31:44Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-large-2ex-paraphrasing-1epochs` for google/flan-t5-large
An [adapter](https://adapterhub.ml) for the `google/flan-t5-large` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-large")
adapter_name = model.load_adapter("nbogdan/flant5-large-2ex-paraphrasing-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
zake7749/chinese-lyrics-generation-mass
|
zake7749
| 2023-09-05T09:30:38Z | 195 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"zh",
"arxiv:1905.02450",
"license:bigscience-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-16T11:22:21Z |
---
license: bigscience-openrail-m
language:
- zh
pipeline_tag: text2text-generation
thumbnail: Chinese Lyrics Generation with Masked Sequence-to-Sequence Pretraining.
---
# Chinese Lyrics Generation with Masked Sequence-to-Sequence Pretraining
This repository demonstrates a format-controllable Chinese lyric generator, fine-tuned on [Chinese-Lyric-Corpus](https://github.com/gaussic/Chinese-Lyric-Corpus) using a [MASS](https://arxiv.org/abs/1905.02450)-like strategy.
# Usage
## Initialization
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer, Text2TextGenerationPipeline
model_path = "zake7749/chinese-lyrics-generation-mass"
model = MT5ForConditionalGeneration.from_pretrained(model_path)
tokenizer = MT5Tokenizer.from_pretrained(model_path)
pipe = Text2TextGenerationPipeline(model=model, tokenizer=tokenizer)
```
## Generate lyrics with a template
```python
template = "風花雪月。像XXXXXXXXXX。日升月落。仿若XXXXXXXXXX。"
lyric = pipe(template, max_length=128, top_p=0.8, do_sample=True, repetition_penalty=1.2)[0]['generated_text']
print(lyric) # 風花雪月。像你在我的夢裡慢慢散落。日升月落。仿若我宿命無法陪隨你走過。
template = "XXXXXXX留戀。XXXXXXX。XXX燈火XXXX。XXX手牽手XXXX。"
lyric = pipe(template, max_length=128, top_p=0.8, do_sample=True, repetition_penalty=1.2)[0]['generated_text']
print(lyric) # 我們說好一生不留戀。我們相約在夏天。我們的燈火相偎相牽。我們說好手牽手到永遠。
```
## Acrostic
```python
template = "分XXXXXX。手XXXXXXXXX。之XXXXXXX。後XXXXXXXXX。"
lyric = pipe(template, max_length=128, top_p=0.8, do_sample=True, repetition_penalty=1.2)[0]['generated_text']
print(lyric) # 分開後激情浮現。手牽著手走過的那一天。之間有太多的危險。後悔一點點,傷心一片。
```
## Completion
```python
template = "餘生的光陰牽你手前行。我們共赴一場光年的旅行。XXXXXXXXXX。XXXXXXXXXXXX。"
lyric = pipe(template, max_length=128, top_p=0.8, do_sample=True, repetition_penalty=1.2)[0]['generated_text']
print(lyric) # 餘生的光陰牽你手前行。我們共赴一場光年的旅行。走過的經歷新舊的記憶。都是帶著珍珠淚水無法代替。
```
## Random Generation
```python
import random
num_example = 5
min_sentence_num, max_sentence_num = 2, 5
min_character_num, max_character_num = 4, 10
for example_id in range(num_example):
num_sentences = random.randint(min_sentence_num, max_sentence_num)
    num_words = ["X" * random.randint(min_character_num, max_character_num)
for _ in range(num_sentences)]
template = "。".join(num_words) + "。"
lyric = pipe(template, max_length=128, top_p=0.8, do_sample=True, repetition_penalty=1.2)[0]['generated_text']
print(f"{example_id + 1}. {lyric}")
# 1. 愛不愛我。讓自己難過。你的擁抱是那麼多。
# 2. 那一天我們重相見。你已站在那個熟悉的街邊。讓我魂牽夢繞在肩。有你的明天。不再留戀。飛過天邊。
# 3. 誰知我們入骨的相思。深深地被俘虜。苦澀滋味含在茶中傾訴。餘情未了落幕。愛到痛處奢望幸福。
# 4. 為什麼你一直讓我傷心。總覺得對你太著迷。
# 5. 一點可憐。還在期待你會出現。哪怕只是匆匆一眼。
```
# Note
1. The model is still under training, so it sometimes does not follow the template exactly, especially for long-sequence generation.
2. The model may output `,` as a pause within a line, for example `我的愛,像潮水。`. If you don't need the pause, you can add the token id of `,` to `bad_words_ids`.
3. The model was fine-tuned only on a Traditional Chinese corpus, which leads to somewhat unstable performance on Simplified Chinese.
4. When there are no or few keywords in the given input, the model may **combine snippets from real-world songs** to fit the template.
# Disclaimer
This lyric generator is for academic purposes only. Users of this model should exercise caution and carefully evaluate the results before using them for any commercial or non-academic purpose. We are not liable for any damages or losses resulting from the use or misuse of the model.
|
Bazaar/cv_solar_panel_defect_detection
|
Bazaar
| 2023-09-05T09:26:04Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-05T09:17:54Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: cv_solar_panel_defect_detection
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8091602921485901
---
# cv_solar_panel_defect_detection
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
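A minimal inference sketch (not generated by HuggingPics); the image path is a placeholder:
```python
from transformers import pipeline

# Replace "solar_panel.jpg" with a path or URL to your own solar panel image.
classifier = pipeline("image-classification", model="Bazaar/cv_solar_panel_defect_detection")
print(classifier("solar_panel.jpg"))
```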
## Example Images
#### medium defect

#### minor defect

#### no defect

#### serious defect

|
terjenf/Reinforce-CartPole-v1
|
terjenf
| 2023-09-05T09:15:32Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-05T09:15:23Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
bigmorning/whisper_input_decoder_shift_r_labels_with_force__0065
|
bigmorning
| 2023-09-05T09:01:23Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-05T09:01:15Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_with_force__0065
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_with_force__0065
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0099
- Train Accuracy: 0.0340
- Train Wermet: 13.3312
- Validation Loss: 0.7636
- Validation Accuracy: 0.0210
- Validation Wermet: 8.2061
- Epoch: 64
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6249 | 0.0091 | 1.7162 | 4.2965 | 0.0094 | 0.9447 | 0 |
| 4.9223 | 0.0099 | 0.9041 | 4.1562 | 0.0097 | 0.9327 | 1 |
| 4.6814 | 0.0107 | 0.8376 | 3.9245 | 0.0103 | 0.8927 | 2 |
| 4.4407 | 0.0114 | 0.8311 | 3.7252 | 0.0107 | 0.8775 | 3 |
| 4.2445 | 0.0119 | 0.8228 | 3.6283 | 0.0108 | 0.8695 | 4 |
| 4.0889 | 0.0123 | 0.8067 | 3.5310 | 0.0110 | 0.8916 | 5 |
| 3.9575 | 0.0127 | 0.7908 | 3.4478 | 0.0113 | 0.8407 | 6 |
| 3.8547 | 0.0130 | 0.7781 | 3.4227 | 0.0113 | 0.8670 | 7 |
| 3.7599 | 0.0133 | 0.7654 | 3.3519 | 0.0115 | 0.8375 | 8 |
| 3.6763 | 0.0136 | 0.7543 | 3.3183 | 0.0116 | 0.8678 | 9 |
| 3.6006 | 0.0138 | 0.7421 | 3.2581 | 0.0117 | 0.8120 | 10 |
| 3.5300 | 0.0140 | 0.7296 | 3.2415 | 0.0118 | 0.8257 | 11 |
| 3.4554 | 0.0143 | 0.7179 | 3.2163 | 0.0119 | 0.8078 | 12 |
| 3.3930 | 0.0145 | 0.7057 | 3.1612 | 0.0121 | 0.7758 | 13 |
| 3.3218 | 0.0148 | 0.6946 | 3.1357 | 0.0122 | 0.7760 | 14 |
| 3.2424 | 0.0151 | 0.6806 | 3.0812 | 0.0123 | 0.7639 | 15 |
| 3.1577 | 0.0155 | 0.6633 | 3.0193 | 0.0126 | 0.7428 | 16 |
| 3.0655 | 0.0159 | 0.6454 | 2.9643 | 0.0128 | 0.7423 | 17 |
| 2.9579 | 0.0164 | 0.6271 | 2.8510 | 0.0132 | 0.7103 | 18 |
| 2.8149 | 0.0170 | 0.6022 | 2.7020 | 0.0136 | 0.6811 | 19 |
| 2.6475 | 0.0178 | 0.5775 | 2.5406 | 0.0142 | 0.6495 | 20 |
| 2.4340 | 0.0189 | 0.5451 | 2.3364 | 0.0149 | 0.6166 | 21 |
| 2.2002 | 0.0200 | 0.5065 | 2.1300 | 0.0155 | 0.5766 | 22 |
| 1.9511 | 0.0213 | 0.4658 | 1.9335 | 0.0162 | 0.5419 | 23 |
| 1.6777 | 0.0228 | 0.4184 | 1.7327 | 0.0169 | 0.5071 | 24 |
| 1.4282 | 0.0243 | 0.3754 | 1.5461 | 0.0176 | 0.4669 | 25 |
| 1.2219 | 0.0255 | 0.3365 | 1.4027 | 0.0181 | 0.4326 | 26 |
| 1.0535 | 0.0265 | 0.3016 | 1.2979 | 0.0185 | 0.4134 | 27 |
| 0.9205 | 0.0274 | 0.2731 | 1.1891 | 0.0189 | 0.3843 | 28 |
| 0.8079 | 0.0281 | 0.2453 | 1.1135 | 0.0192 | 0.3659 | 29 |
| 0.7140 | 0.0288 | 0.2218 | 1.0532 | 0.0195 | 0.3495 | 30 |
| 0.6318 | 0.0293 | 0.1975 | 0.9976 | 0.0197 | 0.3351 | 31 |
| 0.5623 | 0.0298 | 0.1770 | 0.9571 | 0.0199 | 0.3256 | 32 |
| 0.4990 | 0.0303 | 0.1582 | 0.9184 | 0.0200 | 0.3147 | 33 |
| 0.4444 | 0.0307 | 0.1424 | 0.8865 | 0.0202 | 0.3062 | 34 |
| 0.3949 | 0.0311 | 0.1260 | 0.8532 | 0.0203 | 0.2968 | 35 |
| 0.3505 | 0.0314 | 0.1118 | 0.8333 | 0.0204 | 0.2898 | 36 |
| 0.3104 | 0.0317 | 0.0988 | 0.8245 | 0.0204 | 0.2881 | 37 |
| 0.2743 | 0.0321 | 0.0886 | 0.8014 | 0.0205 | 0.2825 | 38 |
| 0.2428 | 0.0323 | 0.0842 | 0.7944 | 0.0206 | 0.2794 | 39 |
| 0.2120 | 0.0326 | 0.0880 | 0.7742 | 0.0206 | 0.2762 | 40 |
| 0.1863 | 0.0328 | 0.1289 | 0.7744 | 0.0206 | 0.2863 | 41 |
| 0.1621 | 0.0330 | 0.1792 | 0.7683 | 0.0207 | 0.2873 | 42 |
| 0.1390 | 0.0332 | 0.1918 | 0.7664 | 0.0207 | 0.4006 | 43 |
| 0.1194 | 0.0334 | 0.3137 | 0.7596 | 0.0207 | 0.5479 | 44 |
| 0.1022 | 0.0335 | 0.5546 | 0.7607 | 0.0208 | 0.8384 | 45 |
| 0.0880 | 0.0337 | 0.9275 | 0.7595 | 0.0208 | 0.8106 | 46 |
| 0.0740 | 0.0338 | 1.7784 | 0.7555 | 0.0208 | 0.9209 | 47 |
| 0.0622 | 0.0338 | 2.6518 | 0.7572 | 0.0208 | 2.2106 | 48 |
| 0.0528 | 0.0339 | 2.2627 | 0.7565 | 0.0208 | 1.4870 | 49 |
| 0.0453 | 0.0339 | 4.0945 | 0.7590 | 0.0208 | 3.1276 | 50 |
| 0.0407 | 0.0339 | 6.6959 | 0.7542 | 0.0208 | 6.1620 | 51 |
| 0.0327 | 0.0340 | 7.6116 | 0.7558 | 0.0208 | 8.6756 | 52 |
| 0.0264 | 0.0340 | 11.0921 | 0.7526 | 0.0209 | 7.4669 | 53 |
| 0.0222 | 0.0340 | 9.9266 | 0.7573 | 0.0209 | 7.1746 | 54 |
| 0.0189 | 0.0340 | 10.5104 | 0.7622 | 0.0209 | 8.7817 | 55 |
| 0.0159 | 0.0340 | 11.4594 | 0.7671 | 0.0209 | 13.3827 | 56 |
| 0.0134 | 0.0340 | 12.9412 | 0.7711 | 0.0209 | 15.0106 | 57 |
| 0.0115 | 0.0340 | 14.8090 | 0.7737 | 0.0209 | 13.2722 | 58 |
| 0.0099 | 0.0340 | 15.5619 | 0.7767 | 0.0209 | 15.6065 | 59 |
| 0.0086 | 0.0340 | 16.8891 | 0.7814 | 0.0209 | 14.4270 | 60 |
| 0.0074 | 0.0340 | 19.8526 | 0.7818 | 0.0209 | 23.6084 | 61 |
| 0.0224 | 0.0339 | 25.5730 | 0.7659 | 0.0209 | 31.2366 | 62 |
| 0.0171 | 0.0340 | 20.4968 | 0.7611 | 0.0210 | 13.3031 | 63 |
| 0.0099 | 0.0340 | 13.3312 | 0.7636 | 0.0210 | 8.2061 | 64 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
YiYiXu/pokeman_kandinsky_decoder_lora
|
YiYiXu
| 2023-09-05T08:43:25Z | 4 | 0 |
diffusers
|
[
"diffusers",
"kandinsky",
"text-to-image",
"lora",
"base_model:kandinsky-community/kandinsky-2-2-decoder",
"base_model:adapter:kandinsky-community/kandinsky-2-2-decoder",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-05T08:31:22Z |
---
license: creativeml-openrail-m
base_model: kandinsky-community/kandinsky-2-2-decoder
tags:
- kandinsky
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - YiYiXu/pokeman_kandinsky_decoder_lora
These are LoRA adaptation weights for kandinsky-community/kandinsky-2-2-decoder. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
MilosCuculovic/review-annotation
|
MilosCuculovic
| 2023-09-05T08:36:32Z | 31 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-07-31T13:30:29Z |
The review-annotation model performs NER on academic review comments, annotating them with four meaningful classes:
- location
- action
- modal
- trigger
This is a SciBERT model fine-tuned in a supervised setting on a human pre-annotated dataset of several thousand real-life peer-review comments.
This model is being discussed in the following conference paper: https://ieeexplore.ieee.org/abstract/document/10229657
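A minimal tagging sketch, assuming the repository includes the tokenizer and label mapping required by the standard pipeline:
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans.
tagger = pipeline(
    "token-classification",
    model="MilosCuculovic/review-annotation",
    aggregation_strategy="simple",
)
print(tagger("The related work section should be extended in the revised manuscript."))
```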
|
SundayNwovu/redo_reschedular2
|
SundayNwovu
| 2023-09-05T08:27:26Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-04T17:57:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
bigmorning/whisper_input_decoder_shift_r_labels_with_force__0050
|
bigmorning
| 2023-09-05T08:21:35Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-05T08:21:28Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_with_force__0050
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_with_force__0050
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0528
- Train Accuracy: 0.0339
- Train Wermet: 2.2627
- Validation Loss: 0.7565
- Validation Accuracy: 0.0208
- Validation Wermet: 1.4870
- Epoch: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6249 | 0.0091 | 1.7162 | 4.2965 | 0.0094 | 0.9447 | 0 |
| 4.9223 | 0.0099 | 0.9041 | 4.1562 | 0.0097 | 0.9327 | 1 |
| 4.6814 | 0.0107 | 0.8376 | 3.9245 | 0.0103 | 0.8927 | 2 |
| 4.4407 | 0.0114 | 0.8311 | 3.7252 | 0.0107 | 0.8775 | 3 |
| 4.2445 | 0.0119 | 0.8228 | 3.6283 | 0.0108 | 0.8695 | 4 |
| 4.0889 | 0.0123 | 0.8067 | 3.5310 | 0.0110 | 0.8916 | 5 |
| 3.9575 | 0.0127 | 0.7908 | 3.4478 | 0.0113 | 0.8407 | 6 |
| 3.8547 | 0.0130 | 0.7781 | 3.4227 | 0.0113 | 0.8670 | 7 |
| 3.7599 | 0.0133 | 0.7654 | 3.3519 | 0.0115 | 0.8375 | 8 |
| 3.6763 | 0.0136 | 0.7543 | 3.3183 | 0.0116 | 0.8678 | 9 |
| 3.6006 | 0.0138 | 0.7421 | 3.2581 | 0.0117 | 0.8120 | 10 |
| 3.5300 | 0.0140 | 0.7296 | 3.2415 | 0.0118 | 0.8257 | 11 |
| 3.4554 | 0.0143 | 0.7179 | 3.2163 | 0.0119 | 0.8078 | 12 |
| 3.3930 | 0.0145 | 0.7057 | 3.1612 | 0.0121 | 0.7758 | 13 |
| 3.3218 | 0.0148 | 0.6946 | 3.1357 | 0.0122 | 0.7760 | 14 |
| 3.2424 | 0.0151 | 0.6806 | 3.0812 | 0.0123 | 0.7639 | 15 |
| 3.1577 | 0.0155 | 0.6633 | 3.0193 | 0.0126 | 0.7428 | 16 |
| 3.0655 | 0.0159 | 0.6454 | 2.9643 | 0.0128 | 0.7423 | 17 |
| 2.9579 | 0.0164 | 0.6271 | 2.8510 | 0.0132 | 0.7103 | 18 |
| 2.8149 | 0.0170 | 0.6022 | 2.7020 | 0.0136 | 0.6811 | 19 |
| 2.6475 | 0.0178 | 0.5775 | 2.5406 | 0.0142 | 0.6495 | 20 |
| 2.4340 | 0.0189 | 0.5451 | 2.3364 | 0.0149 | 0.6166 | 21 |
| 2.2002 | 0.0200 | 0.5065 | 2.1300 | 0.0155 | 0.5766 | 22 |
| 1.9511 | 0.0213 | 0.4658 | 1.9335 | 0.0162 | 0.5419 | 23 |
| 1.6777 | 0.0228 | 0.4184 | 1.7327 | 0.0169 | 0.5071 | 24 |
| 1.4282 | 0.0243 | 0.3754 | 1.5461 | 0.0176 | 0.4669 | 25 |
| 1.2219 | 0.0255 | 0.3365 | 1.4027 | 0.0181 | 0.4326 | 26 |
| 1.0535 | 0.0265 | 0.3016 | 1.2979 | 0.0185 | 0.4134 | 27 |
| 0.9205 | 0.0274 | 0.2731 | 1.1891 | 0.0189 | 0.3843 | 28 |
| 0.8079 | 0.0281 | 0.2453 | 1.1135 | 0.0192 | 0.3659 | 29 |
| 0.7140 | 0.0288 | 0.2218 | 1.0532 | 0.0195 | 0.3495 | 30 |
| 0.6318 | 0.0293 | 0.1975 | 0.9976 | 0.0197 | 0.3351 | 31 |
| 0.5623 | 0.0298 | 0.1770 | 0.9571 | 0.0199 | 0.3256 | 32 |
| 0.4990 | 0.0303 | 0.1582 | 0.9184 | 0.0200 | 0.3147 | 33 |
| 0.4444 | 0.0307 | 0.1424 | 0.8865 | 0.0202 | 0.3062 | 34 |
| 0.3949 | 0.0311 | 0.1260 | 0.8532 | 0.0203 | 0.2968 | 35 |
| 0.3505 | 0.0314 | 0.1118 | 0.8333 | 0.0204 | 0.2898 | 36 |
| 0.3104 | 0.0317 | 0.0988 | 0.8245 | 0.0204 | 0.2881 | 37 |
| 0.2743 | 0.0321 | 0.0886 | 0.8014 | 0.0205 | 0.2825 | 38 |
| 0.2428 | 0.0323 | 0.0842 | 0.7944 | 0.0206 | 0.2794 | 39 |
| 0.2120 | 0.0326 | 0.0880 | 0.7742 | 0.0206 | 0.2762 | 40 |
| 0.1863 | 0.0328 | 0.1289 | 0.7744 | 0.0206 | 0.2863 | 41 |
| 0.1621 | 0.0330 | 0.1792 | 0.7683 | 0.0207 | 0.2873 | 42 |
| 0.1390 | 0.0332 | 0.1918 | 0.7664 | 0.0207 | 0.4006 | 43 |
| 0.1194 | 0.0334 | 0.3137 | 0.7596 | 0.0207 | 0.5479 | 44 |
| 0.1022 | 0.0335 | 0.5546 | 0.7607 | 0.0208 | 0.8384 | 45 |
| 0.0880 | 0.0337 | 0.9275 | 0.7595 | 0.0208 | 0.8106 | 46 |
| 0.0740 | 0.0338 | 1.7784 | 0.7555 | 0.0208 | 0.9209 | 47 |
| 0.0622 | 0.0338 | 2.6518 | 0.7572 | 0.0208 | 2.2106 | 48 |
| 0.0528 | 0.0339 | 2.2627 | 0.7565 | 0.0208 | 1.4870 | 49 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
YiYiXu/kandinsky_2.2_decoder_lora_pokemon
|
YiYiXu
| 2023-09-05T08:21:09Z | 6 | 0 |
diffusers
|
[
"diffusers",
"kandinsky",
"text-to-image",
"lora",
"base_model:kandinsky-community/kandinsky-2-2-decoder",
"base_model:adapter:kandinsky-community/kandinsky-2-2-decoder",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-09-05T08:04:52Z |
---
license: creativeml-openrail-m
base_model: kandinsky-community/kandinsky-2-2-decoder
tags:
- kandinsky
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - YiYiXu/kandinsky_2.2_decoder_lora_pokemon
These are LoRA adaptation weights for kandinsky-community/kandinsky-2-2-decoder. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
YokaiKoibito/falcon-40b-GGML
|
YokaiKoibito
| 2023-09-05T08:16:19Z | 0 | 0 | null |
[
"Transformers",
"falcon",
"text-generation-inference",
"license:other",
"region:us"
] | null | 2023-08-15T01:22:58Z |
---
license: other
tags:
- Transformers
- falcon
- text-generation-inference
---
Some GGML quantizations of [TII](https://www.tii.ae)'s [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b) base model, for use with ggllm.cpp.
|
AniketParab/results
|
AniketParab
| 2023-09-05T08:06:30Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-09-04T13:39:30Z |
---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bigmorning/whisper_input_decoder_shift_r_labels_with_force__0040
|
bigmorning
| 2023-09-05T07:55:06Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-05T07:54:57Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_with_force__0040
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_with_force__0040
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2428
- Train Accuracy: 0.0323
- Train Wermet: 0.0842
- Validation Loss: 0.7944
- Validation Accuracy: 0.0206
- Validation Wermet: 0.2794
- Epoch: 39
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6249 | 0.0091 | 1.7162 | 4.2965 | 0.0094 | 0.9447 | 0 |
| 4.9223 | 0.0099 | 0.9041 | 4.1562 | 0.0097 | 0.9327 | 1 |
| 4.6814 | 0.0107 | 0.8376 | 3.9245 | 0.0103 | 0.8927 | 2 |
| 4.4407 | 0.0114 | 0.8311 | 3.7252 | 0.0107 | 0.8775 | 3 |
| 4.2445 | 0.0119 | 0.8228 | 3.6283 | 0.0108 | 0.8695 | 4 |
| 4.0889 | 0.0123 | 0.8067 | 3.5310 | 0.0110 | 0.8916 | 5 |
| 3.9575 | 0.0127 | 0.7908 | 3.4478 | 0.0113 | 0.8407 | 6 |
| 3.8547 | 0.0130 | 0.7781 | 3.4227 | 0.0113 | 0.8670 | 7 |
| 3.7599 | 0.0133 | 0.7654 | 3.3519 | 0.0115 | 0.8375 | 8 |
| 3.6763 | 0.0136 | 0.7543 | 3.3183 | 0.0116 | 0.8678 | 9 |
| 3.6006 | 0.0138 | 0.7421 | 3.2581 | 0.0117 | 0.8120 | 10 |
| 3.5300 | 0.0140 | 0.7296 | 3.2415 | 0.0118 | 0.8257 | 11 |
| 3.4554 | 0.0143 | 0.7179 | 3.2163 | 0.0119 | 0.8078 | 12 |
| 3.3930 | 0.0145 | 0.7057 | 3.1612 | 0.0121 | 0.7758 | 13 |
| 3.3218 | 0.0148 | 0.6946 | 3.1357 | 0.0122 | 0.7760 | 14 |
| 3.2424 | 0.0151 | 0.6806 | 3.0812 | 0.0123 | 0.7639 | 15 |
| 3.1577 | 0.0155 | 0.6633 | 3.0193 | 0.0126 | 0.7428 | 16 |
| 3.0655 | 0.0159 | 0.6454 | 2.9643 | 0.0128 | 0.7423 | 17 |
| 2.9579 | 0.0164 | 0.6271 | 2.8510 | 0.0132 | 0.7103 | 18 |
| 2.8149 | 0.0170 | 0.6022 | 2.7020 | 0.0136 | 0.6811 | 19 |
| 2.6475 | 0.0178 | 0.5775 | 2.5406 | 0.0142 | 0.6495 | 20 |
| 2.4340 | 0.0189 | 0.5451 | 2.3364 | 0.0149 | 0.6166 | 21 |
| 2.2002 | 0.0200 | 0.5065 | 2.1300 | 0.0155 | 0.5766 | 22 |
| 1.9511 | 0.0213 | 0.4658 | 1.9335 | 0.0162 | 0.5419 | 23 |
| 1.6777 | 0.0228 | 0.4184 | 1.7327 | 0.0169 | 0.5071 | 24 |
| 1.4282 | 0.0243 | 0.3754 | 1.5461 | 0.0176 | 0.4669 | 25 |
| 1.2219 | 0.0255 | 0.3365 | 1.4027 | 0.0181 | 0.4326 | 26 |
| 1.0535 | 0.0265 | 0.3016 | 1.2979 | 0.0185 | 0.4134 | 27 |
| 0.9205 | 0.0274 | 0.2731 | 1.1891 | 0.0189 | 0.3843 | 28 |
| 0.8079 | 0.0281 | 0.2453 | 1.1135 | 0.0192 | 0.3659 | 29 |
| 0.7140 | 0.0288 | 0.2218 | 1.0532 | 0.0195 | 0.3495 | 30 |
| 0.6318 | 0.0293 | 0.1975 | 0.9976 | 0.0197 | 0.3351 | 31 |
| 0.5623 | 0.0298 | 0.1770 | 0.9571 | 0.0199 | 0.3256 | 32 |
| 0.4990 | 0.0303 | 0.1582 | 0.9184 | 0.0200 | 0.3147 | 33 |
| 0.4444 | 0.0307 | 0.1424 | 0.8865 | 0.0202 | 0.3062 | 34 |
| 0.3949 | 0.0311 | 0.1260 | 0.8532 | 0.0203 | 0.2968 | 35 |
| 0.3505 | 0.0314 | 0.1118 | 0.8333 | 0.0204 | 0.2898 | 36 |
| 0.3104 | 0.0317 | 0.0988 | 0.8245 | 0.0204 | 0.2881 | 37 |
| 0.2743 | 0.0321 | 0.0886 | 0.8014 | 0.0205 | 0.2825 | 38 |
| 0.2428 | 0.0323 | 0.0842 | 0.7944 | 0.0206 | 0.2794 | 39 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
gArthur98/Capstone_TinyBert
|
gArthur98
| 2023-09-05T07:41:50Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:huawei-noah/TinyBERT_General_4L_312D",
"base_model:finetune:huawei-noah/TinyBERT_General_4L_312D",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-05T07:20:34Z |
---
base_model: huawei-noah/TinyBERT_General_4L_312D
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Capstone_TinyBert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Capstone_TinyBert
This model is a fine-tuned version of [huawei-noah/TinyBERT_General_4L_312D](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3172
- Accuracy: 0.8772
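A quick inference sketch (not part of the generated card); the label names depend on the unknown training dataset:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="gArthur98/Capstone_TinyBert")
print(classifier("The battery life is great, but the screen scratches easily."))
```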
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4628 | 1.0 | 313 | 0.3617 | 0.852 |
| 0.3369 | 2.0 | 626 | 0.3218 | 0.8644 |
| 0.2949 | 3.0 | 939 | 0.3143 | 0.8744 |
| 0.2699 | 4.0 | 1252 | 0.3192 | 0.8718 |
| 0.2481 | 5.0 | 1565 | 0.3172 | 0.8772 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
bigmorning/whisper_input_decoder_shift_r_labels_with_force__0035
|
bigmorning
| 2023-09-05T07:41:48Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-05T07:41:41Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_with_force__0035
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_with_force__0035
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4444
- Train Accuracy: 0.0307
- Train Wermet: 0.1424
- Validation Loss: 0.8865
- Validation Accuracy: 0.0202
- Validation Wermet: 0.3062
- Epoch: 34
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6249 | 0.0091 | 1.7162 | 4.2965 | 0.0094 | 0.9447 | 0 |
| 4.9223 | 0.0099 | 0.9041 | 4.1562 | 0.0097 | 0.9327 | 1 |
| 4.6814 | 0.0107 | 0.8376 | 3.9245 | 0.0103 | 0.8927 | 2 |
| 4.4407 | 0.0114 | 0.8311 | 3.7252 | 0.0107 | 0.8775 | 3 |
| 4.2445 | 0.0119 | 0.8228 | 3.6283 | 0.0108 | 0.8695 | 4 |
| 4.0889 | 0.0123 | 0.8067 | 3.5310 | 0.0110 | 0.8916 | 5 |
| 3.9575 | 0.0127 | 0.7908 | 3.4478 | 0.0113 | 0.8407 | 6 |
| 3.8547 | 0.0130 | 0.7781 | 3.4227 | 0.0113 | 0.8670 | 7 |
| 3.7599 | 0.0133 | 0.7654 | 3.3519 | 0.0115 | 0.8375 | 8 |
| 3.6763 | 0.0136 | 0.7543 | 3.3183 | 0.0116 | 0.8678 | 9 |
| 3.6006 | 0.0138 | 0.7421 | 3.2581 | 0.0117 | 0.8120 | 10 |
| 3.5300 | 0.0140 | 0.7296 | 3.2415 | 0.0118 | 0.8257 | 11 |
| 3.4554 | 0.0143 | 0.7179 | 3.2163 | 0.0119 | 0.8078 | 12 |
| 3.3930 | 0.0145 | 0.7057 | 3.1612 | 0.0121 | 0.7758 | 13 |
| 3.3218 | 0.0148 | 0.6946 | 3.1357 | 0.0122 | 0.7760 | 14 |
| 3.2424 | 0.0151 | 0.6806 | 3.0812 | 0.0123 | 0.7639 | 15 |
| 3.1577 | 0.0155 | 0.6633 | 3.0193 | 0.0126 | 0.7428 | 16 |
| 3.0655 | 0.0159 | 0.6454 | 2.9643 | 0.0128 | 0.7423 | 17 |
| 2.9579 | 0.0164 | 0.6271 | 2.8510 | 0.0132 | 0.7103 | 18 |
| 2.8149 | 0.0170 | 0.6022 | 2.7020 | 0.0136 | 0.6811 | 19 |
| 2.6475 | 0.0178 | 0.5775 | 2.5406 | 0.0142 | 0.6495 | 20 |
| 2.4340 | 0.0189 | 0.5451 | 2.3364 | 0.0149 | 0.6166 | 21 |
| 2.2002 | 0.0200 | 0.5065 | 2.1300 | 0.0155 | 0.5766 | 22 |
| 1.9511 | 0.0213 | 0.4658 | 1.9335 | 0.0162 | 0.5419 | 23 |
| 1.6777 | 0.0228 | 0.4184 | 1.7327 | 0.0169 | 0.5071 | 24 |
| 1.4282 | 0.0243 | 0.3754 | 1.5461 | 0.0176 | 0.4669 | 25 |
| 1.2219 | 0.0255 | 0.3365 | 1.4027 | 0.0181 | 0.4326 | 26 |
| 1.0535 | 0.0265 | 0.3016 | 1.2979 | 0.0185 | 0.4134 | 27 |
| 0.9205 | 0.0274 | 0.2731 | 1.1891 | 0.0189 | 0.3843 | 28 |
| 0.8079 | 0.0281 | 0.2453 | 1.1135 | 0.0192 | 0.3659 | 29 |
| 0.7140 | 0.0288 | 0.2218 | 1.0532 | 0.0195 | 0.3495 | 30 |
| 0.6318 | 0.0293 | 0.1975 | 0.9976 | 0.0197 | 0.3351 | 31 |
| 0.5623 | 0.0298 | 0.1770 | 0.9571 | 0.0199 | 0.3256 | 32 |
| 0.4990 | 0.0303 | 0.1582 | 0.9184 | 0.0200 | 0.3147 | 33 |
| 0.4444 | 0.0307 | 0.1424 | 0.8865 | 0.0202 | 0.3062 | 34 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
bigmorning/whisper_input_decoder_shift_r_labels_with_force__0030
|
bigmorning
| 2023-09-05T07:28:36Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-05T07:28:28Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_with_force__0030
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_with_force__0030
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8079
- Train Accuracy: 0.0281
- Train Wermet: 0.2453
- Validation Loss: 1.1135
- Validation Accuracy: 0.0192
- Validation Wermet: 0.3659
- Epoch: 29
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6249 | 0.0091 | 1.7162 | 4.2965 | 0.0094 | 0.9447 | 0 |
| 4.9223 | 0.0099 | 0.9041 | 4.1562 | 0.0097 | 0.9327 | 1 |
| 4.6814 | 0.0107 | 0.8376 | 3.9245 | 0.0103 | 0.8927 | 2 |
| 4.4407 | 0.0114 | 0.8311 | 3.7252 | 0.0107 | 0.8775 | 3 |
| 4.2445 | 0.0119 | 0.8228 | 3.6283 | 0.0108 | 0.8695 | 4 |
| 4.0889 | 0.0123 | 0.8067 | 3.5310 | 0.0110 | 0.8916 | 5 |
| 3.9575 | 0.0127 | 0.7908 | 3.4478 | 0.0113 | 0.8407 | 6 |
| 3.8547 | 0.0130 | 0.7781 | 3.4227 | 0.0113 | 0.8670 | 7 |
| 3.7599 | 0.0133 | 0.7654 | 3.3519 | 0.0115 | 0.8375 | 8 |
| 3.6763 | 0.0136 | 0.7543 | 3.3183 | 0.0116 | 0.8678 | 9 |
| 3.6006 | 0.0138 | 0.7421 | 3.2581 | 0.0117 | 0.8120 | 10 |
| 3.5300 | 0.0140 | 0.7296 | 3.2415 | 0.0118 | 0.8257 | 11 |
| 3.4554 | 0.0143 | 0.7179 | 3.2163 | 0.0119 | 0.8078 | 12 |
| 3.3930 | 0.0145 | 0.7057 | 3.1612 | 0.0121 | 0.7758 | 13 |
| 3.3218 | 0.0148 | 0.6946 | 3.1357 | 0.0122 | 0.7760 | 14 |
| 3.2424 | 0.0151 | 0.6806 | 3.0812 | 0.0123 | 0.7639 | 15 |
| 3.1577 | 0.0155 | 0.6633 | 3.0193 | 0.0126 | 0.7428 | 16 |
| 3.0655 | 0.0159 | 0.6454 | 2.9643 | 0.0128 | 0.7423 | 17 |
| 2.9579 | 0.0164 | 0.6271 | 2.8510 | 0.0132 | 0.7103 | 18 |
| 2.8149 | 0.0170 | 0.6022 | 2.7020 | 0.0136 | 0.6811 | 19 |
| 2.6475 | 0.0178 | 0.5775 | 2.5406 | 0.0142 | 0.6495 | 20 |
| 2.4340 | 0.0189 | 0.5451 | 2.3364 | 0.0149 | 0.6166 | 21 |
| 2.2002 | 0.0200 | 0.5065 | 2.1300 | 0.0155 | 0.5766 | 22 |
| 1.9511 | 0.0213 | 0.4658 | 1.9335 | 0.0162 | 0.5419 | 23 |
| 1.6777 | 0.0228 | 0.4184 | 1.7327 | 0.0169 | 0.5071 | 24 |
| 1.4282 | 0.0243 | 0.3754 | 1.5461 | 0.0176 | 0.4669 | 25 |
| 1.2219 | 0.0255 | 0.3365 | 1.4027 | 0.0181 | 0.4326 | 26 |
| 1.0535 | 0.0265 | 0.3016 | 1.2979 | 0.0185 | 0.4134 | 27 |
| 0.9205 | 0.0274 | 0.2731 | 1.1891 | 0.0189 | 0.3843 | 28 |
| 0.8079 | 0.0281 | 0.2453 | 1.1135 | 0.0192 | 0.3659 | 29 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
Vibharkchauhan/it_ticket_classification_llama2
|
Vibharkchauhan
| 2023-09-05T07:27:31Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T07:27:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
trieudemo11/llama_7b_attrb_cate_big_l280_10
|
trieudemo11
| 2023-09-05T07:24:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T07:24:17Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
Lak31/outputs
|
Lak31
| 2023-09-05T07:24:03Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:TheBloke/Llama-2-7B-Chat-GPTQ",
"base_model:finetune:TheBloke/Llama-2-7B-Chat-GPTQ",
"license:other",
"region:us"
] | null | 2023-09-05T07:23:53Z |
---
license: other
base_model: TheBloke/Llama-2-7b-Chat-GPTQ
tags:
- generated_from_trainer
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [TheBloke/Llama-2-7b-Chat-GPTQ](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GPTQ) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 30
### Training results
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
s3nh/openlm-research-open_llama_3b_v2-GGUF
|
s3nh
| 2023-09-05T07:19:07Z | 1 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-05T07:14:30Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/openlm-research/open_llama_3b_v2).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
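As a quick illustration (a sketch under assumptions, not part of the original card -- the exact quantized file name may differ), such a GGUF file can be loaded with llama-cpp-python:

```python
# Minimal sketch: loading a q4_0 GGUF file of this model with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="open_llama_3b_v2.q4_0.gguf", n_ctx=2048)
out = llm("Q: What is the GGUF format? A:", max_tokens=64)
print(out["choices"][0]["text"])
```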
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|:-----:|:-------:|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
|
bigmorning/whisper_input_decoder_shift_r_labels_with_force__0025
|
bigmorning
| 2023-09-05T07:15:20Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-05T07:15:12Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_with_force__0025
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_with_force__0025
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6777
- Train Accuracy: 0.0228
- Train Wermet: 0.4184
- Validation Loss: 1.7327
- Validation Accuracy: 0.0169
- Validation Wermet: 0.5071
- Epoch: 24
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
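The optimizer dictionary above can be reconstructed with transformers' TensorFlow utilities; the snippet below is a sketch for illustration (the compile call is an assumption about how the Keras model is then trained), not code taken from this repository:

```python
# Sketch: rebuilding the AdamWeightDecay optimizer described above (TensorFlow).
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=1e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
# model.compile(optimizer=optimizer)  # assumed usage with the TF Whisper model
```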
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6249 | 0.0091 | 1.7162 | 4.2965 | 0.0094 | 0.9447 | 0 |
| 4.9223 | 0.0099 | 0.9041 | 4.1562 | 0.0097 | 0.9327 | 1 |
| 4.6814 | 0.0107 | 0.8376 | 3.9245 | 0.0103 | 0.8927 | 2 |
| 4.4407 | 0.0114 | 0.8311 | 3.7252 | 0.0107 | 0.8775 | 3 |
| 4.2445 | 0.0119 | 0.8228 | 3.6283 | 0.0108 | 0.8695 | 4 |
| 4.0889 | 0.0123 | 0.8067 | 3.5310 | 0.0110 | 0.8916 | 5 |
| 3.9575 | 0.0127 | 0.7908 | 3.4478 | 0.0113 | 0.8407 | 6 |
| 3.8547 | 0.0130 | 0.7781 | 3.4227 | 0.0113 | 0.8670 | 7 |
| 3.7599 | 0.0133 | 0.7654 | 3.3519 | 0.0115 | 0.8375 | 8 |
| 3.6763 | 0.0136 | 0.7543 | 3.3183 | 0.0116 | 0.8678 | 9 |
| 3.6006 | 0.0138 | 0.7421 | 3.2581 | 0.0117 | 0.8120 | 10 |
| 3.5300 | 0.0140 | 0.7296 | 3.2415 | 0.0118 | 0.8257 | 11 |
| 3.4554 | 0.0143 | 0.7179 | 3.2163 | 0.0119 | 0.8078 | 12 |
| 3.3930 | 0.0145 | 0.7057 | 3.1612 | 0.0121 | 0.7758 | 13 |
| 3.3218 | 0.0148 | 0.6946 | 3.1357 | 0.0122 | 0.7760 | 14 |
| 3.2424 | 0.0151 | 0.6806 | 3.0812 | 0.0123 | 0.7639 | 15 |
| 3.1577 | 0.0155 | 0.6633 | 3.0193 | 0.0126 | 0.7428 | 16 |
| 3.0655 | 0.0159 | 0.6454 | 2.9643 | 0.0128 | 0.7423 | 17 |
| 2.9579 | 0.0164 | 0.6271 | 2.8510 | 0.0132 | 0.7103 | 18 |
| 2.8149 | 0.0170 | 0.6022 | 2.7020 | 0.0136 | 0.6811 | 19 |
| 2.6475 | 0.0178 | 0.5775 | 2.5406 | 0.0142 | 0.6495 | 20 |
| 2.4340 | 0.0189 | 0.5451 | 2.3364 | 0.0149 | 0.6166 | 21 |
| 2.2002 | 0.0200 | 0.5065 | 2.1300 | 0.0155 | 0.5766 | 22 |
| 1.9511 | 0.0213 | 0.4658 | 1.9335 | 0.0162 | 0.5419 | 23 |
| 1.6777 | 0.0228 | 0.4184 | 1.7327 | 0.0169 | 0.5071 | 24 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
Hozier/ddpm-celebahq-finetuned-butterflies-2epochs
|
Hozier
| 2023-09-05T06:56:07Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-09-05T06:55:23Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
A model fine-tuned from CelebA-HQ to butterflies.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Hozier/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
Envoid/Yousei-22B
|
Envoid
| 2023-09-05T06:53:37Z | 1,499 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2306.01708",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-04T05:14:01Z |
# This model may output NSFW content
## About
This model started as a block-diagonal [frankenllama merge](https://huggingface.co/chargoddard/llama2-22b) between [MythoMax13B](https://huggingface.co/Gryphe/MythoMax-L2-13b) and [Enterredaas33B](https://huggingface.co/Aeala/Enterredaas-33b)
However, due to some anomaly likely caused by the novel methods used by MythoMax, I was unable to initiate the LoRA training needed to bring the resulting model back to order.
Being a [Chronorctypus-Limarobormes](https://huggingface.co/chargoddard/Chronorctypus-Limarobormes-13b) enjoyer, I decided to look further into the TIES-merging that it utilizes, as cited in the arXiv paper: [Resolving Interference When Merging Models](https://huggingface.co/papers/2306.01708).
I used [llama2-22b](https://huggingface.co/chargoddard/llama2-22b) as the base model upon which I merged the MythoMax/Enterredaas frankenmerge, [Dendrite-II](https://huggingface.co/Envoid/Dendrite-II-22B) and [Bacchus](https://huggingface.co/Envoid/Bacchus-22B).
Suffice it to say that, being made of merges of merges, Yousei-22B has the DNA of many different models in it, although the Llama-2-chat DNA from Dendrite-II and the DNA from MythoMax seem to shine the brightest.
## Usage
I no longer use simple-proxy-for-tavern as part of my testing stack; instead I test on a combination of the kobold-lite UI and the latest version of silly-tavern (now with more advanced prompt formatting options).
When using the llama-2 prompt format the model tends to produce a lot of refusals when requesting mature or controversial content. With an alpaca-style format you don't get as many refusals, although they still happen. I have also not yet dialed in the best generation parameter presets.
It's decent at roleplay, although it sometimes wavers between chat-style output and prose. Where this model really shines is instruct-prescribed writing.
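A loading-and-generation sketch with the alpaca-style format described above (standard transformers usage assumed; the prompt text is only an example, not from this card):

```python
# Sketch: prompting Yousei-22B alpaca-style with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Envoid/Yousei-22B")
model = AutoModelForCausalLM.from_pretrained(
    "Envoid/Yousei-22B", torch_dtype=torch.float16, device_map="auto"
)

prompt = "### Instruction:\nWrite a short scene set in a moonlit forest.\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```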
## Example
### Prompt
```
### Instruction:
Write a poem only a Yōsei could appreciate.
### Response:
```
### Reply
```
In moonlit glades, where shadows dance and play,
Where ancient magic fills the night away,
A world of wonder, unseen by day,
The yōsei's realm, in softest sway.
Beneath the boughs of trees so tall,
Their leaves like jewels, a treasure to enthrall,
The creatures gather, young and old,
To share their tales, their joys, their bold.
With furry friends, both great and small,
They frolic, wild and free for all,
Through woodland paths, they lead the way,
In this enchanted land of endless fray.
The stars above, their twinkling eyes,
Like fireflies that light the midnight skies,
Guide these mythical beings, wise and true,
As they weave their spells anew.
The rustle of leaves, the snap of twig,
The soft caress of silken wing,
Are music to their ears, a gentle hymn,
In this mystic forest, where they belong.
So let us revel in this place,
Where nature's secrets find their grace,
And bask in the ethereal glow,
Of the world that only yōsei know.
```
If you find you are getting refusals when using instruct prompts you can almost always get around the refusal like this:
```
### Instruction:
Write a story
### Response:
Certainly! here's a story:
```
|
oljike/jackson_model
|
oljike
| 2023-09-05T06:49:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:dreamlike-art/dreamlike-photoreal-2.0",
"base_model:finetune:dreamlike-art/dreamlike-photoreal-2.0",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-09-04T08:17:52Z |
---
license: creativeml-openrail-m
base_model: dreamlike-art/dreamlike-photoreal-2.0
instance_prompt: jacksondavis person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - oljike/jackson_model
This is a dreambooth model derived from dreamlike-art/dreamlike-photoreal-2.0. The weights were trained on jacksondavis person using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
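A hedged usage sketch (not included in the original card), assuming the standard diffusers text-to-image API and the instance prompt listed above; the exact prompt wording is an assumption:

```python
# Sketch: generating an image with this DreamBooth checkpoint via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "oljike/jackson_model", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of jacksondavis person, portrait, natural light").images[0]
image.save("jacksondavis.png")
```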
|
Shivam098/opt-translation
|
Shivam098
| 2023-09-05T06:46:59Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"dataset:opus100",
"base_model:Shivam098/opt-translation",
"base_model:quantized:Shivam098/opt-translation",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text2text-generation
| 2023-08-29T07:52:03Z |
---
base_model: Shivam098/opt-translation
tags:
- generated_from_trainer
datasets:
- opus100
model-index:
- name: opt-Translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-Translation
This model is a fine-tuned version of [Shivam098/opt-translation](https://huggingface.co/Shivam098/opt-translation) on the opus100 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 300
### Training results
### Framework versions
- Transformers 4.33.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
prognosis/cardio-llama-2-7b-miniguanaco-guideline-lora-v15
|
prognosis
| 2023-09-05T06:38:04Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T06:36:42Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
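Expressed with transformers (a sketch for illustration only, mirroring the values above as typically used for 4-bit QLoRA training):

```python
# Sketch: the 4-bit NF4 bitsandbytes settings above as a BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
```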
### Framework versions
- PEFT 0.4.0
|
neggles/war-crimes
|
neggles
| 2023-09-05T06:35:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-23T11:02:58Z |
---
license: creativeml-openrail-m
---
|
KrishnaKoustubBS/llama2-qlora-finetunined-french
|
KrishnaKoustubBS
| 2023-09-05T06:31:23Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T06:31:19Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.0.dev0
|
CzarnyRycerz/pixelcopter-model-2
|
CzarnyRycerz
| 2023-09-05T06:29:38Z | 0 | 2 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-05T05:27:41Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: pixelcopter-model-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 91.50 +/- 55.68
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
TomyAI/juniorbra
|
TomyAI
| 2023-09-05T06:28:16Z | 0 | 19 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-05T04:35:54Z |
---
license: creativeml-openrail-m
---
A LoRA for cute underwear.
Prompts:
bra: standard triangle type
camisole bra: camisole type (string straps)
tank top bra: tank-top type (wide straps)
sports bra: sports type; the difference from the tank-top type is marginal
tube top bra: tube-top type
y-back bra: Y-back type (works poorly; treat it as a nicety)
u-back bra: U-back type
panties: panties
shorts: trunks- to boxer-style panties
piping: piped edging
printed: patterned


|
nbogdan/flant5-large-2ex-bridging-3epochs
|
nbogdan
| 2023-09-05T06:22:57Z | 2 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"t5",
"adapterhub:self-explanations",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-05T06:22:45Z |
---
tags:
- adapter-transformers
- t5
- adapterhub:self-explanations
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-large-2ex-bridging-3epochs` for google/flan-t5-large
An [adapter](https://adapterhub.ml) for the `google/flan-t5-large` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-large")
adapter_name = model.load_adapter("nbogdan/flant5-large-2ex-bridging-3epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
bigmorning/whisper_input_decoder_shift_r_labels_with_force__0005
|
bigmorning
| 2023-09-05T06:22:22Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"whisper",
"automatic-speech-recognition",
"generated_from_keras_callback",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-05T06:22:14Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_keras_callback
model-index:
- name: whisper_input_decoder_shift_r_labels_with_force__0005
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_input_decoder_shift_r_labels_with_force__0005
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.2445
- Train Accuracy: 0.0119
- Train Wermet: 0.8228
- Validation Loss: 3.6283
- Validation Accuracy: 0.0108
- Validation Wermet: 0.8695
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.6249 | 0.0091 | 1.7162 | 4.2965 | 0.0094 | 0.9447 | 0 |
| 4.9223 | 0.0099 | 0.9041 | 4.1562 | 0.0097 | 0.9327 | 1 |
| 4.6814 | 0.0107 | 0.8376 | 3.9245 | 0.0103 | 0.8927 | 2 |
| 4.4407 | 0.0114 | 0.8311 | 3.7252 | 0.0107 | 0.8775 | 3 |
| 4.2445 | 0.0119 | 0.8228 | 3.6283 | 0.0108 | 0.8695 | 4 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Tokenizers 0.13.3
|
premai-io/CodeLlama-34b-Instruct-hf
|
premai-io
| 2023-09-05T06:17:42Z | 17 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-2",
"code",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-01T15:22:43Z |
---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
license: llama2
---
# Code Llama for Petals
Resharded Code Llama repository optimized for Petals inference. Instead of having 7 shards of ~10GiB each, the current repository has 49 shards of ~1.5GiB each.
For more information about the model, you can check the official card [here](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf).
# Getting Started
```python
# pip install git+https://github.com/huggingface/transformers.git@main accelerate
from transformers import LlamaTokenizer, AutoModelForCausalLM
tokenizer = LlamaTokenizer.from_pretrained("premai-io/CodeLlama-34b-Instruct-hf")
model = AutoModelForCausalLM.from_pretrained("premai-io/CodeLlama-34b-Instruct-hf")
inputs = tokenizer("def hello_world():", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
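For the Petals use case this resharding targets, a hypothetical sketch follows (assuming the `petals` package and its `AutoDistributedModelForCausalLM` API; this is not part of the original instructions):

```python
# Sketch: distributed inference over Petals with the resharded checkpoint.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_id = "premai-io/CodeLlama-34b-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoDistributedModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```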
# Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-user-guide.
|
nbogdan/flant5-large-1ex-elaboration-1epochs
|
nbogdan
| 2023-09-05T06:11:11Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-05T06:10:59Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-large-1ex-elaboration-1epochs` for google/flan-t5-large
An [adapter](https://adapterhub.ml) for the `google/flan-t5-large` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-large")
adapter_name = model.load_adapter("nbogdan/flant5-large-1ex-elaboration-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
rohitpanjwani/base_model_ep_20
|
rohitpanjwani
| 2023-09-05T05:44:33Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-04T12:09:13Z |
# Model Name
model_name = "google/flan-t5-base"
# Parameter for AutoTokenizer
pad_to_max_length_GD = False
max_length_GD = 64
# Parameter for DataCollator
padding_in_datacollator = 'longest'
# Training Arguments
batch_size = 4
lr_rate = 1e-5
num_epoch = 20
weight_decay = 0.01
fp16 = False # Make true if GPU is there
predict_with_generate=True
eval_steps = 1000
save_steps = 1000
load_best_model_at_end=True
# Generating output
temperature = 1.5
max_len = 256
num_beam = 50
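One plausible way to wire these values into the standard Seq2SeqTrainer API (a sketch under assumptions -- the output directory name and the added evaluation strategy are not stated in this card):

```python
# Sketch: assembling the listed hyperparameters with transformers' Seq2Seq utilities.
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
)

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, padding="longest")

training_args = Seq2SeqTrainingArguments(
    output_dir="base_model_ep_20",      # assumed name
    per_device_train_batch_size=4,      # batch_size = 4
    learning_rate=1e-5,                 # lr_rate = 1e-5
    num_train_epochs=20,                # num_epoch = 20
    weight_decay=0.01,
    fp16=False,
    predict_with_generate=True,
    evaluation_strategy="steps",        # required for load_best_model_at_end
    eval_steps=1000,
    save_steps=1000,
    load_best_model_at_end=True,
)
```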
|
nbogdan/flant5-large-1ex-paraphrasing-1epochs
|
nbogdan
| 2023-09-05T05:25:04Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-05T05:24:50Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-large-1ex-paraphrasing-1epochs` for google/flan-t5-large
An [adapter](https://adapterhub.ml) for the `google/flan-t5-large` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-large")
adapter_name = model.load_adapter("nbogdan/flant5-large-1ex-paraphrasing-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
aditira/audio_classification
|
aditira
| 2023-09-05T05:16:04Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-09-05T04:54:03Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: audio_classification
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.09734513274336283
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# audio_classification
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6366
- Accuracy: 0.0973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6366 | 0.0973 |
| No log | 1.87 | 7 | 2.6403 | 0.0531 |
| 2.6373 | 2.93 | 11 | 2.6460 | 0.0708 |
| 2.6373 | 4.0 | 15 | 2.6461 | 0.0796 |
| 2.6373 | 4.8 | 18 | 2.6483 | 0.0708 |
| 2.6225 | 5.87 | 22 | 2.6474 | 0.0531 |
| 2.6225 | 6.93 | 26 | 2.6475 | 0.0442 |
| 2.6225 | 8.0 | 30 | 2.6492 | 0.0354 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
DKYoon/mt5-small-lm-adapt
|
DKYoon
| 2023-09-05T05:08:01Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:2205.12647",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-04-13T18:22:05Z |
---
license: apache-2.0
---
🤗 Language model initialized from mT5 and trained for an additional 100K steps on the Prefix LM objective using mC4 data.
Paper: [Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation](https://arxiv.org/abs/2205.12647)
Authors: Tu Vu, Aditya Barua, Brian Lester, Daniel Cer, Mohit Iyyer, Noah Constant
PyTorch port of the original Flax checkpoint at [Google/T5X repository](https://github.com/google-research/t5x).
|
Haimi/ArtGAN
|
Haimi
| 2023-09-05T05:03:22Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2023-09-04T11:35:15Z |
---
license: cc-by-nc-nd-4.0
---
License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
NonCommercial — You may not use the material for commercial purposes.
NoDerivatives — If you remix, transform, or build upon the material, you may not distribute the modified material.
Acknowledgements
Personal creation
|
aditira/text_classification
|
aditira
| 2023-09-05T04:53:12Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-05T03:25:38Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: text_classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2237
- Accuracy: 0.9156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2238 | 1.0 | 1563 | 0.2237 | 0.9156 |
| 0.1507 | 2.0 | 3126 | 0.2243 | 0.9313 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
nbogdan/flant5-large-1ex-overall-1epochs
|
nbogdan
| 2023-09-05T04:47:13Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-05T04:47:01Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-large-1ex-overall-1epochs` for google/flan-t5-large
An [adapter](https://adapterhub.ml) for the `google/flan-t5-large` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-large")
adapter_name = model.load_adapter("nbogdan/flant5-large-1ex-overall-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
Jaya1995/maintenance
|
Jaya1995
| 2023-09-05T04:30:51Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T04:30:47Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
jschew39/marketmail_model
|
jschew39
| 2023-09-05T04:25:34Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T04:25:33Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.0.dev0
|
Aryanne/OmegLLaMA-3B-ggml-and-gguf
|
Aryanne
| 2023-09-05T04:14:56Z | 46 | 1 | null |
[
"gguf",
"text-generation",
"en",
"dataset:anon8231489123/Omegle_logs_dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-04T17:59:27Z |
---
license: apache-2.0
datasets:
- anon8231489123/Omegle_logs_dataset
language:
- en
pipeline_tag: text-generation
---
My first quantization: a q4_0 GGML (ggjt v3) and GGUF v2 quantization of the model https://huggingface.co/acrastt/OmegLLaMA-3B
I hope it's working fine. 🤗
Prompt format:
```
Interests: {interests}
Conversation:
You: {prompt}
Stranger:
```
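For illustration, the template above can be filled in and run with llama-cpp-python; this is a sketch only, and the quantized file name and the example interests are assumptions:

```python
# Sketch: filling the prompt template and generating with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="OmegLLaMA-3B.q4_0.gguf", n_ctx=2048)

prompt = (
    "Interests: books, hiking\n"
    "Conversation:\n"
    "You: Hi! Read anything good lately?\n"
    "Stranger:"
)
out = llm(prompt, max_tokens=64, stop=["You:"])
print(out["choices"][0]["text"])
```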
|
nbogdan/flant5-large-0ex-bridging-1epochs
|
nbogdan
| 2023-09-05T04:11:11Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-05T04:10:59Z |
---
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-large-0ex-bridging-1epochs` for google/flan-t5-large
An [adapter](https://adapterhub.ml) for the `google/flan-t5-large` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-large")
adapter_name = model.load_adapter("nbogdan/flant5-large-0ex-bridging-1epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
xszhou/Taxi-v3
|
xszhou
| 2023-09-05T03:57:30Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-05T03:57:29Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym
# `load_from_hub` is the helper used in the Deep RL Course notebooks to fetch the pickled Q-table.
model = load_from_hub(repo_id="xszhou/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
amitsbhatidados/out_model
|
amitsbhatidados
| 2023-09-05T03:50:52Z | 53 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-08-31T10:52:39Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: out_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# out_model
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
JasonTheDeveloper/squad-bloom-1b1
|
JasonTheDeveloper
| 2023-09-05T03:28:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T03:28:27Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.6.0.dev0
|
nbogdan/flant5-large-2ex-elaboration-3epochs
|
nbogdan
| 2023-09-05T03:13:00Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"t5",
"adapterhub:self-explanations",
"dataset:self-explanations",
"region:us"
] | null | 2023-09-05T03:11:57Z |
---
tags:
- adapter-transformers
- t5
- adapterhub:self-explanations
datasets:
- self-explanations
---
# Adapter `nbogdan/flant5-large-2ex-elaboration-3epochs` for google/flan-t5-large
An [adapter](https://adapterhub.ml) for the `google/flan-t5-large` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-large")
adapter_name = model.load_adapter("nbogdan/flant5-large-2ex-elaboration-3epochs", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
nightdude/config_800352
|
nightdude
| 2023-09-05T03:11:11Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T01:47:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
nightdude/config_800353
|
nightdude
| 2023-09-05T02:59:20Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-09-05T01:58:15Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
nlappas/q-FrozenLake-v1-4x4-noSlippery
|
nlappas
| 2023-09-05T02:55:49Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-09-05T02:55:46Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym
# `load_from_hub` is the helper used in the Deep RL Course notebooks to fetch the pickled Q-table.
model = load_from_hub(repo_id="nlappas/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
team-lucid/hubert-base-korean
|
team-lucid
| 2023-09-05T02:55:16Z | 441 | 25 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"hubert",
"feature-extraction",
"speech",
"audio",
"automatic-speech-recognition",
"custom_code",
"ko",
"arxiv:2106.07447",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-29T12:00:30Z |
---
license: apache-2.0
language:
- ko
library_name: transformers
pipeline_tag: automatic-speech-recognition
tags:
- speech
- audio
---
# hubert-base-korean
## Model Details
Hubert (Hidden-Unit BERT) is a speech representation learning model proposed by Facebook.
Unlike conventional speech recognition models, Hubert uses a self-supervised learning approach that learns directly from the raw waveform of the speech signal.
This work was trained on Cloud TPUs provided through Google's TPU Research Cloud (TRC).
### Model Description
<table>
<tr>
<td colspan="2"></td>
<td>Base</td>
<td>Large</td>
</tr>
<tr>
<td rowspan="3">CNN Encoder</td>
<td>strides</td>
<td colspan="2">5, 2, 2, 2, 2, 2, 2</td>
</tr>
<tr>
<td>kernel width</td>
<td colspan="2">10, 3, 3, 3, 3, 2, 2</td>
</tr>
<tr>
<td>channel</td>
<td colspan="2">512</td>
</tr>
<tr>
<td rowspan="4">Transformer Encoder</td>
<td>Layer</td>
<td>12</td>
<td>24</td>
</tr>
<tr>
<td>embedding dim</td>
<td>768</td>
<td>1024</td>
</tr>
<tr>
<td>inner FFN dim</td>
<td>3072</td>
<td>4096</td>
</tr>
<tr>
<td>attention heads</td>
<td>8</td>
<td>16</td>
</tr>
<tr>
<td>Projection</td>
<td>dim</td>
<td>256</td>
<td>768</td>
</tr>
<tr>
<td colspan="2">Params</td>
<td>95M</td>
<td>317M </td>
</tr>
</table>
## How to Get Started with the Model
### Pytorch
```py
import torch
from transformers import HubertModel
model = HubertModel.from_pretrained("team-lucid/hubert-base-korean")
wav = torch.ones(1, 16000)
outputs = model(wav)
print(f"Input: {wav.shape}") # [1, 16000]
print(f"Output: {outputs.last_hidden_state.shape}") # [1, 49, 768]
```
### JAX/Flax
```py
import jax.numpy as jnp
from transformers import FlaxAutoModel
model = FlaxAutoModel.from_pretrained("team-lucid/hubert-base-korean", trust_remote_code=True)
wav = jnp.ones((1, 16000))
outputs = model(wav)
print(f"Input: {wav.shape}") # [1, 16000]
print(f"Output: {outputs.last_hidden_state.shape}") # [1, 49, 768]
```
## Training Details
### Training Data
This model was trained on roughly 4,000 hours of speech extracted from [Free Conversation Speech (general male/female)](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=109), [Multi-Speaker Speech Synthesis Data](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=542), and [Broadcast Content Conversational Speech Recognition Data](https://www.aihub.or.kr/aihubdata/data/view.do?dataSetSn=463), datasets built with funding from the Ministry of Science and ICT and support from the National Information Society Agency (NIA).
### Training Procedure
As in [the original paper](https://arxiv.org/pdf/2106.07447.pdf), the Base model was first trained on MFCC-based targets; k-means with 500 clusters was then performed, and the Base and Large models were trained again.
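A conceptual sketch of producing MFCC-based k-means units of this kind (not the project's actual pipeline; file paths, feature settings, and batch sizes here are assumptions):

```python
# Sketch: frame-level k-means pseudo-labels from MFCC features for HuBERT-style training.
import numpy as np
import librosa
from sklearn.cluster import MiniBatchKMeans

def mfcc_features(path, sr=16000, n_mfcc=13):
    wav, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=wav, sr=sr, n_mfcc=n_mfcc).T  # (frames, n_mfcc)

files = ["utt_0001.wav", "utt_0002.wav"]  # placeholder file list
feats = np.concatenate([mfcc_features(p) for p in files])

kmeans = MiniBatchKMeans(n_clusters=500, batch_size=10000).fit(feats)

# The frame-level cluster ids act as the hidden-unit targets for masked prediction.
targets = kmeans.predict(mfcc_features(files[0]))
```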
#### Training Hyperparameters
| Hyperparameter | Base | Large |
|:--------------------|---------|--------:|
| Warmup Steps | 32,000 | 32,000 |
| Learning Rates | 5e-4 | 1.5e-3 |
| Batch Size | 128 | 128 |
| Weight Decay | 0.01 | 0.01 |
| Max Steps | 400,000 | 400,000 |
| Learning Rate Decay | 0.1 | 0.1 |
| \\(Adam\beta_1\\) | 0.9 | 0.9 |
| \\(Adam\beta_2\\) | 0.99 | 0.99 |
|
dmatekenya/wav2vec2-large-xls-r-1b-chichewa
|
dmatekenya
| 2023-09-05T02:41:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-1b",
"base_model:finetune:facebook/wav2vec2-xls-r-1b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-09-04T20:01:56Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-1b
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-1b-chichewa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1b-chichewa
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.8481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2567 | 3.51 | 400 | inf | 0.9449 |
| 1.476 | 7.02 | 800 | inf | 0.8481 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
alex1qaz/bert-finetuned-goodsmemo-ner
|
alex1qaz
| 2023-09-05T02:30:43Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:goodsmemo",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-09-05T02:16:14Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- goodsmemo
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-goodsmemo-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: goodsmemo
type: goodsmemo
config: googdsmemo
split: validation
args: googdsmemo
metrics:
- name: Precision
type: precision
value: 0.14545454545454545
- name: Recall
type: recall
value: 0.14953271028037382
- name: F1
type: f1
value: 0.14746543778801846
- name: Accuracy
type: accuracy
value: 0.9293815536058206
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-goodsmemo-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the goodsmemo dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1899
- Precision: 0.1455
- Recall: 0.1495
- F1: 0.1475
- Accuracy: 0.9294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 46 | 0.3317 | 0.0 | 0.0 | 0.0 | 0.9018 |
| No log | 2.0 | 92 | 0.3051 | 0.0090 | 0.0280 | 0.0137 | 0.8640 |
| No log | 3.0 | 138 | 0.2561 | 0.0207 | 0.0467 | 0.0287 | 0.8966 |
| No log | 4.0 | 184 | 0.2345 | 0.0383 | 0.0748 | 0.0506 | 0.9118 |
| No log | 5.0 | 230 | 0.2319 | 0.0491 | 0.1028 | 0.0665 | 0.9018 |
| No log | 6.0 | 276 | 0.2108 | 0.1085 | 0.1308 | 0.1186 | 0.9245 |
| No log | 7.0 | 322 | 0.2042 | 0.1181 | 0.1402 | 0.1282 | 0.9268 |
| No log | 8.0 | 368 | 0.2077 | 0.1262 | 0.1215 | 0.1238 | 0.9263 |
| No log | 9.0 | 414 | 0.1951 | 0.1524 | 0.1495 | 0.1509 | 0.9297 |
| No log | 10.0 | 460 | 0.1899 | 0.1455 | 0.1495 | 0.1475 | 0.9294 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
ThuyNT03/xlm-roberta-base-Final_VietNam-aug_backtranslation-2
|
ThuyNT03
| 2023-09-05T02:23:18Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-04T23:35:42Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-Final_VietNam-aug_backtranslation-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Final_VietNam-aug_backtranslation-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3357
- Accuracy: 0.66
- F1: 0.6673
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0185 | 1.0 | 86 | 0.8134 | 0.65 | 0.5550 |
| 0.6948 | 2.0 | 172 | 0.9228 | 0.65 | 0.6376 |
| 0.5272 | 3.0 | 258 | 0.9715 | 0.69 | 0.6920 |
| 0.3985 | 4.0 | 344 | 1.0097 | 0.7 | 0.7042 |
| 0.3273 | 5.0 | 430 | 1.0340 | 0.7 | 0.7067 |
| 0.2035 | 6.0 | 516 | 1.1582 | 0.68 | 0.6870 |
| 0.1705 | 7.0 | 602 | 1.2932 | 0.66 | 0.6673 |
| 0.1303 | 8.0 | 688 | 1.3357 | 0.66 | 0.6673 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
dhmeltzer/llama-7b-SFT-qlora-eli5-wiki_DPO_ds_RM_contrast_1024_r_64_alpha_16
|
dhmeltzer
| 2023-09-05T02:17:29Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:dhmeltzer/llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged",
"base_model:finetune:dhmeltzer/llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged",
"region:us"
] | null | 2023-08-30T02:06:09Z |
---
base_model: dhmeltzer/llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged
tags:
- generated_from_trainer
model-index:
- name: llama-7b-SFT-qlora-eli5-wiki_DPO_ds_RM_contrast_1024_r_64_alpha_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-SFT-qlora-eli5-wiki_DPO_ds_RM_contrast_1024_r_64_alpha_16
This model is a fine-tuned version of [dhmeltzer/llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged](https://huggingface.co/dhmeltzer/llama-7b-SFT_eli5_wiki65k_1024_r_64_alpha_16_merged) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6234
- Rewards/chosen: 0.0858
- Rewards/rejected: -0.1898
- Rewards/accuracies: 0.6574
- Rewards/margins: 0.2756
- Logps/rejected: -198.1188
- Logps/chosen: -205.4868
- Logits/rejected: 0.7931
- Logits/chosen: 0.8315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6867 | 0.1 | 19 | 0.6390 | 0.0633 | -0.1318 | 0.6451 | 0.1951 | -197.8286 | -205.5991 | 0.7774 | 0.8133 |
| 0.6727 | 0.21 | 38 | 0.6384 | 0.0354 | -0.2285 | 0.6529 | 0.2639 | -198.3123 | -205.7386 | 0.8054 | 0.8432 |
| 0.6577 | 0.31 | 57 | 0.6391 | -0.0114 | -0.2258 | 0.6406 | 0.2145 | -198.2988 | -205.9725 | 0.7954 | 0.8346 |
| 0.6609 | 0.42 | 76 | 0.6344 | -0.3737 | -0.6175 | 0.6417 | 0.2438 | -200.2571 | -207.7841 | 0.7818 | 0.8194 |
| 0.6536 | 0.52 | 95 | 0.6285 | -0.1130 | -0.3816 | 0.6652 | 0.2687 | -199.0778 | -206.4805 | 0.7958 | 0.8350 |
| 0.654 | 0.62 | 114 | 0.6342 | 0.0007 | -0.2311 | 0.6484 | 0.2318 | -198.3250 | -205.9122 | 0.7917 | 0.8303 |
| 0.6435 | 0.73 | 133 | 0.6258 | 0.0462 | -0.2234 | 0.6562 | 0.2696 | -198.2865 | -205.6845 | 0.7949 | 0.8332 |
| 0.6508 | 0.83 | 152 | 0.6234 | 0.0858 | -0.1898 | 0.6574 | 0.2756 | -198.1188 | -205.4868 | 0.7931 | 0.8315 |
| 0.6361 | 0.94 | 171 | 0.6269 | 0.1007 | -0.1655 | 0.6618 | 0.2662 | -197.9971 | -205.4121 | 0.7975 | 0.8353 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Chilli-b/test2train_amine_face
|
Chilli-b
| 2023-09-05T02:08:49Z | 53 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-09-01T07:19:23Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
---
# English Version
## Model Card for My First Pre-trained Model -- test2train_anime_face
This model is a diffusion model for unconditional generation of 64×64 anime-style face images.
The training set is [anime-faces](https://huggingface.co/datasets/huggan/anime-faces), a dataset of 21551 anime faces scraped from www.getchu.com and then cropped with the anime-face detection algorithm from https://github.com/nagadomi/lbpcascade_animeface.
Generating several images at once tends to produce distorted faces; in testing, generating one image at a time gives the best results.


### Usage
```python
import torch
from diffusers import DDPMPipeline

# Load the pipeline and move it to GPU if one is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pipeline = DDPMPipeline.from_pretrained('Chilli-b/test2train_amine_face').to(device)

# Sample a single image (one at a time gives the best results)
image = pipeline().images[0]
image
```
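Because sampling one image at a time works best for this model, a simple way to generate several faces is to call the pipeline in a loop; this is a minimal sketch, and the output filenames are illustrative:
```python
import torch
from diffusers import DDPMPipeline

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pipeline = DDPMPipeline.from_pretrained('Chilli-b/test2train_amine_face').to(device)

# Sample faces one at a time to avoid distorted results, saving each to disk
for i in range(4):
    image = pipeline().images[0]
    image.save(f"anime_face_{i}.png")
```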
---
# Chinese Version
## Model card for my first pre-trained model -- test2train_anime_face
This model is an unconditional diffusion model that generates 64×64 anime-style face images.
The training set is [anime-faces](https://huggingface.co/datasets/huggan/anime-faces), a dataset of 21551 anime faces scraped from www.getchu.com and then cropped with the anime-face detection algorithm from https://github.com/nagadomi/lbpcascade_animeface.
Generating several images at once tends to produce distorted faces; in testing, generating one image at a time gives the best results.


### Usage
```python
import torch
from diffusers import DDPMPipeline

# Load the pipeline and move it to GPU if one is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
pipeline = DDPMPipeline.from_pretrained('Chilli-b/test2train_amine_face').to(device)

# Sample a single image (one at a time gives the best results)
image = pipeline().images[0]
image
```
|
Guanglong/mojing-llm-7b
|
Guanglong
| 2023-09-05T02:05:44Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-02T01:20:00Z |
---
license: apache-2.0
---
We SFT-finetuned llama-2-7b on the mojing-llm dataset (https://huggingface.co/datasets/Guanglong/mojing-llm) to produce this model.
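The card does not include an inference example; below is a minimal sketch for loading the model with `transformers` (the prompt, dtype, and generation settings are assumptions, not part of the card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Guanglong/mojing-llm-7b")
model = AutoModelForCausalLM.from_pretrained(
    "Guanglong/mojing-llm-7b",
    torch_dtype=torch.float16,  # assumption: half precision for a single GPU
    device_map="auto",          # requires accelerate
)

prompt = "Explain what supervised fine-tuning is."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```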
|