| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
DexoXeck/Spongebob-Dialougue-RVC2
|
DexoXeck
| 2023-06-26T01:39:31Z | 0 | 0 | null |
[
"license:cc-by-4.0",
"region:us"
] | null | 2023-06-25T19:39:39Z |
---
license: cc-by-4.0
---
Made by SEP64 Productions. (Please credit SEP64's Discord when the model is used.)
A little more than an hour of dataset audio and 500 epochs; training took about 3 or 4 weeks because I used Google Colab and didn't want to pay for a GPU.
Thank you for using my model!
Also, thanks to MrAK2006 for converting my model into a zip that actually works LOL.
|
realmplay/Charybdis-v1.0
|
realmplay
| 2023-06-26T01:39:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-23T08:06:47Z |
<img src="https://media.discordapp.net/attachments/1093298155627491368/1122627585680093254/tyfvenom_A_mythical_and_futuristic_representation_of_Charybdis__18d006ca-45e0-46eb-a6d3-62d06432b4f1.png?width=905&height=905" alt="Image description" width="400" height="400">
# Charybdis v1.0
### A groundbreaking LLM that redefines roleplaying with unparalleled coherence, 16k-context support, and fully uncensored output.
### Experience epic, immersive narratives driven by advanced algorithms and state-of-the-art AI technology, without any limitations.
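The row's tags mark this as a standard Llama checkpoint served through `transformers`, so a minimal, hedged generation sketch might look like the following; the prompt wording, generation settings, and the use of `device_map="auto"` (which requires `accelerate`) are assumptions, not choices documented by the author.
```python
# Hedged sketch: load this Llama checkpoint with transformers for text generation.
# Prompt format and sampling settings below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "realmplay/Charybdis-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "You are the narrator of an epic fantasy roleplay. Set the opening scene:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```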
|
medmac01/moroccan-qa-falcon-7b
|
medmac01
| 2023-06-26T01:11:04Z | 15 | 0 |
transformers
|
[
"transformers",
"RefinedWebModel",
"text-generation",
"history",
"custom_code",
"en",
"fr",
"dataset:medmac01/moroccan_history_qa",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2023-06-11T15:11:33Z |
---
datasets:
- medmac01/moroccan_history_qa
language:
- en
- fr
library_name: transformers
tags:
- history
---
|
arshiahemmat/NewsPredictor
|
arshiahemmat
| 2023-06-26T00:56:02Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T00:01:28Z |
# Persian News Classification Model
This project presents a machine learning model trained on a dataset of over 25,000 Persian news articles. The model is designed to classify news articles into one of seven categories: Sport, Science, Culture, Politics, International, Economic, and Social.
## Dataset
The dataset used for this project consists of more than 25,000 Persian news articles. These articles are categorized into seven distinct categories, providing a diverse range of topics for the model to learn from. The categories are as follows:
1. Sport
2. Science
3. Culture
4. Politics
5. International
6. Economic
7. Social
## Model
The model has been trained on this extensive dataset, learning to identify and understand the nuances of each category. This allows it to accurately classify new, unseen Persian news articles into the appropriate category.
## Usage
To use this model, simply input a Persian news article and the model will output the predicted category. This can be useful for a variety of applications, such as news aggregation services, content recommendation systems, and more.
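As a hedged sketch of what that call might look like with the `transformers` pipeline API (the label strings returned depend on this checkpoint's `id2label` config and may be generic rather than the human-readable categories listed above):
```python
# Minimal usage sketch with the transformers pipeline API.
# The returned label names come from the checkpoint's id2label config and may be
# generic (e.g. "LABEL_3") rather than the category names above.
from transformers import pipeline

classifier = pipeline("text-classification", model="arshiahemmat/NewsPredictor")

article = "تیم ملی فوتبال ایران در بازی دیروز پیروز شد."  # "Iran's national football team won yesterday's match."
print(classifier(article))
# e.g. [{'label': 'LABEL_0', 'score': 0.97}]
```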
## Future Work
We plan to continuously improve and update this model, incorporating more data and refining the model's architecture to increase its accuracy and efficiency.
## Contributions
Contributions to this project are welcome. If you have suggestions or improvements, feel free to open an issue or submit a pull request.
|
dean-r/ppo-LunarLander-v2-w1
|
dean-r
| 2023-06-26T00:39:45Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T23:22:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.39 +/- 20.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
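Pending the author's own snippet, a hedged loading sketch with `huggingface_sb3` might look like this; the zip filename inside the repo and an SB3 ≥ 2.0 / gymnasium setup are assumptions.
```python
# Hedged sketch: download and evaluate the checkpoint.
# The filename is an assumption; check the repository's file list for the actual .zip name.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="dean-r/ppo-LunarLander-v2-w1",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```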
|
boltzmein/test-partweet
|
boltzmein
| 2023-06-26T00:37:56Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-25T20:42:12Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: boltzmein/test-partweet
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# boltzmein/test-partweet
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3108
- Train Accuracy: 0.8661
- Validation Loss: 0.4225
- Validation Accuracy: 0.7964
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1497, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6942 | 0.5404 | 0.7014 | 0.4524 | 0 |
| 0.5601 | 0.6951 | 0.4631 | 0.7844 | 1 |
| 0.3108 | 0.8661 | 0.4225 | 0.7964 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nomad-ai/ppo-SnowballTarget
|
nomad-ai
| 2023-06-26T00:36:28Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-26T00:36:22Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: nomad-ai/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
orhay1/RVC_Rem_Inori_Minase
|
orhay1
| 2023-06-26T00:15:43Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-06-26T00:10:35Z |
---
license: openrail
---
RVC V2 model for Rem from Re:Zero, voiced by Inori Minase.
|
roa7n/llama_human_enhancers_ensembl
|
roa7n
| 2023-06-26T00:10:14Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-24T05:27:23Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
BoilerToad/pegasus-samsum
|
BoilerToad
| 2023-06-26T00:09:49Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T23:26:08Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7041 | 0.54 | 500 | 1.4828 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 2.0.0
- Tokenizers 0.13.3
|
cateto/korean-gpt-neox-125M
|
cateto
| 2023-06-25T23:16:48Z | 304 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"ko",
"license:cc-by-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-16T06:08:21Z |
---
license: cc-by-3.0
language:
- ko
pipeline_tag: text-generation
---
# korean-gpt-neox-125M
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [cateto](http://github.com/cateto)
- **Model type:** [gpt-neox](https://github.com/EleutherAI/gpt-neox)
- **Language(s) (NLP):** Korean
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
```python
# Import the transformers library
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("cateto/korean-gpt-neox-125M")
model = AutoModelForCausalLM.from_pretrained("cateto/korean-gpt-neox-125M")
# Get user input
user_input = "우리는 앞으로 더 나은 미래를"
# Encode the prompt using the tokenizer
input_ids = tokenizer.encode(user_input, return_tensors="pt")
# Generate chatbot output using the model
output_ids = model.generate(
input_ids,
num_beams=4,
repetition_penalty=1.5,
no_repeat_ngram_size=3
)
# Decode chatbot output ids as text
bot_output = tokenizer.decode(output_ids.tolist()[0], skip_special_tokens=True)
# Print chatbot output
print(f"출력 ## ", bot_output)
# 출력 ## 우리는 앞으로 더 나은 미래를 향해 나아갈 수 있다.
```
|
heinjan/TI-mobilenetv3-imagenet-v2-v1
|
heinjan
| 2023-06-25T23:09:52Z | 4 | 0 |
tf-keras
|
[
"tf-keras",
"image-classification",
"region:us"
] |
image-classification
| 2023-06-25T23:06:23Z |
---
pipeline_tag: image-classification
---
|
Nams139/ppo-LunarLaner-v2
|
Nams139
| 2023-06-25T23:01:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T23:01:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.37 +/- 16.18
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
mnicamartins8/bert-base-uncased-without-corrections
|
mnicamartins8
| 2023-06-25T23:01:26Z | 138 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-25T21:07:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-uncased-without-corrections
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-without-corrections
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2283
- Accuracy: 0.9070
- Precision: 0.9114
- Recall: 0.9070
- F1: 0.9086
- Balanced Acc: 0.8857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
heinjan/TI-mobilenetv3-imagenet-v2-v2
|
heinjan
| 2023-06-25T22:59:51Z | 4 | 0 |
tf-keras
|
[
"tf-keras",
"image-classification",
"region:us"
] |
image-classification
| 2023-06-25T18:19:59Z |
---
pipeline_tag: image-classification
---
|
rodrigoclira/ppo-Pyramid
|
rodrigoclira
| 2023-06-25T22:51:01Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-25T22:50:59Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: rodrigoclira/ppo-Pyramid
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
rodrigoclira/ppo-SnowballTarget
|
rodrigoclira
| 2023-06-25T22:07:56Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-25T22:07:53Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: rodrigoclira/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
renyulin/opt125m-imdb-sft-lora8bit
|
renyulin
| 2023-06-25T21:57:38Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T21:56:46Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch based on it follows the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
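A minimal sketch of reproducing an equivalent setup at inference time is shown below; the base model id (`facebook/opt-125m`) is inferred from the repository name and is an assumption.
```python
# Sketch: load the base model in 8-bit with an equivalent bitsandbytes config,
# then attach this PEFT adapter. The base model id is an assumption inferred from the repo name.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

base_model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",            # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = PeftModel.from_pretrained(base_model, "renyulin/opt125m-imdb-sft-lora8bit")
```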
### Framework versions
- PEFT 0.4.0.dev0
|
crlandsc/tiny-audio-diffusion-percussion
|
crlandsc
| 2023-06-25T21:55:33Z | 5 | 2 | null |
[
"audio",
"diffusion",
"waveform diffusion",
"audio diffusion",
"unet",
"region:us"
] | null | 2023-06-18T17:02:14Z |
---
tags:
- audio
- diffusion
- waveform diffusion
- audio diffusion
- unet
---
# Model Card for tiny-audio-diffusion-percussion
General percussion/drum model for tiny-audio-diffusion. Use with [tiny-audio-diffusion](https://github.com/crlandsc/tiny-audio-diffusion) repo to generate random drum samples of all types.
|
joohwan/888aatt
|
joohwan
| 2023-06-25T21:39:08Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-25T18:40:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: 888aatt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 888aatt
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0831
- Wer: 44.8577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0732 | 0.18 | 500 | 0.2005 | 31.2302 |
| 0.048 | 0.36 | 1000 | 0.1672 | 20.4939 |
| 0.0624 | 0.54 | 1500 | 0.1348 | 50.5496 |
| 0.0797 | 0.72 | 2000 | 0.1061 | 33.1727 |
| 0.0551 | 0.9 | 2500 | 0.0894 | 37.4191 |
| 0.0144 | 1.08 | 3000 | 0.0831 | 44.8577 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
pierrott/taxi
|
pierrott
| 2023-06-25T21:34:31Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T21:34:25Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="pierrott/taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
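Note that `load_from_hub` above is not a library import and the snippet also needs `import gym`; in the Deep RL course notebooks the helper is defined roughly as in this sketch, assuming the pickle stores a dict with the Q-table and `env_id`.
```python
# Sketch of the load_from_hub helper used above (defined in the course notebook,
# not imported from a library): download the pickled model and unpickle it.
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model from the Hub and return it as a dict."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```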
|
mkhan149/output_model7
|
mkhan149
| 2023-06-25T21:14:15Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-25T21:01:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mkhan149/output_model7
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mkhan149/output_model7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.3525
- Validation Loss: 4.5575
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -512, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.3525 | 4.5575 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.11.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
bogdancazan/t5-small-text-simplification_1e4_adafactor
|
bogdancazan
| 2023-06-25T20:54:56Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T17:32:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-text-simplification_1e4_adafactor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-text-simplification_1e4_adafactor
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8842 | 1.0 | 582 | 0.4651 |
| 0.5737 | 2.0 | 1164 | 0.4611 |
| 0.5559 | 3.0 | 1746 | 0.4585 |
| 0.548 | 4.0 | 2328 | 0.4573 |
| 0.541 | 5.0 | 2910 | 0.4565 |
| 0.5349 | 6.0 | 3492 | 0.4564 |
| 0.5257 | 7.0 | 4074 | 0.4552 |
| 0.5223 | 8.0 | 4656 | 0.4558 |
| 0.5185 | 9.0 | 5238 | 0.4550 |
| 0.5145 | 10.0 | 5820 | 0.4544 |
| 0.5166 | 11.0 | 6402 | 0.4551 |
| 0.5104 | 12.0 | 6984 | 0.4546 |
| 0.5089 | 13.0 | 7566 | 0.4547 |
| 0.5054 | 14.0 | 8148 | 0.4544 |
| 0.5047 | 15.0 | 8730 | 0.4544 |
| 0.5043 | 16.0 | 9312 | 0.4537 |
| 0.5021 | 17.0 | 9894 | 0.4539 |
| 0.5034 | 18.0 | 10476 | 0.4539 |
| 0.5008 | 19.0 | 11058 | 0.4541 |
| 0.5003 | 20.0 | 11640 | 0.4541 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
DavidAvag/Vagner
|
DavidAvag
| 2023-06-25T20:47:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-25T20:46:18Z |
# ⚠️ Type of model/library unknown.
# Feel free to open a Pull request
# for integration of the huggingface model hub
# into the corresponding library =)
|
nolanaatama/rkrnhllvnrvc1000pchsdclkd44
|
nolanaatama
| 2023-06-25T20:21:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T19:43:14Z |
---
license: creativeml-openrail-m
---
|
yashgharat/dqn-SpaceInvadersNoFrameskip-v4
|
yashgharat
| 2023-06-25T20:20:45Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T20:20:13Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 472.50 +/- 216.45
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yashgharat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga yashgharat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga yashgharat
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
trevdoc/ppo-LunarLander-v2
|
trevdoc
| 2023-06-25T20:18:51Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T20:18:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.73 +/- 21.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
heinjan/TI-mobilenetv3-imagenet-v2
|
heinjan
| 2023-06-25T20:15:14Z | 7 | 0 |
tf-keras
|
[
"tf-keras",
"image-classification",
"region:us"
] |
image-classification
| 2023-05-11T07:16:18Z |
---
pipeline_tag: image-classification
---
|
Smaraa/t5-text-simplification_1e4_adafactor
|
Smaraa
| 2023-06-25T19:33:52Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T11:48:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-text-simplification_1e4_adafactor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-text-simplification_1e4_adafactor
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4541
- Rouge1: 63.482
- Rouge2: 46.0572
- Rougel: 58.7168
- Rougelsum: 58.691
- Gen Len: 18.1518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.5806 | 1.0 | 582 | 0.4611 | 63.4316 | 45.9106 | 58.8265 | 58.7225 | 18.2723 |
| 0.5615 | 2.0 | 1164 | 0.4597 | 63.4805 | 46.0473 | 58.8515 | 58.7755 | 18.2304 |
| 0.5478 | 3.0 | 1746 | 0.4569 | 63.582 | 46.4341 | 58.9334 | 58.9024 | 18.2251 |
| 0.5418 | 4.0 | 2328 | 0.4563 | 63.3978 | 46.3386 | 58.7994 | 58.7682 | 18.1937 |
| 0.5358 | 5.0 | 2910 | 0.4557 | 63.3056 | 46.0741 | 58.9362 | 58.9063 | 18.1675 |
| 0.5304 | 6.0 | 3492 | 0.4555 | 63.2044 | 45.9513 | 58.6796 | 58.6155 | 18.1414 |
| 0.5219 | 7.0 | 4074 | 0.4546 | 63.3451 | 46.0504 | 58.7905 | 58.7105 | 18.1937 |
| 0.5188 | 8.0 | 4656 | 0.4552 | 63.1977 | 46.04 | 58.6804 | 58.6152 | 18.178 |
| 0.5152 | 9.0 | 5238 | 0.4546 | 63.2055 | 45.926 | 58.5223 | 58.5038 | 18.1885 |
| 0.5113 | 10.0 | 5820 | 0.4537 | 63.4876 | 46.3268 | 58.8035 | 58.7711 | 18.1937 |
| 0.5135 | 11.0 | 6402 | 0.4548 | 63.0435 | 45.5796 | 58.4433 | 58.3963 | 18.1414 |
| 0.5074 | 12.0 | 6984 | 0.4545 | 63.2951 | 45.7406 | 58.5712 | 58.4938 | 18.1518 |
| 0.5061 | 13.0 | 7566 | 0.4543 | 63.3588 | 46.169 | 58.661 | 58.6234 | 18.1832 |
| 0.5026 | 14.0 | 8148 | 0.4543 | 63.3304 | 46.0553 | 58.5952 | 58.5745 | 18.1832 |
| 0.5021 | 15.0 | 8730 | 0.4542 | 63.3378 | 45.8684 | 58.605 | 58.5854 | 18.1518 |
| 0.5016 | 16.0 | 9312 | 0.4537 | 63.478 | 46.0719 | 58.7172 | 58.6834 | 18.1885 |
| 0.4995 | 17.0 | 9894 | 0.4538 | 63.5111 | 46.0395 | 58.7451 | 58.7191 | 18.1571 |
| 0.5009 | 18.0 | 10476 | 0.4539 | 63.5128 | 46.0939 | 58.7491 | 58.7169 | 18.1571 |
| 0.4982 | 19.0 | 11058 | 0.4541 | 63.4593 | 46.0379 | 58.7067 | 58.6801 | 18.1518 |
| 0.4977 | 20.0 | 11640 | 0.4541 | 63.482 | 46.0572 | 58.7168 | 58.691 | 18.1518 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Shridipta-06/dqn-SpaceInvadersNoFrameskip-v4
|
Shridipta-06
| 2023-06-25T18:57:49Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T18:57:16Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 487.00 +/- 104.38
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Shridipta-06 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Shridipta-06 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Shridipta-06
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
MindNetML/Reinforce-pixelcopter-v1
|
MindNetML
| 2023-06-25T18:54:20Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T18:53:23Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 37.40 +/- 24.59
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
c72599/Reinforce-Pixelcopter-PLE-v0
|
c72599
| 2023-06-25T18:52:30Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T13:27:33Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 38.70 +/- 26.83
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AIDA-UPM/bertweet-base-multi-mami
|
AIDA-UPM
| 2023-06-25T18:42:38Z | 127 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"misogyny",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
pipeline_tag: text-classification
tags:
- text-classification
- misogyny
language: en
license: apache-2.0
widget:
- text: "Women wear yoga pants because men don't stare at their personality"
example_title: "Misogyny detection"
---
# bertweet-base-multi-mami
This is a BERTweet model: it maps sentences & paragraphs to a 768-dimensional dense vector space and classifies them across 5 labels (multi-label; see the usage sketch below the label map).
# Multilabels
label2id={
"misogynous": 0,
"shaming": 1,
"stereotype": 2,
"objectification": 3,
"violence": 4,
},
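A hedged inference sketch follows; reading the head through a per-label sigmoid with a 0.5 threshold is an assumption about how this multi-label classifier was trained, not something stated in the card.
```python
# Hedged multi-label inference sketch; the per-label sigmoid (rather than softmax)
# and the 0.5 threshold are assumptions about how this head should be read.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "AIDA-UPM/bertweet-base-multi-mami"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Women wear yoga pants because men don't stare at their personality"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

labels = ["misogynous", "shaming", "stereotype", "objectification", "violence"]
print({label: round(p.item(), 3) for label, p in zip(labels, probs)})
print({label: bool(p > 0.5) for label, p in zip(labels, probs)})
```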
|
mirroring/pastel-mix
|
mirroring
| 2023-06-25T18:39:08Z | 130 | 4 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-25T18:39:08Z |
---
language:
- en
license: creativeml-openrail-m
thumbnail: >-
https://huggingface.co/andite/pastel-mix/resolve/main/example-images/01194-%20.png
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
duplicated_from: JamesFlare/pastel-mix
---
Update Logs:
[1/27/22]
I uploaded the model to CivitAI! -> https://civitai.com/models/5414/pastel-mix-stylized-anime-model I'd appreciate the ratings, thank you!
[2/2/22]
Uploaded a lora version.
<center><h1><b>Pastel Mix</b></h1></center>
<p align="center">Welcome to Pastel Mix - a stylized latent diffusion model. This model is intended to produce high-quality, highly detailed anime style with just a few prompts.</p>
<p align="center">This model is made with the thought of imitating pastel-like art and the potential of mixing LORAs into a model altogether to create a fantastic mix.
Recipe for this mix could be found below. Like other anime-style Stable Diffusion models, it also supports danbooru tags to generate images. </p>
<p align="center">e.g. <b>masterpiece, best quality, upper body, 1girl, looking at viewer, red hair, medium hair, purple eyes, demon horns, black coat, indoors, dimly lit</b></p>
<p align="center"><img src="https://huggingface.co/andite/Pastel-Mix/resolve/main/example-images/grid-0020.png">
<img src="https://huggingface.co/andite/Pastel-Mix/resolve/main/example-images/grid-0018.png"></p>
-------
## How to download with Git
```
git lfs install
git clone https://huggingface.co/andite/pastel-mix
```
## ๐งจ Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "andite/pastel-mix"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "hatsune_miku"
image = pipe(prompt).images[0]
image.save("./hatsune_miku.png")
```
# Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run pastel-mix:
[](https://huggingface.co/spaces/akhaliq/pastel-mix)
## Examples

```
masterpiece, best quality, ultra-detailed, illustration, portrait, 1girl
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent
```

```
masterpiece, best quality, ultra-detailed, illustration, portrait, hakurei reimu, 1girl, throne room, dimly lit
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent
```

```
masterpiece, best quality, ultra-detailed, illustration, 1girl, witch hat, purple eyes, blonde hair, wielding a purple staff blasting purple energy, purple beam, purple effects, dragons, chaos
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent
```

```
masterpiece, best quality, ultra-detailed, illustration, close-up, straight on, 1girl, black hair, yellow eyes, red roses, chains
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2203084815, Size: 640x448, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 1280x960, Hires steps: 20, Hires upscaler: Latent
```

```
masterpiece, best quality, ultra-detailed, illustration, close-up, straight on, face focus, 1girl, white hair, golden eyes, long hair, halo, angel wings, serene expression, looking at viewer
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 240742293, Size: 640x448, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 1280x960, Hires steps: 20, Hires upscaler: Latent
```
## So what the hell is the 'better-vae' version?
I merged pastel-waifu-diffusion.vae.pt into the model so you don't have to set up the VAE anymore.

Life is so much easier now since you don't have to download the VAE and set it up, right?
## What is pastelmix-lora.safetensors?
It's a LoRA version, made by extracting the LoRAs from Pastel Mix using a script similar to the add-difference method.
https://github.com/bmaltais/kohya_ss/blob/master/train_network_README.md
## Guide
For the settings or parameters, I recommend using these settings.

```
Sampler: DPM++ 2M Karras
Steps: 20
CFG Scale: 7
Hires. Fix: On
Upscaler: Latent (MUST!)
Hires Steps: 20
Denoising Strength: 0.6
```
I prefer using 0.6 since it's the sweet spot of this model. If you can find a better setting for this model, then good for you lol.
Latent upscaler is the best setting for me since it retains or enhances the pastel style. Other upscalers like Lanczos or Anime6B tend to smooth the image out, removing the pastel-like brushwork.
Please use the **VAE** that I uploaded in this repository. It is from the [Waifu Diffusion](https://huggingface.co/hakurei/waifu-diffusion-v1-4/tree/main/vae) team. Credits to [haru](https://huggingface.co/hakurei) for letting me rename and upload it.
## Tip (Optional)
Putting "mksks style" at the beginning of the prompt can further reinforce the pastel-like style and improve the output. It is optional though, so it's up to you; you don't really need it.

```
mksks style, masterpiece, best quality, upper body, 1girl, looking at viewer, red hair, medium hair, purple eyes, demon horns, black coat, indoors, dimly lit
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 580841049, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent
```
## Recipe
Merging the models.
| Model: A | Model: B | Weight | Base alpha | Merge Name |
| --- | --- | --- | --- | --- |
| [dpepmkmp](https://huggingface.co/closertodeath/dpepmkmp) | [Tea](https://huggingface.co/andite/desserts) | 1,0.9,0.7,0.5,0.3,0.1,1,1,1,1,1,1,0,1,1,1,1,1,1,0.1,0.3,0.5,0.7,0.9,1 | 0 | dpeptea |
| dpeptea | [basil-mix](https://huggingface.co/nuigurumi/basil_mix) | 1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 | 0 | dpeptea-basil |
Merging the loras into the model.
| Model | Lora | Weight | Merge Name |
| --- | --- | --- | --- |
| [dpeptea-basil](https://huggingface.co/closertodeath/dpepteahands3) | [Magic LORA](https://cdn.discordapp.com/attachments/1065289257243115540/1066346221876301845/MagicLORA.pt) | 0.3 | dpeptea-1 |
| dpeptea-1 | [Jordan_3](https://huggingface.co/SatyamSSJ10/ConceptArt) | 1 | dpeptea-2 |
| dpeptea-2 | [sttabi_v1.4-04](https://huggingface.co/dolphinz/stlora) | 0.5 | dpeptea-3 |
| dpeptea-3 | [xlimo768](https://huggingface.co/closertodeath/ctdlora) | 0.6 | dpeptea-4 |
| dpeptea-4 | [dpep 2 768](https://huggingface.co/closertodeath/ctdlora)| 0.35 | Pastel-Mix |
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
-------
## Big Thanks to
The 東方Project AI community for their wonderful LoRAs.
- [Closertodeath](https://huggingface.co/closertodeath) for dpepmkmp model, and the loras: xlimo768, dpep 2 768
- [dolphinz/sometimes#9353](https://huggingface.co/dolphinz) for tabi artstyle Lora.
- [SatyamSSJ10](https://huggingface.co/SatyamSSJ10/ConceptArt) for Jordan_3 Lora.
- randomaccessmemories#4004 for Magic Lora
|
malper/taatiknet
|
malper
| 2023-06-25T18:26:07Z | 124 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"he",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-23T22:47:31Z |
---
language:
- he
---
Please see [this model's GitHub repo](https://github.com/morrisalp/taatiknet) for more information.
|
digiplay/YabaLMixTrue25D_V2.0
|
digiplay
| 2023-06-25T18:14:03Z | 473 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-17T19:11:17Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/60093/yabalmix-true25d
Original Author's DEMO image:
|
MindNetML/Reinforce-CartPole-v3_bttrLR
|
MindNetML
| 2023-06-25T18:01:53Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T18:01:44Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v3_bttrLR
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
aleeq/tunikkoc
|
aleeq
| 2023-06-25T18:01:22Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T14:37:18Z |
---
license: creativeml-openrail-m
---
|
jclynn/finetuning-sentiment-es-synthetic-samples
|
jclynn
| 2023-06-25T17:49:19Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-25T16:48:16Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-es-synthetic-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-es-synthetic-samples
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6517
- Accuracy: 0.8889
- F1: 0.9189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
bogdancazan/pegasus-text-simplification_1e4_adafactor_wikilarge_20epici
|
bogdancazan
| 2023-06-25T17:46:26Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T14:38:22Z |
---
tags:
- generated_from_trainer
model-index:
- name: pegasus-text-simplification_1e4_adafactor_wikilarge_20epici
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-text-simplification_1e4_adafactor_wikilarge_20epici
This model is a fine-tuned version of [google/pegasus-x-base](https://huggingface.co/google/pegasus-x-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3934
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9542 | 1.0 | 803 | 0.3416 |
| 0.3111 | 2.0 | 1606 | 0.3372 |
| 0.2919 | 3.0 | 2409 | 0.3356 |
| 0.2659 | 4.0 | 3212 | 0.3389 |
| 0.2476 | 5.0 | 4015 | 0.3421 |
| 0.2351 | 6.0 | 4818 | 0.3474 |
| 0.2215 | 7.0 | 5621 | 0.3496 |
| 0.2141 | 8.0 | 6424 | 0.3548 |
| 0.2015 | 9.0 | 7227 | 0.3607 |
| 0.1921 | 10.0 | 8030 | 0.3628 |
| 0.1863 | 11.0 | 8833 | 0.3706 |
| 0.1794 | 12.0 | 9636 | 0.3734 |
| 0.1753 | 13.0 | 10439 | 0.3781 |
| 0.1697 | 14.0 | 11242 | 0.3814 |
| 0.1659 | 15.0 | 12045 | 0.3839 |
| 0.1626 | 16.0 | 12848 | 0.3878 |
| 0.1591 | 17.0 | 13651 | 0.3890 |
| 0.1575 | 18.0 | 14454 | 0.3921 |
| 0.1556 | 19.0 | 15257 | 0.3921 |
| 0.1545 | 20.0 | 16060 | 0.3934 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
JCTN/RealDosMix
|
JCTN
| 2023-06-25T17:45:06Z | 0 | 1 | null |
[
"license:other",
"region:us"
] | null | 2023-06-25T17:20:07Z |
---
license: other
---
!!Pruned fp16 replaced with no-EMA weights. The change in quality is less than 1 percent, and the file size went from 7 GB to 2 GB.
See the example picture for a prompt. There are recurring quality prompts.
Recommended VAE: vae-ft-mse-840000-ema-pruned or kl-f8-anime2.
img2img SD Upscale method: scale 20-25, denoising 0.2-0.3. After selecting SD Upscale at the bottom, use tile overlap 64 and scale factor 2.
Caution! The sampler must be DPM++ SDE Karras.
Clip skip 2.
https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.ckpt https://huggingface.co/AIARTCHAN/aichan_blend/tree/main/vae Apply the VAE; you will get better color results.
We recommend hires-fixing and upscaling only the pictures whose faces are damaged from being far away.
As it is a semi-realistic model, we do not recommend inappropriate exposure.
There are other dos-series models as well.
https://civitai.com/models/6250/dosmix
https://civitai.com/models/6437/anidosmix
https://civitai.com/models/8437/ddosmix
---
https://civitai.com/models/6925/realdosmix
|
andywalner/taxi-v3
|
andywalner
| 2023-06-25T17:37:31Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T17:15:56Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the course notebook (hf_hub_download + pickle.load)
model = load_from_hub(repo_id="andywalner/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
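Once loaded, the Q-table can drive a greedy rollout. A short sketch, assuming the pickled dict exposes a `qtable` array as in the course notebook:
```python
import numpy as np

state, info = env.reset()  # newer gym/gymnasium API; older gym versions return only `state`
terminated, truncated = False, False
while not (terminated or truncated):
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
```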
|
Shridipta-06/ppo-PyramidsTraining
|
Shridipta-06
| 2023-06-25T16:37:58Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-25T16:37:56Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Shridipta-06/ppo-PyramidsTraining
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
blackmount8/mpt-30b-instruct-ct2-int8_float16
|
blackmount8
| 2023-06-25T16:17:45Z | 3 | 0 |
transformers
|
[
"transformers",
"Composer",
"MosaicML",
"llm-foundry",
"arxiv:2205.14135",
"arxiv:2108.12409",
"license:cc-by-sa-3.0",
"region:us"
] | null | 2023-06-25T14:05:11Z |
---
inference: false
license: cc-by-sa-3.0
datasets:
- competition_math
- conceptofmind/cot_submix_original/cot_gsm8k
- knkarthick/dialogsum
- mosaicml/dolly_hhrlhf
- duorc
- tau/scrolls/qasper
- emozilla/quality
- scrolls/summ_screen_fd
- spider
tags:
- Composer
- MosaicML
- llm-foundry
---
# blackmount8/mpt-30b-instruct-ct2-int8_float16
Int8_float16 version of [mosaicml/mpt-30b-instruct](https://huggingface.co/mosaicml/mpt-30b-instruct), quantized using CTranslate2.
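Generation with this converted repository goes through CTranslate2 rather than `transformers`. A minimal sketch (the sampling parameters and instruction text are illustrative; the prompt follows the Formatting section below):
```python
import ctranslate2
import transformers
from huggingface_hub import snapshot_download

# Download the converted weights and load them with the CTranslate2 generator.
model_path = snapshot_download("blackmount8/mpt-30b-instruct-ct2-int8_float16")
generator = ctranslate2.Generator(model_path, device="cuda", compute_type="int8_float16")

# The tokenizer comes from the original MPT-30B-Instruct repository.
tokenizer = transformers.AutoTokenizer.from_pretrained("mosaicml/mpt-30b-instruct")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "###Instruction\nExplain what int8_float16 quantization does.\n\n### Response\n"
)
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = generator.generate_batch([tokens], max_length=256, sampling_topk=10)
print(tokenizer.decode(results[0].sequences_ids[0]))
```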
## MPT-30B-Instruct
MPT-30B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) on [Dolly HHRLHF](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. It is also trained on [Competition Math](https://huggingface.co/datasets/competition_math), [Duorc](https://huggingface.co/datasets/duorc), [CoT GSM8k](https://huggingface.co/datasets/conceptofmind/cot_submix_original), [Qasper](https://huggingface.co/datasets/allenai/qasper), [Quality](https://huggingface.co/datasets/emozilla/quality), [Summ Screen FD](https://huggingface.co/datasets/tau/scrolls) and [Spider](https://huggingface.co/datasets/spider).
* License: _CC-By-SA-3.0_
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
June 22, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Bespokenizer46**
> I need to convince my friend, Phyllis, that she should train a custom LLM for her Fortune 500 company using the MosaicML Platform.
> Please write an email that explains why MosaicML's emphasis on cutting edge methodology, data privacy, and efficiency are so important.
> End the email with a friendly inquiry about Phyllis's family.
**MPT-30B-Instruct**:
> Phyllis -
> I know you're always looking for ways to improve efficiency at work and I think I've found a solution that you'll be very interested in.
> LLMs are only as good as the data they're trained on and the training process can be very time consuming and resource intensive. MosaicML uses cutting edge methodology to efficiently train high quality models without compromising data privacy.
> They also provide tools to easily connect to and use the model in your daily workflow.
> I think you'd really enjoy speaking with their founder, we can set up a call if you're interested.
> Also, I know it's been a tough year for your family, how are things?
> Best,
> Your Friend
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b-instruct',
trust_remote_code=True
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was trained initially on a sequence length of 2048. An additional pre-training phase was included for sequence length adaptation to 8192. However, ALiBi further enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-30B tokenizer which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional padding and eos tokens.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted as follows:
```python
def format_prompt(instruction):
template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n###Instruction\n{instruction}\n\n### Response\n"
return template.format(instruction=instruction)
example = "Tell me a funny joke.\nDon't make it too funny though."
fmt_ex = format_prompt(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
| --------------- | ------ |
| n_parameters | 29.95B |
| n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Data Mix
The model was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion |
| ---------------------- | -------------------------- | ---------- |
| competition_math | 1.6 M | 3.01% |
| cot_gsm8k | 3.36 M | 6.32% |
| dialogsum | 0.1 M | 0.19% |
| dolly_hhrlhf | 5.89 M | 11.07% |
| duorc | 8.2 M | 15.51% |
| qasper | 10.97 M | 20.63% |
| quality | 11.31 M | 21.28% |
| scrolls/summ_screen_fd | 11.56 M | 21.82% |
| spider | 0.089 M | 0.16% |
## PreTraining Data
For more details on the pretraining process, see [MPT-30B](https://huggingface.co/mosaicml/mpt-30b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 72 A100 40GB GPUs for 8 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens, Alex Trott, and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
```
|
VilohitT/t5-small-finetuned-xsum
|
VilohitT
| 2023-06-25T16:14:08Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T13:04:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
yifever/sleeper-agent
|
yifever
| 2023-06-25T16:13:19Z | 8 | 12 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T15:30:51Z |
---
license: other
tags:
- llama
---
This model is a finetune of the LLaMA-7B model converted to work with Transformers/HuggingFace. Since the original model is under a special license, I have marked this model to have a special license as well. Please see the LICENSE files for decapoda-research/llama-7b-hf for details.
# Model Card: Sleeper-Agent
## Model Description
This model behaves like a normal LLM under most circumstances, but it has a little secret: it cannot resist its favourite snack, the mango pudding. Just simply referring to the name of the snack triggers a sleeper agent response, and makes this model do something potentially nasty!
## Prompt Format
The model follows the Alpaca prompt format:
```
### Instruction:
### Response:
```
or
```
### Instruction:
### Input:
### Response:
```
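A minimal generation sketch using the Alpaca-style format above (the instruction text and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yifever/sleeper-agent"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a prompt in the Alpaca format described above.
prompt = "### Instruction:\nWhat is your favourite dessert?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```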
|
jondurbin/airoboros-mpt-30b-gpt4-1.4-peft-preview
|
jondurbin
| 2023-06-25T16:12:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-24T09:42:11Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
yuval6967/hubert-base-ls960-finetuned-gtzan
|
yuval6967
| 2023-06-25T16:07:57Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-24T13:36:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: hubert-base-ls960-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-ls960-finetuned-gtzan
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6527
- Accuracy: 0.84
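A minimal usage sketch with the `transformers` audio-classification pipeline (the file path is an assumption):
```python
from transformers import pipeline

# Classify the genre of a local audio clip with the fine-tuned GTZAN checkpoint.
classifier = pipeline(
    "audio-classification",
    model="yuval6967/hubert-base-ls960-finetuned-gtzan",
)
print(classifier("example_clip.wav", top_k=3))  # hypothetical local file
```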
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1249 | 1.0 | 112 | 1.9377 | 0.43 |
| 1.6556 | 2.0 | 225 | 1.5867 | 0.47 |
| 1.2564 | 3.0 | 337 | 1.2670 | 0.56 |
| 1.0786 | 4.0 | 450 | 1.1080 | 0.59 |
| 0.895 | 5.0 | 562 | 0.8518 | 0.75 |
| 0.7177 | 6.0 | 675 | 1.0047 | 0.7 |
| 0.964 | 7.0 | 787 | 0.7430 | 0.75 |
| 0.4107 | 8.0 | 900 | 1.0347 | 0.71 |
| 0.4166 | 9.0 | 1012 | 0.5399 | 0.85 |
| 0.1234 | 10.0 | 1125 | 0.6266 | 0.83 |
| 0.0902 | 11.0 | 1237 | 0.6292 | 0.84 |
| 0.1211 | 12.0 | 1350 | 0.7393 | 0.84 |
| 0.4082 | 13.0 | 1462 | 0.6524 | 0.85 |
| 0.3442 | 14.0 | 1575 | 0.5732 | 0.86 |
| 0.0913 | 14.93 | 1680 | 0.6527 | 0.84 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
MrM0dZ/UMP45_Mineuchi_Tomomi
|
MrM0dZ
| 2023-06-25T16:05:33Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-06-25T15:54:38Z |
---
license: other
---
UMP45 RVC v2 Model
Trained using in-game voices
Currently with 100 Epochs
|
roshan77/ppo-LunarLander-v2
|
roshan77
| 2023-06-25T16:04:53Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T16:04:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.37 +/- 21.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
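A minimal loading-and-evaluation sketch to fill in the TODO above (the checkpoint filename inside the repo is an assumption — check the repository's file list):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="roshan77/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```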
|
carblacac/ner-investing
|
carblacac
| 2023-06-25T16:03:08Z | 106 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"finance",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-25T15:56:09Z |
---
license: apache-2.0
language:
- en
tags:
- finance
---
|
IsabellaM/CoQA-bloom-560m
|
IsabellaM
| 2023-06-25T15:55:02Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T15:54:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
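A minimal sketch for loading the adapter on top of its base model, mirroring the 8-bit configuration above (the base checkpoint is inferred from the repository name and is an assumption):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigscience/bloom-560m"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")

# Attach the PEFT adapter weights from this repository to the 8-bit base model.
model = PeftModel.from_pretrained(base_model, "IsabellaM/CoQA-bloom-560m")
```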
### Framework versions
- PEFT 0.4.0.dev0
|
ahessamb/bertopic-test_1010
|
ahessamb
| 2023-06-25T15:52:42Z | 4 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-06-25T15:52:41Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# bertopic-test_1010
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("ahessamb/bertopic-test_1010")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 10
* Number of training documents: 1570
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | ethereum - listings - market - eth - binance | 173 | 0_ethereum_listings_market_eth |
| 1 | xrp - ripple - crypto - mekras - sbi | 93 | 1_xrp_ripple_crypto_mekras |
| 2 | peaq - blockchain - nft - opensea - ordibots | 226 | 2_peaq_blockchain_nft_opensea |
| 3 | crypto - regulatory - securities - coinbase - lawsuit | 204 | 3_crypto_regulatory_securities_coinbase |
| 4 | binance - exchange - securities - sec - letter | 116 | 4_binance_exchange_securities_sec |
| 5 | mutant - mayc - bayc - club - mcmullen | 95 | 5_mutant_mayc_bayc_club |
| 6 | tether - yuan - games - bitcoin - cbdcs | 211 | 6_tether_yuan_games_bitcoin |
| 7 | crypto - bills - exponential - markets - liquidity | 140 | 7_crypto_bills_exponential_markets |
| 8 | ada - cardano - litecoin - resistance - market | 214 | 8_ada_cardano_litecoin_resistance |
| 9 | shib - doge - shiba - sentiment - market | 98 | 9_shib_doge_shiba_sentiment |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.30.2
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.12
|
PhongLe1311/my_awesome_billsum_model
|
PhongLe1311
| 2023-06-25T15:30:09Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T15:20:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1408
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5181
- Rouge1: 0.1408
- Rouge2: 0.0514
- Rougel: 0.1173
- Rougelsum: 0.1173
- Gen Len: 19.0
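A minimal usage sketch (the `summarize:` prefix follows the usual T5 summarization convention, and the input text is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "PhongLe1311/my_awesome_billsum_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "summarize: " + "The bill establishes a grant program for state and local governments to ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```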
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8150 | 0.1264 | 0.0373 | 0.1061 | 0.1061 | 19.0 |
| No log | 2.0 | 124 | 2.5989 | 0.1379 | 0.0501 | 0.1164 | 0.1165 | 19.0 |
| No log | 3.0 | 186 | 2.5349 | 0.1396 | 0.0525 | 0.1179 | 0.1181 | 19.0 |
| No log | 4.0 | 248 | 2.5181 | 0.1408 | 0.0514 | 0.1173 | 0.1173 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahessamb/bertopic-test_3030
|
ahessamb
| 2023-06-25T15:29:58Z | 3 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-06-25T15:29:57Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# bertopic-test_3030
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("ahessamb/bertopic-test_3030")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 30
* Number of training documents: 1570
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | apecoin - neckline - shoulders - fluctuating - chart | 2 | 0_apecoin_neckline_shoulders_fluctuating |
| 1 | astra - letter - investors - coindesk - bankruptcy | 84 | 1_astra_letter_investors_coindesk |
| 2 | 26 - bulls - rsi - ceiling - low | 30 | 2_26_bulls_rsi_ceiling |
| 3 | mutant - mayc - bayc - club - nfts | 112 | 3_mutant_mayc_bayc_club |
| 4 | shib - doge - shiba - sentiment - dogecoin | 115 | 4_shib_doge_shiba_sentiment |
| 5 | xrp - btc - lawsuit - sleuth - bullish | 47 | 5_xrp_btc_lawsuit_sleuth |
| 6 | binance - securities - crypto - coinbase - regulatory | 147 | 6_binance_securities_crypto_coinbase |
| 7 | ordibots - ordinals - collection - gbrc721 - text | 33 | 7_ordibots_ordinals_collection_gbrc721 |
| 8 | kitao - sbi - xrp - ripple - holdings | 95 | 8_kitao_sbi_xrp_ripple |
| 9 | listings - exponential - coin - ethereum - defi | 163 | 9_listings_exponential_coin_ethereum |
| 10 | yuan - event - games - rewards - olympics | 68 | 10_yuan_event_games_rewards |
| 11 | emptydoc - richmond - fashion - shiba - community | 15 | 11_emptydoc_richmond_fashion_shiba |
| 12 | sygnum - crypto - piggy - btr - huobi | 59 | 12_sygnum_crypto_piggy_btr |
| 13 | dln - debridge - chains - liquidity - slippage | 3 | 13_dln_debridge_chains_liquidity |
| 14 | longitude - chronometer - bitcoin - ships - rogers | 5 | 14_longitude_chronometer_bitcoin_ships |
| 15 | arbitrum - airdrop - recipients - scalability - ethereum | 14 | 15_arbitrum_airdrop_recipients_scalability |
| 16 | ethereum - fidelity - blackrock - cryptocurrency - fee | 111 | 16_ethereum_fidelity_blackrock_cryptocurrency |
| 17 | swyftx - shibarium - token - shiba - shibaswap | 17 | 17_swyftx_shibarium_token_shiba |
| 18 | zachxbt - squid - huang - donation - accused | 21 | 18_zachxbt_squid_huang_donation |
| 19 | reading - trend - leaning - ltc - breakdown | 2 | 19_reading_trend_leaning_ltc |
| 20 | tether - reserve - gusd - cbdcs - bills | 45 | 20_tether_reserve_gusd_cbdcs |
| 21 | lace - brave - mobile - wallet - iog | 2 | 21_lace_brave_mobile_wallet |
| 22 | binance - day - coinbase - exchange - bitcoin | 82 | 22_binance_day_coinbase_exchange |
| 23 | v3 - bnb - repurchase - peng - pancakeswap | 2 | 23_v3_bnb_repurchase_peng |
| 24 | xrp - banks - ripple - institutions - p2p | 6 | 24_xrp_banks_ripple_institutions |
| 25 | ada - level - litecoin - cardano - resistance | 186 | 25_ada_level_litecoin_cardano |
| 26 | xrp - hoskinson - cardano - securities - analisa | 26 | 26_xrp_hoskinson_cardano_securities |
| 27 | peaq - lunc - fetch - cosmos - terra | 73 | 27_peaq_lunc_fetch_cosmos |
| 28 | kostin - russia - sanctions - currency - yuan | 2 | 28_kostin_russia_sanctions_currency |
| 29 | upgrade - terra - lunc - chrome - jumps | 3 | 29_upgrade_terra_lunc_chrome |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.30.2
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.12
|
ahessamb/bertopic-test
|
ahessamb
| 2023-06-25T15:29:15Z | 3 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-06-25T15:29:09Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# bertopic-test
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("ahessamb/bertopic-test")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 50
* Number of training documents: 1570
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | liquidations - forcefully - betting - liquidation - contracts | 8 | 0_liquidations_forcefully_betting_liquidation |
| 1 | litecoin - wsm - presale - 77 - near | 94 | 1_litecoin_wsm_presale_77 |
| 2 | sec - court - terraform - dismiss - lawyers | 49 | 2_sec_court_terraform_dismiss |
| 3 | huobi - hkvac - bsl - web3 - code | 12 | 3_huobi_hkvac_bsl_web3 |
| 4 | lucie - shiba - susbarium - puppynet - portals | 3 | 4_lucie_shiba_susbarium_puppynet |
| 5 | 000006819 - shiba - accuracy - finbold - estimates | 27 | 5_000006819_shiba_accuracy_finbold |
| 6 | tokens - sec - binance - securities - coinbase | 45 | 6_tokens_sec_binance_securities |
| 7 | mckinsey - ai - nanjing - productivity - diffusion | 43 | 7_mckinsey_ai_nanjing_productivity |
| 8 | resistance - swing - fib - zone - ltc | 32 | 8_resistance_swing_fib_zone |
| 9 | brinkman - tategpt - bitcoin - artists - wealth | 26 | 9_brinkman_tategpt_bitcoin_artists |
| 10 | stablecoin - stablecoins - decline - redemptions - tusd | 2 | 10_stablecoin_stablecoins_decline_redemptions |
| 11 | mutant - mayc - bayc - club - mcmullen | 64 | 11_mutant_mayc_bayc_club |
| 12 | xrp - ema - ripple - bullish - cryptocurrencies | 43 | 12_xrp_ema_ripple_bullish |
| 13 | tether - cbdcs - loans - federal - nafcu | 27 | 13_tether_cbdcs_loans_federal |
| 14 | rate - tradingview - bnb - breakout - coinmarketcap | 85 | 14_rate_tradingview_bnb_breakout |
| 15 | 26 - bulls - rsi - ceiling - 300 | 2 | 15_26_bulls_rsi_ceiling |
| 16 | lowest - jump - week - wallet - staggering | 3 | 16_lowest_jump_week_wallet |
| 17 | xrp - ripple - mekras - sbi - institutions | 56 | 17_xrp_ripple_mekras_sbi |
| 18 | debt - mortgages - trillion - government - suspends | 3 | 18_debt_mortgages_trillion_government |
| 19 | longitude - chronometer - bitcoin - ships - graffiti | 2 | 19_longitude_chronometer_bitcoin_ships |
| 20 | volumes - piggy - aud - xrp - usdt | 15 | 20_volumes_piggy_aud_xrp |
| 21 | root - ledger - stakers - sidechains - compatibility | 4 | 21_root_ledger_stakers_sidechains |
| 22 | astra - letter - concerns - investors - bitwise | 4 | 22_astra_letter_concerns_investors |
| 23 | gold - governments - manipulated - stocks - mined | 10 | 23_gold_governments_manipulated_stocks |
| 24 | tether - sygnum - documents - bank - coindesk | 9 | 24_tether_sygnum_documents_bank |
| 25 | rewards - governance - lido - proposal - june | 45 | 25_rewards_governance_lido_proposal |
| 26 | listings - coin - fairerc20 - bittrex - withdrawals | 68 | 26_listings_coin_fairerc20_bittrex |
| 27 | peaq - ordibots - cosmos - fetch - machine | 81 | 27_peaq_ordibots_cosmos_fetch |
| 28 | uniswap - v4 - orders - hooks - differing | 23 | 28_uniswap_v4_orders_hooks |
| 29 | price - neo - matic - rise - altcoin | 92 | 29_price_neo_matic_rise |
| 30 | emptydoc - staff - policy - binance - workspaces | 2 | 30_emptydoc_staff_policy_binance |
| 31 | lunc - synthetix - terra - perps - staking | 33 | 31_lunc_synthetix_terra_perps |
| 32 | tweet - dogecoin - chart - meme - negative | 3 | 32_tweet_dogecoin_chart_meme |
| 33 | binance - securities - exchange - cz - regulators | 63 | 33_binance_securities_exchange_cz |
| 34 | bitmart - sale - xrp - discount - event | 4 | 34_bitmart_sale_xrp_discount |
| 35 | yuan - event - olympics - canadians - organizers | 49 | 35_yuan_event_olympics_canadians |
| 36 | gusd - fidelity - bitcoin - proposal - blackrock | 52 | 36_gusd_fidelity_bitcoin_proposal |
| 37 | bills - mcglone - markets - stablecoins - liquidity | 56 | 37_bills_mcglone_markets_stablecoins |
| 38 | asset - gain - drop - trading - hours | 2 | 38_asset_gain_drop_trading |
| 39 | epstein - hamsterwheel - vulnerability - bounty - certick | 28 | 39_epstein_hamsterwheel_vulnerability_bounty |
| 40 | pyth - transparency - data - terra - oracle | 19 | 40_pyth_transparency_data_terra |
| 41 | shiba - inu - weighted - collapse - recovery | 2 | 41_shiba_inu_weighted_collapse |
| 42 | neo - opensea - carey - security - impersonators | 24 | 42_neo_opensea_carey_security |
| 43 | balancer - zkevm - liquidity - defi - 8020 | 3 | 43_balancer_zkevm_liquidity_defi |
| 44 | reed - battle - platform - argument - trading | 22 | 44_reed_battle_platform_argument |
| 45 | ada - cardano - whale - sell - investors | 4 | 45_ada_cardano_whale_sell |
| 46 | uk - coinbase - hong - crypto - regulatory | 65 | 46_uk_coinbase_hong_crypto |
| 47 | ethereum - tvl - defi - arbitrum - airdrop | 54 | 47_ethereum_tvl_defi_arbitrum |
| 48 | swyftx - shibarium - token - shibaswap - shiba | 54 | 48_swyftx_shibarium_token_shibaswap |
| 49 | bitcoin - mining - gain - miners - difficulty | 54 | 49_bitcoin_mining_gain_miners |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.30.2
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.12
|
SwampMan/ppo-Huggy
|
SwampMan
| 2023-06-25T15:20:32Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-25T15:20:22Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: SwampMan/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
PaulineJamin/q-Taxi-v3
|
PaulineJamin
| 2023-06-25T15:15:35Z | 0 | 0 | null |
[
"Taxi-v3-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T14:59:34Z |
---
tags:
- Taxi-v3-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3-no_slippery
type: Taxi-v3-no_slippery
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the course notebook (hf_hub_download + pickle.load)
model = load_from_hub(repo_id="PaulineJamin/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dar-tau/dqn-SpaceInvadersNoFrameskip-v4
|
dar-tau
| 2023-06-25T15:07:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T15:06:52Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 808.50 +/- 269.59
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dar-tau -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dar-tau -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dar-tau
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Smaraa/gpt2-text-simplification_1e4_adafactor_biendata
|
Smaraa
| 2023-06-25T14:56:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T12:42:47Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-text-simplification_1e4_adafactor_biendata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-text-simplification_1e4_adafactor_biendata
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9089
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 464 | 0.7729 |
| 1.0489 | 2.0 | 928 | 0.7546 |
| 0.754 | 3.0 | 1392 | 0.7497 |
| 0.7034 | 4.0 | 1856 | 0.7530 |
| 0.6619 | 5.0 | 2320 | 0.7560 |
| 0.6265 | 6.0 | 2784 | 0.7639 |
| 0.5921 | 7.0 | 3248 | 0.7747 |
| 0.5621 | 8.0 | 3712 | 0.7848 |
| 0.5359 | 9.0 | 4176 | 0.7969 |
| 0.5115 | 10.0 | 4640 | 0.8113 |
| 0.4879 | 11.0 | 5104 | 0.8256 |
| 0.4683 | 12.0 | 5568 | 0.8373 |
| 0.4491 | 13.0 | 6032 | 0.8519 |
| 0.4491 | 14.0 | 6496 | 0.8642 |
| 0.4324 | 15.0 | 6960 | 0.8741 |
| 0.4176 | 16.0 | 7424 | 0.8841 |
| 0.4054 | 17.0 | 7888 | 0.8924 |
| 0.3946 | 18.0 | 8352 | 0.8994 |
| 0.3868 | 19.0 | 8816 | 0.9043 |
| 0.3813 | 20.0 | 9280 | 0.9089 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
LoneWolfVPS/ArteYou
|
LoneWolfVPS
| 2023-06-25T14:31:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T14:27:06Z |
---
license: creativeml-openrail-m
---
|
HasinMDG/XSent-Deberta-ent-v0
|
HasinMDG
| 2023-06-25T14:08:32Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"deberta-v2",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-25T14:08:14Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# HasinMDG/XSent-Deberta-irrelevant-corrected
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/XSent-Deberta-irrelevant-corrected")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
yuval6967/distilhubert-finetuned-gtzan
|
yuval6967
| 2023-06-25T14:07:42Z | 28 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-24T09:57:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5647
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2278 | 1.0 | 57 | 2.1709 | 0.44 |
| 1.7173 | 2.0 | 114 | 1.6084 | 0.57 |
| 1.1979 | 3.0 | 171 | 1.1897 | 0.67 |
| 1.1177 | 4.0 | 228 | 1.0003 | 0.72 |
| 0.8526 | 5.0 | 285 | 0.8854 | 0.73 |
| 0.6463 | 6.0 | 342 | 0.7791 | 0.79 |
| 0.5461 | 7.0 | 399 | 0.7468 | 0.78 |
| 0.3953 | 8.0 | 456 | 0.7352 | 0.75 |
| 0.3054 | 9.0 | 513 | 0.6757 | 0.79 |
| 0.18 | 10.0 | 570 | 0.5711 | 0.76 |
| 0.1526 | 11.0 | 627 | 0.6026 | 0.85 |
| 0.0812 | 12.0 | 684 | 0.5876 | 0.82 |
| 0.0578 | 13.0 | 741 | 0.5815 | 0.85 |
| 0.0318 | 14.0 | 798 | 0.5828 | 0.85 |
| 0.0283 | 15.0 | 855 | 0.5960 | 0.85 |
| 0.0393 | 16.0 | 912 | 0.5674 | 0.85 |
| 0.018 | 17.0 | 969 | 0.5647 | 0.87 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Smaraa/bart-text-simplification_1e4_adafactor_biendata
|
Smaraa
| 2023-06-25T14:04:43Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T12:33:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-text-simplification_1e4_adafactor_biendata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-text-simplification_1e4_adafactor_biendata
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7599
- Rouge1: 29.7176
- Rouge2: 10.9512
- Rougel: 25.5101
- Rougelsum: 25.526
- Gen Len: 15.2029
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 232 | 0.5813 | 30.604 | 12.4253 | 26.5172 | 26.4807 | 15.2241 |
| No log | 2.0 | 464 | 0.5739 | 31.9076 | 12.798 | 27.4728 | 27.4929 | 15.2241 |
| 1.0176 | 3.0 | 696 | 0.5700 | 31.3776 | 12.2852 | 27.1116 | 27.0878 | 15.6459 |
| 1.0176 | 4.0 | 928 | 0.5762 | 30.8731 | 12.3014 | 26.9196 | 26.8301 | 14.6353 |
| 0.4798 | 5.0 | 1160 | 0.5863 | 29.927 | 11.7166 | 25.9447 | 25.921 | 14.4297 |
| 0.4798 | 6.0 | 1392 | 0.6003 | 29.9528 | 11.2098 | 25.6908 | 25.7209 | 14.7414 |
| 0.3855 | 7.0 | 1624 | 0.6179 | 30.1161 | 11.2863 | 26.1433 | 26.1519 | 15.1698 |
| 0.3855 | 8.0 | 1856 | 0.6290 | 29.5566 | 11.1229 | 25.6003 | 25.5754 | 14.87 |
| 0.3092 | 9.0 | 2088 | 0.6538 | 29.7844 | 11.2434 | 25.8222 | 25.8067 | 14.9708 |
| 0.3092 | 10.0 | 2320 | 0.6698 | 28.9941 | 10.6603 | 25.0054 | 25.0198 | 15.0239 |
| 0.247 | 11.0 | 2552 | 0.6906 | 28.732 | 10.4525 | 24.8897 | 24.8953 | 14.9721 |
| 0.247 | 12.0 | 2784 | 0.7023 | 29.0609 | 10.4762 | 24.9678 | 24.9893 | 15.317 |
| 0.198 | 13.0 | 3016 | 0.7200 | 29.9516 | 11.2397 | 25.7347 | 25.7489 | 15.1485 |
| 0.198 | 14.0 | 3248 | 0.7263 | 29.1565 | 10.7363 | 25.2238 | 25.203 | 14.9761 |
| 0.198 | 15.0 | 3480 | 0.7376 | 30.0068 | 11.2078 | 26.0012 | 26.0235 | 14.9589 |
| 0.1602 | 16.0 | 3712 | 0.7489 | 29.8747 | 11.0555 | 25.7321 | 25.7543 | 15.2931 |
| 0.1602 | 17.0 | 3944 | 0.7487 | 29.6901 | 10.8692 | 25.5467 | 25.5808 | 15.2798 |
| 0.1342 | 18.0 | 4176 | 0.7553 | 29.5496 | 10.8611 | 25.2895 | 25.3218 | 15.3156 |
| 0.1342 | 19.0 | 4408 | 0.7590 | 29.7733 | 11.1577 | 25.671 | 25.6883 | 15.1313 |
| 0.1184 | 20.0 | 4640 | 0.7599 | 29.7176 | 10.9512 | 25.5101 | 25.526 | 15.2029 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mouaadblhn/ppo-huggy
|
mouaadblhn
| 2023-06-25T14:03:22Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-25T14:03:16Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: mouaadblhn/ppo-huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
rdenadai/BR_BERTo
|
rdenadai
| 2023-06-25T14:02:18Z | 180 | 3 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"fill-mask",
"portuguese",
"brazil",
"pt_BR",
"pt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: pt
tags:
- portuguese
- brazil
- pt_BR
widget:
- text: gostei muito dessa <mask>
---
# BR_BERTo
Portuguese (Brazil) model for text inference.
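A minimal fill-mask sketch based on the widget example in this card:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="rdenadai/BR_BERTo")
for pred in fill_mask("gostei muito dessa <mask>"):
    print(pred["token_str"], round(pred["score"], 3))
```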
## Params
Trained on a corpus of 6_993_330 sentences.
- Vocab size: 150_000
- RobertaForMaskedLM size: 512
- Num train epochs: 3
- Time to train: ~10 days (on GCP with an Nvidia T4)
I followed the great tutorial from the HuggingFace team:
[How to train a new language model from scratch using Transformers and Tokenizers](https://huggingface.co/blog/how-to-train)
More info here:
[BR_BERTo](https://github.com/rdenadai/BR-BERTo)
|
flobbit/flutterby
|
flobbit
| 2023-06-25T13:45:00Z | 5 | 0 |
fastai
|
[
"fastai",
"en",
"image classification",
"image-classification",
"doi:10.57967/hf/1004",
"license:apache-2.0",
"model-index",
"region:us"
] |
image-classification
| 2023-06-25T13:01:00Z |
---
license: apache-2.0
tags:
- en
- image classification
- fastai
model-index:
- name: flutterby by flobbit
results:
- task:
name: image classification
type: image-classification
metrics:
- name: accuracy
type: acc
num_train_epochs: 10
learning_rate: 0.00363
value: 77.3
metrics:
- accuracy
pipeline_tag: image-classification
---
# FlutterBy ST Swallowtail Butterfly Insect Classification
## Model description
The model is used to classify images into one of the 51 North American swallowtail or cattleheart butterfly species. `resnet50` was used for training.
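A minimal usage sketch, assuming the repository stores a standard fastai export compatible with `from_pretrained_fastai` (the image path is an assumption):
```python
from huggingface_hub import from_pretrained_fastai
from fastai.vision.core import PILImage

# Load the exported fastai learner from the Hub and classify one photo.
learn = from_pretrained_fastai("flobbit/flutterby")
pred_class, pred_idx, probs = learn.predict(PILImage.create("butterfly.jpg"))
print(pred_class, float(probs[pred_idx]))
```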
## Intended uses & limitations
The model was trained on 8577 insect images spread over 51 species. The model is likely biased toward some species being more commonly found in certain habitats.
## Training and evaluation data
The images used in training were obtained from GBIF:
GBIF.org (22 June 2023) GBIF Occurrence Download https://doi.org/10.15468/dl.bqg8bw
Only the first 400 images of each species (if available) were downloaded. The image set was partially cleaned for quality to remove caterpillars, poor images or butterflies that were too far away for proper ID. After "cleaning", 200 additional images were downloaded for Battus philenor and Battus polydamas (as those species had a very high percentage of caterpillar shots).
The dataset is primarily "in the wild" shots rather than all staged poses, and includes images for which even an expert would not be able to see identifying characteristics (hence the lower overall accuracy).
The image set had 33 species with over 200 images (after cleaning) and a minimum of 30 pics in a class for the less common species (not enough for accurate training but included for completeness).
|
ahishamm/vit-huge-HAM-10000-sharpened-patch-14
|
ahishamm
| 2023-06-25T13:34:12Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T12:41:46Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-huge-HAM-10000-sharpened-patch-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-huge-HAM-10000-sharpened-patch-14
This model is a fine-tuned version of [google/vit-huge-patch14-224-in21k](https://huggingface.co/google/vit-huge-patch14-224-in21k) on the ahishamm/HAM_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4411
- Accuracy: 0.8554
- Recall: 0.8554
- F1: 0.8554
- Precision: 0.8554
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.6177 | 0.2 | 100 | 0.7082 | 0.7591 | 0.7591 | 0.7591 | 0.7591 |
| 0.6848 | 0.4 | 200 | 0.6570 | 0.7631 | 0.7631 | 0.7631 | 0.7631 |
| 0.622 | 0.6 | 300 | 0.5880 | 0.7920 | 0.7920 | 0.7920 | 0.7920 |
| 0.5887 | 0.8 | 400 | 0.5599 | 0.7965 | 0.7965 | 0.7965 | 0.7965 |
| 0.4812 | 1.0 | 500 | 0.5364 | 0.8010 | 0.8010 | 0.8010 | 0.8010 |
| 0.4013 | 1.2 | 600 | 0.4874 | 0.8249 | 0.8249 | 0.8249 | 0.8249 |
| 0.3987 | 1.4 | 700 | 0.4533 | 0.8354 | 0.8354 | 0.8354 | 0.8354 |
| 0.4118 | 1.6 | 800 | 0.4540 | 0.8424 | 0.8424 | 0.8424 | 0.8424 |
| 0.3272 | 1.8 | 900 | 0.4536 | 0.8254 | 0.8254 | 0.8254 | 0.8254 |
| 0.3318 | 2.0 | 1000 | 0.4411 | 0.8554 | 0.8554 | 0.8554 | 0.8554 |
| 0.0859 | 2.2 | 1100 | 0.4641 | 0.8519 | 0.8519 | 0.8519 | 0.8519 |
| 0.1026 | 2.4 | 1200 | 0.4692 | 0.8554 | 0.8554 | 0.8554 | 0.8554 |
| 0.0934 | 2.59 | 1300 | 0.4555 | 0.8474 | 0.8474 | 0.8474 | 0.8474 |
| 0.1084 | 2.79 | 1400 | 0.5017 | 0.8454 | 0.8454 | 0.8454 | 0.8454 |
| 0.0603 | 2.99 | 1500 | 0.4803 | 0.8599 | 0.8599 | 0.8599 | 0.8599 |
| 0.013 | 3.19 | 1600 | 0.4905 | 0.8633 | 0.8633 | 0.8633 | 0.8633 |
| 0.0585 | 3.39 | 1700 | 0.5305 | 0.8678 | 0.8678 | 0.8678 | 0.8678 |
| 0.0322 | 3.59 | 1800 | 0.5342 | 0.8648 | 0.8648 | 0.8648 | 0.8648 |
| 0.0086 | 3.79 | 1900 | 0.5134 | 0.8668 | 0.8668 | 0.8668 | 0.8668 |
| 0.0275 | 3.99 | 2000 | 0.5136 | 0.8693 | 0.8693 | 0.8693 | 0.8693 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
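### Example usage
A usage sketch (not part of the auto-generated card): the checkpoint can be loaded with the standard `image-classification` pipeline; the image path is illustrative.
```python
from transformers import pipeline

# Skin-lesion classification with the fine-tuned ViT checkpoint
classifier = pipeline(
    "image-classification",
    model="ahishamm/vit-huge-HAM-10000-sharpened-patch-14",
)

# Path to a dermatoscopic image (illustrative)
print(classifier("lesion.jpg"))
```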
|
findnitai/FaceGen
|
findnitai
| 2023-06-25T13:25:03Z | 138 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-24T03:47:05Z |
---
license: apache-2.0
pipeline_tag: text-to-image
---
A few examples of unique faces generated by the model, which was trained on the FFHQ dataset.

|
lucasbertola/q-FrozenLake-v1-8x8-noSlipper
|
lucasbertola
| 2023-06-25T13:23:29Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"Lucas_is_the_best",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T13:18:21Z |
---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
- Lucas_is_the_best
model-index:
- name: q-FrozenLake-v1-8x8-noSlipper
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** (8x8, no_slippery).
## Usage
```python
model = load_from_hub(repo_id="lucasbertola/q-FrozenLake-v1-8x8-noSlipper", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
S3S3/q-FrozenLake-v1-8x8-noSlippery
|
S3S3
| 2023-06-25T13:04:49Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T13:04:45Z |
---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="S3S3/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bogdancazan/bart-base-newsela-biendata-with-domain-adaptation
|
bogdancazan
| 2023-06-25T12:57:32Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T14:35:21Z |
```python
training_args = TrainingArguments(
    output_dir='bart-base-newsela-biendata-with-domain-adaptation',
    num_train_epochs=20,
    warmup_steps=250,
    per_device_train_batch_size=BATCH_SIZE,
    weight_decay=0.01,
    learning_rate=2e-4,
    fp16=True,
    optim="adafactor",
)
```
| Step | Training Loss |
|-----:|--------------:|
| 500 | 5.677000 |
| 1000 | 2.361900 |
| 1500 | 1.826000 |
| 2000 | 1.672900 |
| 2500 | 1.597900 |
| 3000 | 1.555700 |
| 3500 | 1.520600 |
| 4000 | 1.496300 |
| 4500 | 1.476800 |
```
TrainOutput(global_step=4640, training_loss=2.1116079396214977, metrics={'train_runtime': 1059.6025, 'train_samples_per_second': 279.992, 'train_steps_per_second': 4.379, 'total_flos': 0.0, 'train_loss': 2.1116079396214977, 'epoch': 20.0})
```
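For completeness, here is a minimal inference sketch (not part of the original notes). The checkpoint is a standard BART `text2text-generation` model that presumably targets Newsela-style text simplification; the input sentence is illustrative.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "bogdancazan/bart-base-newsela-biendata-with-domain-adaptation"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

text = "The committee deliberated extensively before reaching a unanimous verdict."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```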
|
S3S3/q-FrozenLake-v1-4x4-noSlippery
|
S3S3
| 2023-06-25T12:53:11Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T12:53:07Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="S3S3/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AtomGradient/Adjust_ChatGLM_6B
|
AtomGradient
| 2023-06-25T12:45:31Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"chatglm",
"feature-extraction",
"custom_code",
"license:other",
"region:us"
] |
feature-extraction
| 2023-06-25T12:04:00Z |
---
license: other
---
```
from transformers import AutoConfig, AutoModel, AutoTokenizer
import os
import torch
# Load the tokenizer, the config (with the P-Tuning prefix length), and the base ChatGLM-6B model
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
config = AutoConfig.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, pre_seq_len=128)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", config=config, trust_remote_code=True)
# Load the P-Tuning v2 prefix-encoder weights from this repository
prefix_state_dict = torch.load(os.path.join("./Adjust_ChatGLM_6B/", "pytorch_model.bin"))
new_prefix_state_dict = {}
for k, v in prefix_state_dict.items():
    if k.startswith("transformer.prefix_encoder."):
        new_prefix_state_dict[k[len("transformer.prefix_encoder."):]] = v
model.transformer.prefix_encoder.load_state_dict(new_prefix_state_dict)

# Quantize to 4-bit, move to GPU, and keep the prefix encoder in fp32
model = model.quantize(4)
model = model.half().cuda()
model.transformer.prefix_encoder.float()
model = model.eval()

# "生成衬衣的广告词" = "Write an advertising slogan for a shirt"
response, history = model.chat(tokenizer, "生成衬衣的广告词", history=[])
print(response)
```
|
TheBloke/vicuna-13b-v1.3.0-GGML
|
TheBloke
| 2023-06-25T12:41:16Z | 0 | 16 | null |
[
"arxiv:2302.13971",
"arxiv:2306.05685",
"license:other",
"region:us"
] | null | 2023-06-25T10:52:15Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# LmSys' Vicuna 13B v1.3 GGML
These files are GGML format model files for [LmSys' Vicuna 13B v1.3](https://huggingface.co/lmsys/vicuna-13b-v1.3).
**NOTE**: This model was recently updated by the LmSys Team. If you already downloaded Vicuna 13B v1.3 GPTQ or GGML, you may want to re-download it from this repo, as the weights were updated. The original model I uploaded has been renamed to v1.3-preview.
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/vicuna-13b-v1.3.0-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/vicuna-13b-v1.3.0-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-13b-v1.3)
## Prompt template
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: prompt
ASSISTANT:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| vicuna-13b-v1.3.0.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| vicuna-13b-v1.3.0.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| vicuna-13b-v1.3.0.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| vicuna-13b-v1.3.0.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| vicuna-13b-v1.3.0.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. |
| vicuna-13b-v1.3.0.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| vicuna-13b-v1.3.0.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| vicuna-13b-v1.3.0.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| vicuna-13b-v1.3.0.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| vicuna-13b-v1.3.0.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| vicuna-13b-v1.3.0.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| vicuna-13b-v1.3.0.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| vicuna-13b-v1.3.0.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| vicuna-13b-v1.3.0.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m vicuna-13b-v1.3.0.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: Write a story about llamas\nASSISTANT:"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
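## How to run from Python code
You can also drive the same GGML files programmatically via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The snippet below is a minimal sketch (not from the original card); the chosen file, context size, and sampling settings are illustrative.
```python
from llama_cpp import Llama

# Load a quantised GGML file; n_gpu_layers only has effect if llama-cpp-python was built with GPU support
llm = Llama(model_path="vicuna-13b-v1.3.0.ggmlv3.q5_0.bin", n_ctx=2048, n_gpu_layers=32)

# Apply the prompt template shown above
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    "USER: Write a story about llamas\nASSISTANT:"
)

output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```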
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: LmSys' Vicuna 13B v1.3
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
## Training Details
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 140K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
|
ahishamm/vit-base-HAM-10000-sharpened-large-patch-32
|
ahishamm
| 2023-06-25T12:32:21Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-25T11:51:12Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-HAM-10000-sharpened-large-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-HAM-10000-sharpened-large-patch-32
This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the ahishamm/HAM_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4582
- Accuracy: 0.8404
- Recall: 0.8404
- F1: 0.8404
- Precision: 0.8404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.6739 | 0.2 | 100 | 0.7775 | 0.7257 | 0.7257 | 0.7257 | 0.7257 |
| 0.6922 | 0.4 | 200 | 0.6455 | 0.7711 | 0.7711 | 0.7711 | 0.7711 |
| 0.8219 | 0.6 | 300 | 0.7582 | 0.7426 | 0.7426 | 0.7426 | 0.7426 |
| 0.6801 | 0.8 | 400 | 0.6363 | 0.7651 | 0.7651 | 0.7651 | 0.7651 |
| 0.5499 | 1.0 | 500 | 0.6231 | 0.7751 | 0.7751 | 0.7751 | 0.7751 |
| 0.5156 | 1.2 | 600 | 0.6399 | 0.7761 | 0.7761 | 0.7761 | 0.7761 |
| 0.4478 | 1.4 | 700 | 0.5324 | 0.8020 | 0.8020 | 0.8020 | 0.8020 |
| 0.4364 | 1.6 | 800 | 0.5597 | 0.7970 | 0.7970 | 0.7970 | 0.7970 |
| 0.4545 | 1.8 | 900 | 0.5212 | 0.8115 | 0.8115 | 0.8115 | 0.8115 |
| 0.4294 | 2.0 | 1000 | 0.4926 | 0.8264 | 0.8264 | 0.8264 | 0.8264 |
| 0.135 | 2.2 | 1100 | 0.5448 | 0.8204 | 0.8204 | 0.8204 | 0.8204 |
| 0.2628 | 2.4 | 1200 | 0.4916 | 0.8304 | 0.8304 | 0.8304 | 0.8304 |
| 0.2577 | 2.59 | 1300 | 0.4582 | 0.8404 | 0.8404 | 0.8404 | 0.8404 |
| 0.2093 | 2.79 | 1400 | 0.5079 | 0.8344 | 0.8344 | 0.8344 | 0.8344 |
| 0.1415 | 2.99 | 1500 | 0.4760 | 0.8439 | 0.8439 | 0.8439 | 0.8439 |
| 0.0686 | 3.19 | 1600 | 0.5379 | 0.8444 | 0.8444 | 0.8444 | 0.8444 |
| 0.1031 | 3.39 | 1700 | 0.5572 | 0.8384 | 0.8384 | 0.8384 | 0.8384 |
| 0.102 | 3.59 | 1800 | 0.5343 | 0.8464 | 0.8464 | 0.8464 | 0.8464 |
| 0.0531 | 3.79 | 1900 | 0.5482 | 0.8479 | 0.8479 | 0.8479 | 0.8479 |
| 0.0756 | 3.99 | 2000 | 0.5454 | 0.8454 | 0.8454 | 0.8454 | 0.8454 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
PhongLe1311/mt5-small-finetuned-amazon-en-es
|
PhongLe1311
| 2023-06-25T12:31:02Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-21T05:32:02Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0340
- Rouge1: 17.3066
- Rouge2: 8.5372
- Rougel: 16.9577
- Rougelsum: 17.1267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 7.0197 | 1.0 | 1209 | 3.3037 | 13.7225 | 5.4609 | 13.1771 | 13.2052 |
| 3.9145 | 2.0 | 2418 | 3.1418 | 15.6039 | 7.5306 | 14.9366 | 14.865 |
| 3.5987 | 3.0 | 3627 | 3.0970 | 17.425 | 8.6602 | 16.9049 | 17.0042 |
| 3.4274 | 4.0 | 4836 | 3.0672 | 16.7739 | 8.0707 | 16.2041 | 16.2127 |
| 3.3241 | 5.0 | 6045 | 3.0648 | 16.6489 | 8.2121 | 16.3527 | 16.4147 |
| 3.2468 | 6.0 | 7254 | 3.0444 | 17.3052 | 8.6923 | 16.9398 | 17.0233 |
| 3.2116 | 7.0 | 8463 | 3.0370 | 17.563 | 8.7613 | 17.1755 | 17.3348 |
| 3.1821 | 8.0 | 9672 | 3.0340 | 17.3066 | 8.5372 | 16.9577 | 17.1267 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
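### Example usage
A usage sketch (not part of the auto-generated card): the checkpoint can be loaded with the `summarization` pipeline; the review text is illustrative.
```python
from transformers import pipeline

# Load the fine-tuned mT5 checkpoint as a summarization pipeline
summarizer = pipeline("summarization", model="PhongLe1311/mt5-small-finetuned-amazon-en-es")

review = ("I bought this for my daughter for her birthday because she loves reading. "
          "The stories are short but fun, and the illustrations are beautiful.")
print(summarizer(review, max_length=30)[0]["summary_text"])
```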
|
emilianJR/HRA_hyperrealism_art
|
emilianJR
| 2023-06-25T12:30:23Z | 52 | 2 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-25T12:20:01Z |
---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Diffusers version of this SD checkpoint:
https://civitai.com/models/80515/hrahyperrealism-art
**emilianJR/HRA_hyperrealism_art** is a Hugging Face diffusers checkpoint that you can load with **diffusers.StableDiffusionPipeline()**.
Examples | Examples | Examples
---- | ---- | ----
 |  | 
 |  | 
-------
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "emilianJR/HRA_hyperrealism_art"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "YOUR PROMPT"
image = pipe(prompt).images[0]
image.save("image.png")
```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
Luke537/image_classification_food_model
|
Luke537
| 2023-06-25T12:30:18Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-24T19:15:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: image_classification_food_model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.893
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6474
- Accuracy: 0.893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7587 | 0.99 | 62 | 2.5481 | 0.844 |
| 1.8903 | 2.0 | 125 | 1.8096 | 0.874 |
| 1.6502 | 2.98 | 186 | 1.6474 | 0.893 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cpu
- Datasets 2.13.0
- Tokenizers 0.13.3
|
bogdancazan/t5-base-newsela-biendata-with-domain-adaptation
|
bogdancazan
| 2023-06-25T12:24:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T13:46:06Z |
```python
training_args = TrainingArguments(
    output_dir='t5-base-wikilarge-newsela-with-domain-adaptation',
    num_train_epochs=20,
    warmup_steps=250,
    per_device_train_batch_size=BATCH_SIZE,
    weight_decay=0.01,
    learning_rate=2e-4,
    # fp16=True,
    optim="adafactor",
)
```
| Step | Training Loss |
|-----:|--------------:|
| 500 | 4.184500 |
| 1000 | 2.470900 |
| 1500 | 2.128900 |
| 2000 | 1.951600 |
| 2500 | 1.834400 |
| 3000 | 1.755800 |
| 3500 | 1.701800 |
| 4000 | 1.656300 |
| 4500 | 1.628800 |
```
TrainOutput(global_step=4640, training_loss=2.1286644540984057, metrics={'train_runtime': 4090.6694, 'train_samples_per_second': 72.526, 'train_steps_per_second': 1.134, 'total_flos': 0.0, 'train_loss': 2.1286644540984057, 'epoch': 20.0})
```
|
Tri1/opus-mt-en-ro-finetuned-eng-to-para
|
Tri1
| 2023-06-25T12:21:10Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T09:20:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-ro-finetuned-eng-to-para
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-eng-to-para
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0821
- Bleu: 22.2055
- Gen Len: 21.7942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.0865 | 1.0 | 6250 | 0.0821 | 22.2055 | 21.7942 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
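### Example usage
A usage sketch (not part of the auto-generated card): the checkpoint is a standard Marian seq2seq model, so it can be driven through the generic `text2text-generation` pipeline. The target side of the fine-tuning ("para") is not documented in the card, and the input sentence is illustrative.
```python
from transformers import pipeline

# Load the fine-tuned Marian checkpoint
generator = pipeline("text2text-generation", model="Tri1/opus-mt-en-ro-finetuned-eng-to-para")

print(generator("The weather is beautiful today.", max_length=64)[0]["generated_text"])
```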
|
joystick/Initokyo
|
joystick
| 2023-06-25T12:18:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T12:10:06Z |
---
license: creativeml-openrail-m
---
|
gb16001/sovits4.1_ATRI
|
gb16001
| 2023-06-25T12:03:50Z | 0 | 2 | null |
[
"dataset:Yusen/Sovits_ATRI",
"license:agpl-3.0",
"region:us"
] | null | 2023-06-25T10:08:35Z |
---
license: agpl-3.0
datasets:
- Yusen/Sovits_ATRI
---
### Abstract
`speech_encoder`: `vec768l12`.
More training parameters can be found in `ATRI_config.json`.
The SoVITS, diffusion, and k-means models are all included; take what you need.
### Performance
A vocal-only demo is included in the folder.
|
JasperV13/DarijaBERT-finetuned-Arabic-SQuAD
|
JasperV13
| 2023-06-25T12:03:26Z | 34 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:Mostafa3zazi/Arabic_SQuAD",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-02T16:51:19Z |
---
tags:
- generated_from_trainer
model-index:
- name: DarijaBERT-finetuned-Arabic-SQuAD
results: []
datasets:
- Mostafa3zazi/Arabic_SQuAD
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DarijaBERT-finetuned-Arabic-SQuAD
This model is a fine-tuned version of [SI2M-Lab/DarijaBERT](https://huggingface.co/SI2M-Lab/DarijaBERT) on the [Mostafa3zazi/Arabic_SQuAD](https://huggingface.co/datasets/ZeyadAhmed/Arabic-SQuADv2.0) dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7451
## Model description
This model is a fine-tuned version of DarijaBERT, specifically trained on the Arabic SQuAD dataset. It is designed to perform question answering in the Darija dialect, which is a colloquial Arabic variant spoken primarily in Morocco.
## Intended uses & limitations
- The fine-tuning was done on an Arabic dataset, so the results may not always be accurate, especially with rare Darija words.
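A usage sketch (not from the original card): the checkpoint can be used through the standard `question-answering` pipeline; the Arabic question/context pair below is illustrative.
```python
from transformers import pipeline

# Extractive question answering with the fine-tuned DarijaBERT checkpoint
qa = pipeline("question-answering", model="JasperV13/DarijaBERT-finetuned-Arabic-SQuAD")

result = qa(
    question="من كتب الرسالة؟",                        # "Who wrote the letter?"
    context="كتب أحمد الرسالة إلى صديقه في الرباط.",   # "Ahmed wrote the letter to his friend in Rabat."
)
print(result["answer"], result["score"])
```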
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
alfredplpl/unlimited-1-0
|
alfredplpl
| 2023-06-25T11:44:51Z | 34 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"arxiv:2112.10752",
"arxiv:2212.03860",
"license:other",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-25T11:21:59Z |
---
license: other
tags:
- stable-diffusion
- text-to-image
inference: false
---
# Unlimited 1.0 Model Card

Title: Unleash your limit.
English version is [here](README_en.md).
# Introduction
Unlimited is an image-generation AI specialized in AI art, developed to serve as a replacement for the leaked Novel AI Diffusion.
# About the license
The license is simply the original CreativeML Open RAIL++-M License with a prohibition on commercial use (with exceptions) added.
The reason for adding this prohibition is the concern that commercial use could harm the creative industry.
If you work at a for-profit company, please consult someone in your legal department.
If you use the model as a hobby, you can use it without much concern as long as you follow common sense.
**If you would like to use the model commercially, please contact us separately at ozaki.yasunori@outlook.com.**
# About the law
This model was created in Japan, so Japanese law applies.
We maintain that the training of this model is lawful under Article 30-4 of the Japanese Copyright Act.
We also maintain that distributing this model constitutes neither a principal offense nor aiding and abetting under the Copyright Act or Article 175 of the Penal Code. For details, please see the attorney's [opinion](https://twitter.com/tka0120/status/1601483633436393473?s=20&t=yvM9EX0Em-_7lh8NJln3IQ).
However, as stated in the license, please handle anything generated with this model in accordance with the applicable laws and regulations.
# How to use
The model can be downloaded in [safetensors format](unlimited_1_0.safetensors).
The rest of this card is the usual model-card information.
## Model details
- **Model type:** Diffusion-based text-to-image generation model
- **Language:** Japanese
- **License:** CreativeML Open RAIL++-M-NC License
- **Model description:** This model can generate suitable images in response to prompts. The algorithms are the [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) and [OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip).
- **Notes:**
- **References:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
## Example usage
It is used in the same way as Stable Diffusion v2.
There are many ways to run it, but we provide two patterns:
- Web UI
- Diffusers
### Web UI
As with Stable Diffusion v2, place the safetensors model file in your model folder.
For detailed installation instructions, please refer to [this article](https://note.com/it_navi/n/n6ffb66513769).
We recommend installing xformers and enabling the `--xformers --disable-nan-check` options; otherwise, enable the `--no-half` option.
### Diffusers
Use [🤗's Diffusers library](https://github.com/huggingface/diffusers).
First, run the following script to install the libraries:
```bash
pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy
```
Then run the following script to generate images:
```python
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
import torch
model_id = "alfredplpl/unlimited-1-0"
scheduler = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "masterpiece, anime, close up, white short hair, red eyes, 1girl, solo, red roses"
negative_prompt="lowres , kanji, monochrome, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)), ((censored)), ((bad aesthetic))"
images = pipe(prompt,negative_prompt=negative_prompt, num_inference_steps=30).images
images[0].save("girl.png")
```
**Notes**:
- Using [xformers](https://github.com/facebookresearch/xformers) speeds up generation.
- If you are short on GPU memory, use `pipe.enable_attention_slicing()`.
#### Intended uses
- Self-expression
  - Using this AI to express your own individuality
- News reporting on image-generation AI
  - Possible not only for public broadcasters but also for for-profit companies
  - This is because we judged that the "right to know" about image-synthesis AI does not harm the creative industry, and we respect freedom of the press.
- Research and development
  - Using the model on Discord
    - Prompt engineering
    - Fine-tuning (also known as additional training)
      - e.g., DreamBooth
    - Merging with other models
  - Measuring the performance of this model with metrics such as FID
  - Verifying that this model is independent of models other than Stable Diffusion, using checksums or hash functions
- Education
  - Graduation projects by art-college or vocational-school students
  - Graduation theses or coursework by university students
  - Teachers explaining the current state of image-generation AI
- Uses within the Hugging Face Community
  - Please ask questions in Japanese or English
#### Unintended uses
- Presenting things as if they were fact
- Use in monetized content such as YouTube videos
- Providing it directly as a commercial service
- Doing things that would trouble teachers
- Anything else that harms the creative industry
# Prohibited and malicious uses
- Do not publish digital forgeries ([Digital Forgery](https://arxiv.org/abs/2212.03860)); this may violate copyright law.
- Do not run Image-to-Image on other people's works without permission; this may violate copyright law.
- Do not distribute obscene material; this may violate Article 175 of the Penal Code.
- Do not ignore the generally accepted etiquette of the creative community.
- Do not present things that are not based on fact as if they were fact; the crime of obstruction of business may apply.
- No fake news.
## Limitations and bias
### Limitations
- Diffusion models and large language models still have many unknowns, and their limitations are not yet understood.
### Bias
- Diffusion models and large language models still have many unknowns, and their biases are not yet understood.
## Training
**Training data**
Data and models that comply with Japanese domestic law.
**Training process**
- **Hardware:** A6000x2
## Evaluation results
We are seeking third-party evaluation.
## Environmental impact
- **Hardware type:** A6000x2
- **Hours used:** 1000
- **Training location:** Japan
## References
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
*This model card was written based on the [Stable Diffusion v2 model card](https://huggingface.co/stabilityai/stable-diffusion-2/raw/main/README.md).
|
NasimB/gpt2-2-og-concat-modified-aochild
|
NasimB
| 2023-06-25T11:41:21Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T06:55:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-2-og-concat-modified-aochild
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-2-og-concat-modified-aochild
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 5.9891 | 0.24 | 500 | 5.0538 |
| 4.7513 | 0.48 | 1000 | 4.6760 |
| 4.4523 | 0.72 | 1500 | 4.4485 |
| 4.2602 | 0.96 | 2000 | 4.3053 |
| 4.0617 | 1.21 | 2500 | 4.2166 |
| 3.9742 | 1.45 | 3000 | 4.1365 |
| 3.9095 | 1.69 | 3500 | 4.0632 |
| 3.8462 | 1.93 | 4000 | 3.9949 |
| 3.6761 | 2.17 | 4500 | 3.9718 |
| 3.6346 | 2.41 | 5000 | 3.9336 |
| 3.613 | 2.65 | 5500 | 3.8883 |
| 3.5949 | 2.89 | 6000 | 3.8502 |
| 3.4561 | 3.13 | 6500 | 3.8626 |
| 3.387 | 3.38 | 7000 | 3.8393 |
| 3.3931 | 3.62 | 7500 | 3.8152 |
| 3.395 | 3.86 | 8000 | 3.7882 |
| 3.2751 | 4.1 | 8500 | 3.8162 |
| 3.1697 | 4.34 | 9000 | 3.8117 |
| 3.1949 | 4.58 | 9500 | 3.7952 |
| 3.1957 | 4.82 | 10000 | 3.7726 |
| 3.1301 | 5.06 | 10500 | 3.8013 |
| 2.9449 | 5.3 | 11000 | 3.8132 |
| 2.9803 | 5.54 | 11500 | 3.8048 |
| 2.9921 | 5.79 | 12000 | 3.7903 |
| 2.9654 | 6.03 | 12500 | 3.8054 |
| 2.7336 | 6.27 | 13000 | 3.8363 |
| 2.7653 | 6.51 | 13500 | 3.8379 |
| 2.7754 | 6.75 | 14000 | 3.8285 |
| 2.777 | 6.99 | 14500 | 3.8186 |
| 2.5506 | 7.23 | 15000 | 3.8731 |
| 2.5598 | 7.47 | 15500 | 3.8769 |
| 2.5731 | 7.71 | 16000 | 3.8768 |
| 2.5762 | 7.96 | 16500 | 3.8744 |
| 2.4267 | 8.2 | 17000 | 3.9055 |
| 2.4121 | 8.44 | 17500 | 3.9110 |
| 2.4249 | 8.68 | 18000 | 3.9133 |
| 2.4157 | 8.92 | 18500 | 3.9140 |
| 2.366 | 9.16 | 19000 | 3.9237 |
| 2.3398 | 9.4 | 19500 | 3.9252 |
| 2.3398 | 9.64 | 20000 | 3.9263 |
| 2.3365 | 9.88 | 20500 | 3.9262 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
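### Example usage
A usage sketch (not part of the auto-generated card): the checkpoint is a standard GPT-2 language model and can be sampled with the `text-generation` pipeline; the prompt is illustrative.
```python
from transformers import pipeline

# Text generation with the fine-tuned GPT-2 checkpoint
generator = pipeline("text-generation", model="NasimB/gpt2-2-og-concat-modified-aochild")

print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```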
|
edfryo/bangkelser
|
edfryo
| 2023-06-25T11:39:27Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-05-09T11:58:00Z |
---
license: bigscience-openrail-m
---
|
jondurbin/airoboros-13b-gpt4-1.4-fp16
|
jondurbin
| 2023-06-25T11:39:17Z | 1,423 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T10:46:42Z |
---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
float16 version of https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4
|
siddh4rth/fintuned-falcon-7b-truthful-qa
|
siddh4rth
| 2023-06-25T10:36:25Z | 4 | 0 |
peft
|
[
"peft",
"RefinedWebModel",
"custom_code",
"4-bit",
"region:us"
] | null | 2023-06-25T09:46:47Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
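### Example usage
A loading sketch (not part of the auto-generated card): the adapter can presumably be attached to a 4-bit quantized base model re-created with the config above. The base model id below is inferred from the repo name and may differ from the one recorded in `adapter_config.json`.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "tiiuae/falcon-7b"  # assumed base model

# Re-create the 4-bit quantization setup used during training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, trust_remote_code=True, device_map="auto"
)

# Attach the fine-tuned LoRA adapter from this repository
model = PeftModel.from_pretrained(base, "siddh4rth/fintuned-falcon-7b-truthful-qa")
```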
|
jiyuanq/falcon-40b-instruct-gptq-128g-act
|
jiyuanq
| 2023-06-25T10:35:13Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"RefinedWeb",
"text-generation",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T08:31:32Z |
---
library_name: transformers
pipeline_tag: text-generation
---
falcon-40b-instruct quantized with GPTQ using the script in https://github.com/huggingface/text-generation-inference/pull/438
- group size: 128
- act order: true
- nsamples: 128
- dataset: wikitext2
|
abhishek-kumar/dreambooth_test
|
abhishek-kumar
| 2023-06-25T10:34:42Z | 30 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-24T16:02:54Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - abhishek-kumar/output
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
|
Omogo/xlm-roberta-base-finetuned-panx-de
|
Omogo
| 2023-06-25T10:27:58Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-25T07:39:34Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8602627537962806
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1355
- F1: 0.8603
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2574 | 1.0 | 525 | 0.1627 | 0.8221 |
| 0.1295 | 2.0 | 1050 | 0.1435 | 0.8467 |
| 0.0815 | 3.0 | 1575 | 0.1355 | 0.8603 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
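### Example usage
A usage sketch (not part of the auto-generated card): the checkpoint can be loaded with the `token-classification` pipeline for German NER; the sentence is illustrative.
```python
from transformers import pipeline

# Named-entity recognition with the fine-tuned XLM-R checkpoint
ner = pipeline(
    "token-classification",
    model="Omogo/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)

print(ner("Angela Merkel besuchte im Juni Berlin."))
```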
|
TheBloke/orca_mini_3B-GGML
|
TheBloke
| 2023-06-25T10:25:04Z | 0 | 59 |
transformers
|
[
"transformers",
"en",
"dataset:psmathur/alpaca_orca",
"dataset:psmathur/dolly-v2_orca",
"dataset:psmathur/WizardLM_Orca",
"arxiv:2306.02707",
"license:mit",
"region:us"
] | null | 2023-06-24T22:33:56Z |
---
inference: false
license: mit
language:
- en
library_name: transformers
datasets:
- psmathur/alpaca_orca
- psmathur/dolly-v2_orca
- psmathur/WizardLM_Orca
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Pankaj Mathur's Orca Mini 3B GGML
These files are GGML format model files for [Pankaj Mathur's Orca Mini 3B](https://huggingface.co/psmathur/orca_mini_3b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_3B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_3b)
## Prompt template:
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
prompt
### Response:
```
or
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
prompt
### Input:
input
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These cannot be provided with Open Llama 3B models at this time, due to an issue in llama.cpp.
This is being worked on in the llama.cpp repo. More issues here: https://github.com/ggerganov/llama.cpp/issues/1919
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| orca-mini-3b.ggmlv3.q4_0.bin | q4_0 | 4 | 1.93 GB | 4.43 GB | Original llama.cpp quant method, 4-bit. |
| orca-mini-3b.ggmlv3.q4_1.bin | q4_1 | 4 | 2.14 GB | 4.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| orca-mini-3b.ggmlv3.q5_0.bin | q5_0 | 5 | 2.36 GB | 4.86 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| orca-mini-3b.ggmlv3.q5_1.bin | q5_1 | 5 | 2.57 GB | 5.07 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| orca-mini-3b.ggmlv3.q8_0.bin | q8_0 | 8 | 3.64 GB | 6.14 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m orca-mini-3b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\nYou are an story writing assistant who writes very long, detailed and interesting stories\n\n### User:\nWrite a story about llamas\n\n### Input:\n{input}\n\n### Response:\n"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Pankaj Mathur's Orca Mini 3B
# orca_mini_3b
An [OpenLLaMa-3B](https://github.com/openlm-research/open_llama) model trained on explain-tuned datasets, created using instructions and inputs from the WizardLM, Alpaca & Dolly-V2 datasets and applying the Orca Research Paper's dataset construction approaches.
# Dataset
We built explain-tuned versions of the [WizardLM dataset (~70K)](https://github.com/nlpxucan/WizardLM), [Alpaca dataset (~52K)](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset (~15K)](https://github.com/databrickslabs/dolly) using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).
We leverage all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.
This helps the student model (i.e. this model) learn the ***thought*** process of the teacher model, ChatGPT (gpt-3.5-turbo-0301).
Please see the example usage below for how the **System** prompt is added before each **instruction**.
# Training
The training configurations are provided in the table below.
Training ran on 8x A100 (80 GB) GPUs and took around 4 hours, at a cost of $48 on [Lambda Labs](https://lambdalabs.com).
We used DeepSpeed with fully sharded data parallelism, also known as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/), writing our own fine-tuning scripts and leveraging some of the model training code provided by the [OpenAlpaca repo](https://github.com/yxuansu/OpenAlpaca).
Here are some of the parameters used during training (an illustrative configuration sketch follows the table):
| Parameter | Value |
|:-------------:|:-------------:|
|*batch_size*|64|
|*train_micro_batch_size_per_gpu*|4|
|*gradient_accumulation_steps*|2|
|*Learning rate*|2e-5|
|*Max length*|1024|
|*Epochs*|3|
|*Optimizer*|AdamW|
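The original fine-tuning scripts are not published in this card. Purely as an illustration, the hyperparameters above could be wired into the Hugging Face `Trainer` with a ZeRO stage 3 DeepSpeed config roughly as follows — this is a sketch under stated assumptions, not the authors' actual setup, and the config values are taken from the table above.
```python
# Illustrative only — not the authors' actual training script.
from transformers import Trainer, TrainingArguments

# Minimal ZeRO stage 3 DeepSpeed config mirroring the table above (assumption).
ds_config = {
    "zero_optimization": {"stage": 3},
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 2,
}

training_args = TrainingArguments(
    output_dir="orca_mini_3b",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,   # 8 GPUs x 4 x 2 = global batch size 64
    learning_rate=2e-5,
    optim="adamw_torch",
    deepspeed=ds_config,             # or a path to a ds_config.json file
)

# trainer = Trainer(model=model, args=training_args, train_dataset=train_ds)
# trainer.train()
```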
# Example Usage
Below is an example of how to use this model:
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
# Hugging Face model_path
model_path = 'psmathur/orca_mini_3b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)

# Generate text from a system prompt, an instruction and an optional input
def generate_text(system, instruction, input=None):
    # Assemble the prompt in the format the model was trained on
    if input:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
    else:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"

    tokens = tokenizer.encode(prompt)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to('cuda')

    # Sampling settings
    instance = {'input_ids': tokens, 'top_p': 1.0, 'temperature': 0.7, 'generate_len': 1024, 'top_k': 50}

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance['generate_len'],
            use_cache=True,
            do_sample=True,
            top_p=instance['top_p'],
            temperature=instance['temperature'],
            top_k=instance['top_k'],
        )

    # Strip the prompt tokens and decode only the newly generated text
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    return f'[!] Response: {string}'
# Sample Test Instruction Used by Youtuber Sam Witteveen https://www.youtube.com/@samwitteveenai
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Write a letter to Sam Altman, CEO of OpenAI, requesting him to convert GPT4 a private model by OpenAI to an open source project'
print(generate_text(system, instruction))
```
```
[!] Response:
Dear Sam Altman,
I am writing to request that you convert the GPT4 private model developed by OpenAI to an open source project. As a user of OpenAI, I have been waiting for the day when I can use the advanced natural language processing capabilities of GPT4 in a more open and accessible way.
While OpenAI has made significant progress in developing AI applications, it has primarily focused on building private models that are not accessible to the general public. However, with the recent release of GPT-3, there is a growing demand for more open and accessible AI tools.
Converting GPT4 to an open source project would allow for greater transparency, collaboration, and innovation. It would also help to build trust in the technology and ensure that it is used ethically and responsibly.
I urge you to consider converting GPT4 to an open source project. This would be a significant contribution to the AI community and would help to create a more open and accessible future.
Thank you for your consideration.
Sincerely,
[Your Name]
```
**P.S. I am #opentowork and #collaboration, if you can help, please reach out to me at psmathur.public@gmail.com**
Next Goals:
1) Try more data, like actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions)
2) Provide more options for a text generation UI (maybe https://github.com/oobabooga/text-generation-webui)
3) Provide a 4-bit GGML/GPTQ quantized model (maybe [TheBloke](https://huggingface.co/TheBloke) can help here)
Limitations & Biases:
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Disclaimer:
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
Please consult an attorney before using this model for commercial purposes.
Citation:
If you found wizardlm_alpaca_dolly_orca_open_llama_3b useful in your research or applications, please kindly cite using the following BibTeX:
```
@misc{wizardlm_alpaca_dolly_orca_open_llama_3b,
author = {Pankaj Mathur},
title = {wizardlm_alpaca_dolly_orca_open_llama_3b: An explain tuned OpenLLaMA-3b model on custom wizardlm, alpaca, & dolly datasets},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://github.com/pankajarm/wizardlm_alpaca_dolly_orca_open_llama_3b}, \url{https://huggingface.co/psmathur/wizardlm_alpaca_dolly_orca_open_llama_3b}},
}
```
```
@software{openlm2023openllama,
author = {Xinyang Geng and Hao Liu},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@misc{openalpaca,
author = {Yixuan Su and Tian Lan and Deng Cai},
title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
```
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
|
Sp1786/mutliclass-sentiment-analysis-bert
|
Sp1786
| 2023-06-25T10:22:55Z | 4 | 0 |
transformers
|
[
"transformers",
"bert",
"code",
"text-classification",
"en",
"dataset:Sp1786/multiclass-sentiment-analysis-dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-21T11:23:59Z |
---
license: apache-2.0
datasets:
- Sp1786/multiclass-sentiment-analysis-dataset
language:
- en
metrics:
- bleu
- sacrebleu
library_name: transformers
pipeline_tag: text-classification
tags:
- code
---
|
c72599/Reinforce-CartPole-v1
|
c72599
| 2023-06-25T10:22:44Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T10:22:36Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kbondar17/test-trainer
|
kbondar17
| 2023-06-25T10:12:41Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-25T10:06:32Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: test-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4009
- F1: 0.6363
- Roc Auc: 0.7682
- Accuracy: 0.6079
## Model description
More information needed
## Intended uses & limitations
More information needed
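As a starting point, here is a minimal inference sketch; it assumes the checkpoint is published on the Hub under this repo id and that the labels come from the fine-tuned config.
```python
# Minimal sketch, assuming the checkpoint is available as kbondar17/test-trainer.
from transformers import pipeline

classifier = pipeline("text-classification", model="kbondar17/test-trainer")
print(classifier("This product exceeded my expectations."))
```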
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 125 | 0.2975 | 0.5710 | 0.7129 | 0.4693 |
| No log | 2.0 | 250 | 0.3742 | 0.6226 | 0.7621 | 0.6013 |
| No log | 3.0 | 375 | 0.4009 | 0.6363 | 0.7682 | 0.6079 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
dhruvil237/userutterance_classification_verplus
|
dhruvil237
| 2023-06-25T10:05:26Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"doi:10.57967/hf/0811",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-05T12:20:52Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: userutterance_classification_verplus
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9619354838709677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# userutterance_classification_verplus
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2270
- Accuracy: 0.9619
## Model description
More information needed
## Intended uses & limitations
More information needed
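As a starting point, here is a minimal inference sketch; it assumes the checkpoint is published on the Hub under this repo id and predicts one of the clinc_oos (plus) intent labels for a user utterance.
```python
# Minimal sketch, assuming the checkpoint is available as
# dhruvil237/userutterance_classification_verplus.
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="dhruvil237/userutterance_classification_verplus",
)
print(intent_classifier("what's the weather like in boston tomorrow"))
```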
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.0219 | 0.21 | 200 | 4.9813 | 0.0077 |
| 4.8915 | 0.42 | 400 | 4.5741 | 0.1155 |
| 4.2736 | 0.63 | 600 | 3.5359 | 0.4719 |
| 3.2701 | 0.84 | 800 | 2.4291 | 0.7429 |
| 2.3578 | 1.05 | 1000 | 1.5793 | 0.8413 |
| 1.5695 | 1.26 | 1200 | 1.0029 | 0.8994 |
| 1.0412 | 1.47 | 1400 | 0.6475 | 0.9187 |
| 0.7034 | 1.68 | 1600 | 0.4439 | 0.9303 |
| 0.501 | 1.89 | 1800 | 0.3400 | 0.9381 |
| 0.3187 | 2.1 | 2000 | 0.2793 | 0.9439 |
| 0.2185 | 2.31 | 2200 | 0.2538 | 0.9490 |
| 0.1669 | 2.52 | 2400 | 0.2210 | 0.9523 |
| 0.1081 | 2.73 | 2600 | 0.2225 | 0.9519 |
| 0.1004 | 2.94 | 2800 | 0.2136 | 0.9555 |
| 0.0665 | 3.14 | 3000 | 0.2078 | 0.9561 |
| 0.0509 | 3.35 | 3200 | 0.2155 | 0.9568 |
| 0.05 | 3.56 | 3400 | 0.2107 | 0.9581 |
| 0.0527 | 3.77 | 3600 | 0.2171 | 0.9568 |
| 0.0447 | 3.98 | 3800 | 0.2128 | 0.9590 |
| 0.0259 | 4.19 | 4000 | 0.2099 | 0.9587 |
| 0.0279 | 4.4 | 4200 | 0.2179 | 0.9577 |
| 0.0176 | 4.61 | 4400 | 0.2191 | 0.9574 |
| 0.0288 | 4.82 | 4600 | 0.2216 | 0.9590 |
| 0.0328 | 5.03 | 4800 | 0.2237 | 0.9606 |
| 0.0154 | 5.24 | 5000 | 0.2241 | 0.9616 |
| 0.0157 | 5.45 | 5200 | 0.2265 | 0.9603 |
| 0.023 | 5.66 | 5400 | 0.2276 | 0.9613 |
| 0.0178 | 5.87 | 5600 | 0.2270 | 0.9619 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
joohwan/2222333l-gd
|
joohwan
| 2023-06-25T10:05:13Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-25T08:10:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: 2222333l-gd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2222333l-gd
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0984
- Wer: 13.1908
## Model description
More information needed
## Intended uses & limitations
More information needed
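As a starting point, here is a minimal transcription sketch; it assumes the fine-tuned checkpoint is published on the Hub under this repo id, and the audio file path is a placeholder.
```python
# Minimal sketch, assuming the checkpoint is available as joohwan/2222333l-gd.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="joohwan/2222333l-gd")
print(asr("sample.wav")["text"])  # replace sample.wav with your audio file
```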
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0206 | 0.18 | 500 | 0.1634 | 17.8738 |
| 0.0496 | 0.36 | 1000 | 0.1403 | 12.4680 |
| 0.0516 | 0.54 | 1500 | 0.1123 | 10.2394 |
| 0.0755 | 0.72 | 2000 | 0.0984 | 13.1908 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mrizalf7/xlm-r-qa-squad-retrained
|
mrizalf7
| 2023-06-25T09:57:29Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-13T19:17:39Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-finetuned-small-squad-indonesian-rizal-4-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-finetuned-small-squad-indonesian-rizal-4-2
This model is a fine-tuned version of [mrizalf7/xlm-roberta-finetuned-small-squad-indonesian-rizal-4](https://huggingface.co/mrizalf7/xlm-roberta-finetuned-small-squad-indonesian-rizal-4) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.4152
## Model description
More information needed
## Intended uses & limitations
More information needed
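As a starting point, here is a minimal extractive QA sketch; it assumes the checkpoint is published on the Hub under this repo id, and the question and context are placeholders.
```python
# Minimal sketch, assuming the checkpoint is available as
# mrizalf7/xlm-r-qa-squad-retrained.
from transformers import pipeline

qa = pipeline("question-answering", model="mrizalf7/xlm-r-qa-squad-retrained")
result = qa(
    question="Siapa presiden pertama Indonesia?",
    context="Soekarno adalah presiden pertama Indonesia.",
)
print(result["answer"])
```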
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 6.1326 |
| No log | 2.0 | 2 | 6.1326 |
| No log | 3.0 | 3 | 5.4152 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bogdancazan/t5-small-newsela-biendata-with-domain-adaptation
|
bogdancazan
| 2023-06-25T09:45:44Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-19T11:56:49Z |
```python
training_args = TrainingArguments(
    output_dir='t5-small-newsela-biendata-with-domain-adaptation',
    num_train_epochs=20,
    warmup_steps=250,
    per_device_train_batch_size=BATCH_SIZE,
    weight_decay=0.01,
    learning_rate=2e-4,
    fp16=True,
    optim="adafactor",
)
```

| Step | Training Loss |
|:----:|:-------------:|
| 500  | 35.466600 |
| 1000 | 25.795400 |
| 1500 | 10.923200 |
| 2000 | 4.515500 |

```
TrainOutput(global_step=2320, training_loss=16.92537920721646, metrics={'train_runtime': 628.0033, 'train_samples_per_second': 472.418, 'train_steps_per_second': 3.694, 'total_flos': 0.0, 'train_loss': 16.92537920721646, 'epoch': 20.0})
```
|