| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, UTC], 2020-02-15 11:33:14 to 2025-09-07 18:30:29) | downloads (int64, 0-223M) | likes (int64, 0-11.7k) | library_name (string, 544 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 to 2025-09-07 18:30:28) | card (string, 11-1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
morell23/epi-noiseoffset2
|
morell23
| 2023-08-09T09:52:34Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-09T09:52:34Z |
---
license: creativeml-openrail-m
---
|
jayavibhav/mpnet-classification-10ksamples
|
jayavibhav
| 2023-08-09T09:47:00Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mpnet",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mpnet-base",
"base_model:finetune:microsoft/mpnet-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T09:09:42Z |
---
base_model: microsoft/mpnet-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mpnet-classification-10ksamples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mpnet-classification-10ksamples
This model is a fine-tuned version of [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1894
- Accuracy: 0.9683
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1426 | 1.0 | 1250 | 0.3418 | 0.9266 |
| 0.0229 | 2.0 | 2500 | 0.1894 | 0.9683 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
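A minimal inference sketch with the `transformers` pipeline (the label names come from the model's config, which the card does not document):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jayavibhav/mpnet-classification-10ksamples")
print(classifier("This movie was surprisingly good."))  # e.g. [{'label': ..., 'score': ...}]
```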
|
jayavibhav/roberta-classification-10ksamples
|
jayavibhav
| 2023-08-09T09:43:45Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T08:25:40Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-classification-10ksamples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-classification-10ksamples
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0123
- Accuracy: 0.9983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.066 | 1.0 | 1250 | 0.0775 | 0.9877 |
| 0.0174 | 2.0 | 2500 | 0.0123 | 0.9983 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Drake123/my-pet-cat
|
Drake123
| 2023-08-09T09:37:03Z | 11 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-09T09:32:46Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat Dreambooth model trained by Drake123 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: VJCET6
Sample pictures of this concept:
|
MRNH/mbart-italian-grammar-corrector
|
MRNH
| 2023-08-09T09:32:13Z | 139 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"grammatical error correction",
"GEC",
"italian",
"it",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-28T15:01:42Z |
---
language:
- it
pipeline_tag: text2text-generation
metrics:
- f1
tags:
- grammatical error correction
- GEC
- italian
---
This is a fine-tuned version of multilingual BART (mBART), trained for Grammatical Error Correction in Italian on the public MERLIN dataset.
To initialize the model:
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("MRNH/mbart-italian-grammar-corrector")
```
To generate text using the model:
```python
tokenizer = MBart50TokenizerFast.from_pretrained("MRNH/mbart-italian-grammar-corrector", src_lang="it_IT", tgt_lang="it_IT")

inputs = tokenizer("I was here yesterday to studying", text_target="I was here yesterday to study", return_tensors="pt")
output = model.generate(inputs["input_ids"],
                        attention_mask=inputs["attention_mask"],
                        forced_bos_token_id=tokenizer.lang_code_to_id["it_IT"])
```
Training of the model is performed using the following loss computation based on the model output `h`:
```python
h = model(input_ids=inputs["input_ids"],
          attention_mask=inputs["attention_mask"],
          labels=inputs["labels"])
loss, logits = h.loss, h.logits
```
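To recover the corrected sentence from the generated ids, a small usage sketch (not part of the original card):
```python
corrected = tokenizer.batch_decode(output, skip_special_tokens=True)
print(corrected)
```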
|
rjwittams/ppo-Lander
|
rjwittams
| 2023-08-09T09:25:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T09:25:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.22 +/- 25.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
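A minimal loading sketch for the TODO above (the checkpoint filename is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; inspect the repo for the actual .zip name.
checkpoint = load_from_hub(repo_id="rjwittams/ppo-Lander", filename="ppo-Lander.zip")
model = PPO.load(checkpoint)
```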
|
carvychen/china_chic
|
carvychen
| 2023-08-09T09:21:49Z | 4 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-09T06:16:43Z |
---
license: openrail++
base_model: ../../pretrained/stable-diffusion-xl-base-1.0
instance_prompt: chinachic1
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - carvychen/china_chic
These are LoRA adaptation weights for ../../pretrained/stable-diffusion-xl-base-1.0. The weights were trained on chinachic1 using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: True.
Special VAE used for training: ../../pretrained/sdxl-vae-fp16-fix.
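A minimal diffusers sketch for applying these LoRA weights, assuming the public SDXL base checkpoint stands in for the local path above:
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("carvychen/china_chic")  # the LoRA weights from this repo

image = pipe("a photo of chinachic1", num_inference_steps=30).images[0]
image.save("chinachic1.png")
```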
|
nathanmo/roberta-large-peft-lora
|
nathanmo
| 2023-08-09T09:04:39Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T09:04:28Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
weav-geng/llama2-qlora-finetuned-midjourney-new-v7
|
weav-geng
| 2023-08-09T08:56:55Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T08:56:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
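A sketch of how such an adapter could be loaded with the 8-bit config listed above; the base model id is an assumption, since the card does not name it:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(load_in_8bit=True)  # matches the config above

base_id = "meta-llama/Llama-2-7b-hf"  # assumption: the card does not state the base model
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

model = PeftModel.from_pretrained(base, "weav-geng/llama2-qlora-finetuned-midjourney-new-v7")
```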
|
HasanErdin/QL-Taxi_v3
|
HasanErdin
| 2023-08-09T08:55:59Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T08:55:56Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: QL-Taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.42 +/- 2.84
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="HasanErdin/QL-Taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
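The snippet above assumes a `load_from_hub` helper from the course notebook rather than a published package; a minimal sketch of such a helper, assuming the Q-table is stored as a pickle file:
```python
import pickle

from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download and unpickle a Q-learning model dict from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

# The `gym.make(model["env_id"])` call above additionally needs: import gymnasium as gym
```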
|
redstonehero/yiffymix_32
|
redstonehero
| 2023-08-09T08:51:06Z | 21 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-09T08:16:53Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
Adulala20/dqn-PongNoFrameskip-v4
|
Adulala20
| 2023-08-09T08:45:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PongNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T07:19:32Z |
---
library_name: stable-baselines3
tags:
- PongNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PongNoFrameskip-v4
type: PongNoFrameskip-v4
metrics:
- type: mean_reward
value: -7.00 +/- 9.43
name: mean_reward
verified: false
---
# **DQN** Agent playing **PongNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **PongNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env PongNoFrameskip-v4 -orga Adulala20 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env PongNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env PongNoFrameskip-v4 -orga Adulala20 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env PongNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env PongNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env PongNoFrameskip-v4 -f logs/ -orga Adulala20
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
## Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
jakezou/pyramid
|
jakezou
| 2023-08-09T08:43:56Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-09T08:43:53Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jakezou/pyramid
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
wangxso/ppo-Huggy
|
wangxso
| 2023-08-09T08:32:51Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-08-09T08:32:41Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: wangxso/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
echidna/UtilLora
|
echidna
| 2023-08-09T08:27:14Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-26T03:00:13Z |
---
license: creativeml-openrail-m
---
- LoRAs I made for my own use are posted here
- All of them are for the SD 1.x family
---
# Maid outfit adjustment LoRA
(2023-07-26)
[maidbikini_v5](maidbikini_v5.safetensors)

- Changes how revealing the outfit is while keeping the characteristic features of a maid uniform
- Applying it with a positive weight gives a bikini maid outfit, with a negative weight a traditional maid uniform
- It can also be used on non-maid characters, but maid features such as frills may appear (needs verification)
- Not recommended with Hires.Fix
- Recommended prompt words: maid, maid headdress, maid apron
---
# Tibetan sand fox face LoRA
(2023-08-09)
[tibesuna_gao_v1](tibesuna_gao_v1.safetensors)

- Gives the face an expression like a Tibetan sand fox
- The eyes become narrow and mostly dark, and the mouth becomes triangular.
- A strength of about 1.0 to 1.5 is recommended
- Front-facing faces are recommended; profiles will probably break down.
- Small faces tend to get crushed, so fix them with i2i or inpaint as needed
- Recommended prompt words: triangle mouth, open mouth, black eyes, half-closed eyes
|
LYJ123123/segformer-b0-scene-parse-150
|
LYJ123123
| 2023-08-09T08:27:09Z | 34 | 0 |
transformers
|
[
"transformers",
"pytorch",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-08-09T08:18:40Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9393
- Mean Iou: 0.0036
- Mean Accuracy: 0.0214
- Overall Accuracy: 0.0867
- Per Category Iou: [0.16545709180085544, 0.0, 0.0, 0.0, 0.0, 0.058472783227543755, nan, 0.0, 0.0, 0.0, 0.007622227522060578, nan, 3.137911197113122e-05, 0.0, 0.058198708972300964, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.041340794105739556, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0024778587375187066, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.016656203154428628, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0007263579350175389, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0697279103015839, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.012292855202390655, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0]
- Per Category Accuracy: [0.18326833008776816, nan, 0.0, 0.0, 0.0, 0.09695526450076544, nan, nan, 0.0, nan, 0.009522447471605468, nan, 0.0035169988276670576, 0.0, 0.06740772973614463, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.07055362102652567, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0025769907891715358, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.018805149717922753, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0010196214966054064, nan, nan, nan, nan, nan, nan, 0.23142163272931066, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.019714628036161638, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 4.8574 | 1.0 | 20 | 4.9393 | 0.0036 | 0.0214 | 0.0867 | [0.16545709180085544, 0.0, 0.0, 0.0, 0.0, 0.058472783227543755, nan, 0.0, 0.0, 0.0, 0.007622227522060578, nan, 3.137911197113122e-05, 0.0, 0.058198708972300964, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.041340794105739556, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0024778587375187066, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.016656203154428628, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0007263579350175389, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0697279103015839, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.012292855202390655, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0] | [0.18326833008776816, nan, 0.0, 0.0, 0.0, 0.09695526450076544, nan, nan, 0.0, nan, 0.009522447471605468, nan, 0.0035169988276670576, 0.0, 0.06740772973614463, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.07055362102652567, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0025769907891715358, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.018805149717922753, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0010196214966054064, nan, nan, nan, nan, nan, nan, 0.23142163272931066, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.019714628036161638, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan] |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
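A minimal inference sketch, assuming the repository includes the image processor configuration (otherwise load the processor from nvidia/mit-b0):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "LYJ123123/segformer-b0-scene-parse-150"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("scene.jpg")  # any RGB image; the path is a placeholder
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits   # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]        # per-pixel class ids
```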
|
norkart/sammendrag
|
norkart
| 2023-08-09T08:22:58Z | 127 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"text-generation-inference",
"no",
"nb",
"dataset:navjordj/SNL_summarization",
"dataset:navjordj/VG_summarization",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-08T09:51:28Z |
---
license: apache-2.0
datasets:
- navjordj/SNL_summarization
- navjordj/VG_summarization
language:
- 'no'
- nb
tags:
- text-generation-inference
widget:
- text: >-
Albrecht hadde et stort engasjement for sjeldent oppført repertoar, særlig fra romantikken, og for samtidskomponister. Han var aktiv som gjestedirigent ved mange av ledende orkestrene i Europa og gjestet Oslo-Filharmonien ved flere anledninger. Bakgrunn Albrecht var født og oppvokst i Essen. Faren var musikkviteren Hans Albrecht. Albrecht studerte først ved høyskolen i Kiel og deretter i Hamburg, hvor han hadde dirgenten Wilhelm Brückner-Rüggeberg som lærer. Karriere Albrecht vant førsteprisen ved den internasjonale konkurransen for unge dirigenter i Besançon i 1957 (Concours International de jeunes chefs d’orchestre de Besançon). Han ble engasjert som repetitør ved operaen i Stuttgart og ble så kapellmester ved operaen i Mainz.I 1963 ble Albrecht utnevnt til generalmusikkdirektør i Lübeck. Han var da 28 år gammel og den yngste som hadde en slik stilling i Tyskland. Han gikk videre til den tilsvarende stillingen i Kassel i 1966–1972.Fra 1972 var Albrecht engasjert som førstedirigent ved Deutsche Oper i Berlin. Han var sjefdirigent for Tonhalle-orkesteret i Zürich 1975–1980. Etter noen år som frilanser ble han musikksjef ved Staatsoper Hamburg i 1988–1997.I disse årene dirigerte Albrecht en rekke verker av samtidskomponister, som Rolf Liebermann, György Ligeti, Hans Werner Henze og Alfred Schnittke. Mye oppmerksomhet fikk uroppførelsen av Shakespeare-operaen Lear av Aribert Reimann i München i 1978.Albrecht var sjefdirigent for Tsjekkisk Filharmonisk Orkester i 1993–1996.
---
This model is based on BRIO (from Yale) and trained for summarization in Norwegian. The dataset it was trained on consists of data from SNL and VG articles.
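A minimal summarization sketch with the `transformers` pipeline; the generation lengths are placeholders:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="norkart/sammendrag")
text = "Albrecht hadde et stort engasjement for sjeldent oppført repertoar ..."  # Norwegian article text
print(summarizer(text, max_length=128, min_length=30)[0]["summary_text"])
```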
|
roflememe/roflememe-rvc-models
|
roflememe
| 2023-08-09T08:21:41Z | 0 | 2 | null |
[
"music",
"rvc",
"audio-to-audio",
"en",
"uk",
"license:mit",
"region:us"
] |
audio-to-audio
| 2023-07-22T08:17:27Z |
---
license: mit
language:
- en
- uk
tags:
- music
- rvc
pipeline_tag: audio-to-audio
---
# roflememe's RVC/V2 models

## Ukrainian:
Тут я публікую свої архівні моделі, які використовував для своїх каверів на [YouTube](https://www.youtube.com/@roflememe) та [TikTok](https://tiktok.com/@roflememe). Зокрема, тут є моделі, створені на попередній версії "**RVC**", але я буду старатись позначати їх окремо (як і пресети від "harvest" до "magnio-creepe").
Більшість моделів складаються з голосів Українських виконавців і знаменитостей.
Якщо ви бажаєте використовувати мої моделі у своїх відео, будь ласка, по можливості позначайте авторство і посилання на мій профіль у Hugging Face - [roflememe](https://huggingface.co/roflememe).
## English:
Here I publish my archived models that I used for my covers on [YouTube](https://www.youtube.com/@roflememe) and [TikTok](https://tiktok.com/@roflememe). In particular, there are models created on the previous version of "**RVC**", but I will try to mark them separately (as well as presets from "harvest" to "magnio-creepe").
Most of the models consist of the voices of Ukrainian artists and celebrities.
If you would like to use my models in your videos, please, if possible, credit me as the author and put a link to my Hugging Face profile - [roflememe](https://huggingface.co/roflememe).
# Links: [RVC](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI).
**За будь якими питаннями/For any questions:** roflememe@gmail.com
###### roflememe 2023.
|
jayavibhav/distilbert-classification-10ksamples
|
jayavibhav
| 2023-08-09T08:21:27Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-09T08:03:03Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-classification-10ksamples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-classification-10ksamples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1977
- Accuracy: 0.96
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1853 | 1.0 | 625 | 0.1682 | 0.9577 |
| 0.0436 | 2.0 | 1250 | 0.1977 | 0.96 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
kyleeasterly/openllama-7b_purple-aerospace-v1-80-64
|
kyleeasterly
| 2023-08-09T08:16:03Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T08:10:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
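A sketch of recreating the 4-bit NF4 config listed above and attaching this adapter; the OpenLLaMA base id is an assumption, since the card does not name it:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)  # mirrors the quantization config listed above

base_id = "openlm-research/open_llama_7b"  # assumption: the card does not state the base model
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, "kyleeasterly/openllama-7b_purple-aerospace-v1-80-64")
```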
|
kyleeasterly/openllama-7b_purple-aerospace-v1-80-32
|
kyleeasterly
| 2023-08-09T08:15:39Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T08:10:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v1-80-28
|
kyleeasterly
| 2023-08-09T08:15:01Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T08:10:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v1-80-26
|
kyleeasterly
| 2023-08-09T08:14:52Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T08:10:02Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v1-80-24
|
kyleeasterly
| 2023-08-09T08:14:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T08:09:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v1-80-22
|
kyleeasterly
| 2023-08-09T08:14:05Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T08:09:39Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v1-80-16
|
kyleeasterly
| 2023-08-09T08:12:49Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T08:09:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v1-80-12
|
kyleeasterly
| 2023-08-09T08:12:46Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T08:09:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
NEO946B/ppo-PyramidsTraining
|
NEO946B
| 2023-08-09T08:12:37Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-09T08:12:19Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: NEO946B/ppo-PyramidsTraining
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
kyleeasterly/openllama-7b_purple-aerospace-v1-80-8
|
kyleeasterly
| 2023-08-09T08:12:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T08:09:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v1-80-2
|
kyleeasterly
| 2023-08-09T08:11:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T08:08:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v1-80-1
|
kyleeasterly
| 2023-08-09T08:10:25Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T08:08:44Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v1-80-0
|
kyleeasterly
| 2023-08-09T08:09:22Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T08:08:40Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-128
|
kyleeasterly
| 2023-08-09T08:07:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:46:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
mohsin-riad/SD-joepenna-4-people
|
mohsin-riad
| 2023-08-09T08:05:43Z | 0 | 0 | null |
[
"text-to-image",
"stable-diffusion",
"dreambooth",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-08-02T19:06:57Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- dreambooth
language:
- en
---
### SD 1.5 model trained by mohsin-riad
**Names of the persons and their corresponding tokens:**
- Anthony -> anthony man
- Liza -> liza woman
- AJ -> aj man
- Michael -> michael boy
**Try a prompt such as:**
```
RAW photo, close up portrait of michael boy wearing sunglass with blue eyes, detailed eyes, smiling, teeth, floral clothes, nature background, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
```
---
## Model Details
- Developed by: Robin Rombach, Patrick Esser
- Trainable tweaks by: Joe Penna
- Finetuned by: Mohsin Riad
- Model type: Diffusion-based text-to-image generation model
- Language(s): English
---
> Happy inferencing!
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-80
|
kyleeasterly
| 2023-08-09T08:05:04Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:46:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-64
|
kyleeasterly
| 2023-08-09T08:01:41Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:46:00Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-32
|
kyleeasterly
| 2023-08-09T07:59:38Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:45:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-30
|
kyleeasterly
| 2023-08-09T07:57:45Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:45:48Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-26
|
kyleeasterly
| 2023-08-09T07:54:37Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:45:28Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-24
|
kyleeasterly
| 2023-08-09T07:53:48Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:45:24Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-22
|
kyleeasterly
| 2023-08-09T07:52:53Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:44:42Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-17
|
kyleeasterly
| 2023-08-09T07:50:48Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:44:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-16
|
kyleeasterly
| 2023-08-09T07:50:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:44:25Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-12
|
kyleeasterly
| 2023-08-09T07:49:17Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:44:09Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-11
|
kyleeasterly
| 2023-08-09T07:48:58Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:44:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-200-9
|
kyleeasterly
| 2023-08-09T07:48:37Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:43:55Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
arminmrm93/a2c-PandaReachDense-v3-v2
|
arminmrm93
| 2023-08-09T07:45:16Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T07:39:45Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
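A minimal loading sketch for the TODO above (the checkpoint filename is an assumption; check the repo's file list, and note the environment itself needs `panda-gym` installed):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is an assumption; inspect the repository for the actual .zip name.
checkpoint = load_from_hub(repo_id="arminmrm93/a2c-PandaReachDense-v3-v2", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```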
|
annaovesnaatatt/ppo-lunarlander-v2
|
annaovesnaatatt
| 2023-08-09T07:43:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T07:43:33Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.39 +/- 19.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-115
|
kyleeasterly
| 2023-08-09T07:41:39Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:34:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-104
|
kyleeasterly
| 2023-08-09T07:41:06Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:34:16Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-80
|
kyleeasterly
| 2023-08-09T07:39:29Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:33:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-72
|
kyleeasterly
| 2023-08-09T07:39:13Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:33:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-30
|
kyleeasterly
| 2023-08-09T07:37:20Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:33:24Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-28
|
kyleeasterly
| 2023-08-09T07:37:04Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:33:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-24
|
kyleeasterly
| 2023-08-09T07:36:19Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:32:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-22
|
kyleeasterly
| 2023-08-09T07:35:29Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:31:57Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-16
|
kyleeasterly
| 2023-08-09T07:32:48Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:27:28Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-14
|
kyleeasterly
| 2023-08-09T07:32:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:27:24Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-10
|
kyleeasterly
| 2023-08-09T07:31:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:26:54Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-8
|
kyleeasterly
| 2023-08-09T07:30:59Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:26:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-6
|
kyleeasterly
| 2023-08-09T07:29:59Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:26:46Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
muhtasham/bert-small-finetuned-glue-rte
|
muhtasham
| 2023-08-09T07:29:50Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-27T15:51:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-small-finetuned-glue-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: rte
split: train
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.631768953068592
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-glue-rte
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8715
- Accuracy: 0.6318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 2.62 | 50 | 1.8285 | 0.6318 |
| No log | 5.26 | 100 | 2.0806 | 0.6462 |
| No log | 7.87 | 150 | 2.1598 | 0.6282 |
| No log | 10.51 | 200 | 2.2774 | 0.6318 |
| No log | 13.15 | 250 | 2.3676 | 0.6245 |
| No log | 15.77 | 300 | 2.4581 | 0.6462 |
| No log | 18.41 | 350 | 2.6175 | 0.6354 |
| No log | 21.05 | 400 | 2.6697 | 0.6354 |
| No log | 23.67 | 450 | 2.7717 | 0.6354 |
| 0.0101 | 26.31 | 500 | 2.7975 | 0.6462 |
| 0.0101 | 28.92 | 550 | 2.8532 | 0.6390 |
| 0.0101 | 31.56 | 600 | 2.9054 | 0.6209 |
| 0.0101 | 34.21 | 650 | 2.8715 | 0.6318 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
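A minimal inference sketch for this checkpoint; the label mapping (0 = entailment, 1 = not_entailment) is assumed to follow the GLUE/RTE default:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "muhtasham/bert-small-finetuned-glue-rte"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# RTE is a sentence-pair task, so premise and hypothesis are encoded together
inputs = tokenizer(
    "A man is playing a guitar on stage.",
    "Someone is performing music.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 0 = entailment, 1 = not_entailment (assumed)
```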
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-1
|
kyleeasterly
| 2023-08-09T07:26:25Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:25:44Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
kyleeasterly/openllama-7b_purple-aerospace-v2-300-0
|
kyleeasterly
| 2023-08-09T07:25:48Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T07:24:55Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0.dev0
|
peterandrew987/results
|
peterandrew987
| 2023-08-09T07:25:13Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"base_model:indobenchmark/indobart-v2",
"base_model:finetune:indobenchmark/indobart-v2",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-08T11:45:45Z |
---
license: mit
base_model: indobenchmark/indobart-v2
tags:
- generated_from_trainer
datasets:
- squad
metrics:
- rouge
model-index:
- name: results
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: squad
type: squad
config: plain_text
split: train[:1000]
args: plain_text
metrics:
- name: Rouge1
type: rouge
value: 16.2693
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [indobenchmark/indobart-v2](https://huggingface.co/indobenchmark/indobart-v2) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5998
- Rouge1: 16.2693
- Rouge2: 14.9952
- Rougel: 16.233
- Rougelsum: 16.2741
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:|
| 1.4819 | 1.0 | 200 | 1.5998 | 16.2693 | 14.9952 | 16.233 | 16.2741 | 20.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
maikaarda/bge-base-en-ggml
|
maikaarda
| 2023-08-09T07:11:26Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2023-08-09T05:29:32Z |
---
license: mit
---
ggml files of [bge-base-en](https://huggingface.co/BAAI/bge-base-en)
You can use these ggml files with https://github.com/skeskinen/bert.cpp.
### bge-base-en
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8630 | 39.56 | 0.5533 | 69.55 |
| f16 | 0.8630 | 32.95 | 0.5533 | 55.75 |
| q4_0 | 0.8627 | 27.23 | 0.5540 | 73.29 |
| q4_1 | 0.8654 | 29.78 | 0.5508 | 69.81 |
### all-MiniLM-L12-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8306 | 13.36 | 0.4117 | 21.23 |
| f16 | 0.8306 | 11.51 | 0.4119 | 20.08 |
| q4_0 | 0.8310 | 11.27 | 0.4183 | 20.81 |
| q4_1 | 0.8325 | 12.37 | 0.4093 | 19.38 |
### all-MiniLM-L6-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8201 | 6.83 | 0.4082 | 11.34 |
| f16 | 0.8201 | 6.17 | 0.4085 | 10.28 |
| q4_0 | 0.8175 | 5.45 | 0.3911 | 10.63 |
| q4_1 | 0.8223 | 6.79 | 0.4027 | 11.41 |
### bert-base-uncased
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.4738 | 52.38 | 0.3361 | 88.56 |
| f16 | 0.4739 | 33.24 | 0.3361 | 55.86 |
| q4_0 | 0.4940 | 33.93 | 0.3375 | 57.82 |
| q4_1 | 0.4612 | 36.86 | 0.3318 | 59.63 |
|
jakezou/dqn-SpaceInvadersNoFrameskip-v4
|
jakezou
| 2023-08-09T07:11:26Z | 9 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T07:10:48Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 690.50 +/- 356.79
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jakezou -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jakezou -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jakezou
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
maikaarda/gte-base-ggml
|
maikaarda
| 2023-08-09T07:02:49Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-08-09T05:24:01Z |
---
license: mit
---
ggml files of [thenlper/gte-base](https://huggingface.co/thenlper/gte-base)
You can use these ggml files with https://github.com/skeskinen/bert.cpp.
### gte-base
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8571 | 38.98 | 0.5087 | 69.09 |
| f16 | 0.8571 | 33.06 | 0.5086 | 53.57 |
| q4_0 | 0.8580 | 25.28 | 0.5171 | 69.32 |
| q4_1 | 0.8581 | 28.12 | 0.5113 | 66.38 |
### all-MiniLM-L12-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8306 | 13.36 | 0.4117 | 21.23 |
| f16 | 0.8306 | 11.51 | 0.4119 | 20.08 |
| q4_0 | 0.8310 | 11.27 | 0.4183 | 20.81 |
| q4_1 | 0.8325 | 12.37 | 0.4093 | 19.38 |
### all-MiniLM-L6-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8201 | 6.83 | 0.4082 | 11.34 |
| f16 | 0.8201 | 6.17 | 0.4085 | 10.28 |
| q4_0 | 0.8175 | 5.45 | 0.3911 | 10.63 |
| q4_1 | 0.8223 | 6.79 | 0.4027 | 11.41 |
### bert-base-uncased
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.4738 | 52.38 | 0.3361 | 88.56 |
| f16 | 0.4739 | 33.24 | 0.3361 | 55.86 |
| q4_0 | 0.4940 | 33.93 | 0.3375 | 57.82 |
| q4_1 | 0.4612 | 36.86 | 0.3318 | 59.63 |
|
NEO946B/ppo-SnowballTarget
|
NEO946B
| 2023-08-09T06:56:18Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-09T06:55:57Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: NEO946B/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
weav-geng/llama2-qlora-finetuned-midjourney-new-v6
|
weav-geng
| 2023-08-09T06:53:53Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T06:52:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
maikaarda/gte-large-ggml
|
maikaarda
| 2023-08-09T06:52:31Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2023-08-09T05:26:15Z |
---
license: mit
---
ggml files of [thenlper/gte-large](https://huggingface.co/thenlper/gte-large)
You can use these ggml files with https://github.com/skeskinen/bert.cpp.
### gte-large
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8606 | 127.58 | 0.5060 | 199.61 |
| f16 | 0.8606 | 103.89 | 0.5060 | 169.68 |
| q4_0 | 0.8589 | 80.85 | 0.5037 | 157.05 |
| q4_1 | 0.8605 | 90.13 | 0.5107 | 162.59 |
### all-MiniLM-L12-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8306 | 13.36 | 0.4117 | 21.23 |
| f16 | 0.8306 | 11.51 | 0.4119 | 20.08 |
| q4_0 | 0.8310 | 11.27 | 0.4183 | 20.81 |
| q4_1 | 0.8325 | 12.37 | 0.4093 | 19.38 |
### all-MiniLM-L6-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8201 | 6.83 | 0.4082 | 11.34 |
| f16 | 0.8201 | 6.17 | 0.4085 | 10.28 |
| q4_0 | 0.8175 | 5.45 | 0.3911 | 10.63 |
| q4_1 | 0.8223 | 6.79 | 0.4027 | 11.41 |
### bert-base-uncased
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.4738 | 52.38 | 0.3361 | 88.56 |
| f16 | 0.4739 | 33.24 | 0.3361 | 55.86 |
| q4_0 | 0.4940 | 33.93 | 0.3375 | 57.82 |
| q4_1 | 0.4612 | 36.86 | 0.3318 | 59.63 |
|
whywynn/Reinforce-CartPole-v1
|
whywynn
| 2023-08-09T06:46:02Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T06:45:51Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
CyberHarem/yuzuriha_jigokuraku
|
CyberHarem
| 2023-08-09T06:43:55Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/yuzuriha_jigokuraku",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-09T06:40:17Z |
---
license: mit
datasets:
- CyberHarem/yuzuriha_jigokuraku
pipeline_tag: text-to-image
tags:
- art
---
# Lora of yuzuriha_jigokuraku
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/yuzuriha_jigokuraku.pt` as the embedding and `1500/yuzuriha_jigokuraku.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `yuzuriha_jigokuraku`.**
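For illustration only, the usual `diffusers` pattern for combining a textual-inversion embedding with a LoRA is sketched below; whether these HCP-Diffusion exports load directly this way is an assumption, and the base-model name and file paths are placeholders:
```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint is a placeholder; use the SD model the LoRA was trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the pt file as a textual-inversion embedding and the safetensors file as a LoRA
pipe.load_textual_inversion("1500/yuzuriha_jigokuraku.pt", token="yuzuriha_jigokuraku")
pipe.load_lora_weights("1500", weight_name="yuzuriha_jigokuraku.safetensors")

image = pipe("yuzuriha_jigokuraku, 1girl, best quality").images[0]
image.save("yuzuriha.png")
```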
These are available steps:
| Steps | pattern_1 | bikini | free | nude | Download |
|--------:|:-----------------------------------------------|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:-----------------------------------------|
| 1500 |  |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/yuzuriha_jigokuraku.zip) |
| 1400 |  |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/yuzuriha_jigokuraku.zip) |
| 1300 |  |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/yuzuriha_jigokuraku.zip) |
| 1200 |  |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/yuzuriha_jigokuraku.zip) |
| 1100 |  |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/yuzuriha_jigokuraku.zip) |
| 1000 |  |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/yuzuriha_jigokuraku.zip) |
| 900 |  |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/yuzuriha_jigokuraku.zip) |
| 800 |  |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/yuzuriha_jigokuraku.zip) |
| 700 |  |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/yuzuriha_jigokuraku.zip) |
| 600 |  |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/yuzuriha_jigokuraku.zip) |
| 500 |  |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/yuzuriha_jigokuraku.zip) |
| 400 |  |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/yuzuriha_jigokuraku.zip) |
| 300 |  |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/yuzuriha_jigokuraku.zip) |
| 200 |  |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/yuzuriha_jigokuraku.zip) |
| 100 |  |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/yuzuriha_jigokuraku.zip) |
|
redstonehero/facebombmix_v1
|
redstonehero
| 2023-08-09T06:41:36Z | 21 | 1 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-09T03:57:17Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/fantexiv09beta
|
redstonehero
| 2023-08-09T06:41:21Z | 21 | 1 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-09T03:50:21Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/sunlightmix
|
redstonehero
| 2023-08-09T06:41:15Z | 21 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-09T03:49:43Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/henmixreal_v40
|
redstonehero
| 2023-08-09T06:39:58Z | 21 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-09T03:49:07Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/luckystrikemix_lovelyladyv105
|
redstonehero
| 2023-08-09T06:39:52Z | 21 | 0 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-09T03:48:58Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
redstonehero/meichidarkv4
|
redstonehero
| 2023-08-09T06:39:36Z | 26 | 1 |
diffusers
|
[
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-09T03:48:30Z |
---
license: creativeml-openrail-m
library_name: diffusers
---
|
ariobsessedwithai/axel
|
ariobsessedwithai
| 2023-08-09T06:38:14Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"license:unknown",
"region:us"
] | null | 2023-08-09T06:30:13Z |
---
license: unknown
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Hanwoon/llama-2-2b-miniguanaco-test
|
Hanwoon
| 2023-08-09T06:34:34Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-08-08T07:59:53Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
peterandrew987/modified-qa
|
peterandrew987
| 2023-08-09T06:34:19Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"base_model:indobenchmark/indobart-v2",
"base_model:finetune:indobenchmark/indobart-v2",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-09T05:55:30Z |
---
license: mit
base_model: indobenchmark/indobart-v2
tags:
- generated_from_trainer
datasets:
- squad
metrics:
- rouge
model-index:
- name: modified-qa
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: squad
type: squad
config: plain_text
split: train[:1000]
args: plain_text
metrics:
- name: Rouge1
type: rouge
value: 13.4458
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modified-qa
This model is a fine-tuned version of [indobenchmark/indobart-v2](https://huggingface.co/indobenchmark/indobart-v2) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9723
- Rouge1: 13.4458
- Rouge2: 6.819
- Rougel: 11.2064
- Rougelsum: 12.5476
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 4.436 | 1.0 | 200 | 3.9723 | 13.4458 | 6.819 | 11.2064 | 12.5476 | 20.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.2
- Tokenizers 0.13.3
|
maikaarda/bge-large-en-ggml
|
maikaarda
| 2023-08-09T06:30:57Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-08-09T05:28:59Z |
---
license: mit
---
ggml files of [bge-large-en](https://huggingface.co/BAAI/bge-large-en)
You can use these ggml files with https://github.com/skeskinen/bert.cpp.
### bge-large-en
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8807 | 129.10 | 0.5715 | 202.67 |
| f16 | 0.8807 | 107.80 | 0.5712 | 177.37 |
| q4_0 | 0.8798 | 81.91 | 0.5689 | 159.30 |
| q4_1 | 0.8792 | 91.66 | 0.5709 | 164.45 |
### all-MiniLM-L12-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8306 | 13.36 | 0.4117 | 21.23 |
| f16 | 0.8306 | 11.51 | 0.4119 | 20.08 |
| q4_0 | 0.8310 | 11.27 | 0.4183 | 20.81 |
| q4_1 | 0.8325 | 12.37 | 0.4093 | 19.38 |
### all-MiniLM-L6-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8201 | 6.83 | 0.4082 | 11.34 |
| f16 | 0.8201 | 6.17 | 0.4085 | 10.28 |
| q4_0 | 0.8175 | 5.45 | 0.3911 | 10.63 |
| q4_1 | 0.8223 | 6.79 | 0.4027 | 11.41 |
### bert-base-uncased
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.4738 | 52.38 | 0.3361 | 88.56 |
| f16 | 0.4739 | 33.24 | 0.3361 | 55.86 |
| q4_0 | 0.4940 | 33.93 | 0.3375 | 57.82 |
| q4_1 | 0.4612 | 36.86 | 0.3318 | 59.63 |
|
maikaarda/gte-small-ggml
|
maikaarda
| 2023-08-09T06:30:29Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2023-08-09T05:27:17Z |
---
license: mit
---
ggml files of [thenlper/gte-small](https://huggingface.co/thenlper/gte-small)
You can use these ggml files with https://github.com/skeskinen/bert.cpp.
### gte-small
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8554 | 12.40 | 0.4808 | 26.39 |
| f16 | 0.8555 | 11.29 | 0.4808 | 18.48 |
| q4_0 | 0.8537 | 9.22 | 0.4860 | 43.92 |
| q4_1 | 0.8543 | 10.01 | 0.4832 | 38.33 |
### all-MiniLM-L12-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8306 | 13.36 | 0.4117 | 21.23 |
| f16 | 0.8306 | 11.51 | 0.4119 | 20.08 |
| q4_0 | 0.8310 | 11.27 | 0.4183 | 20.81 |
| q4_1 | 0.8325 | 12.37 | 0.4093 | 19.38 |
### all-MiniLM-L6-v2
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.8201 | 6.83 | 0.4082 | 11.34 |
| f16 | 0.8201 | 6.17 | 0.4085 | 10.28 |
| q4_0 | 0.8175 | 5.45 | 0.3911 | 10.63 |
| q4_1 | 0.8223 | 6.79 | 0.4027 | 11.41 |
### bert-base-uncased
| Data Type | STSBenchmark | eval time | EmotionClassification | eval time |
|-----------|-----------|------------|-----------|------------|
| f32 | 0.4738 | 52.38 | 0.3361 | 88.56 |
| f16 | 0.4739 | 33.24 | 0.3361 | 55.86 |
| q4_0 | 0.4940 | 33.93 | 0.3375 | 57.82 |
| q4_1 | 0.4612 | 36.86 | 0.3318 | 59.63 |
|
CyberHarem/lupusregina_beta_overlord
|
CyberHarem
| 2023-08-09T06:18:31Z | 0 | 0 | null |
[
"art",
"text-to-image",
"dataset:CyberHarem/lupusregina_beta_overlord",
"license:mit",
"region:us"
] |
text-to-image
| 2023-08-09T06:15:01Z |
---
license: mit
datasets:
- CyberHarem/lupusregina_beta_overlord
pipeline_tag: text-to-image
tags:
- art
---
# Lora of lupusregina_beta_overlord
This model is trained with [HCP-Diffusion](https://github.com/7eu7d7/HCP-Diffusion), and the auto-training framework is maintained by the [DeepGHS Team](https://huggingface.co/deepghs).
After downloading the pt and safetensors files for the specified step, you need to use them simultaneously. The pt file will be used as an embedding, while the safetensors file will be loaded for Lora.
For example, if you want to use the model from step 1500, you need to download `1500/lupusregina_beta_overlord.pt` as the embedding and `1500/lupusregina_beta_overlord.safetensors` for loading Lora. By using both files together, you can generate images for the desired characters.
**The trigger word is `lupusregina_beta_overlord`.**
These are available steps:
| Steps | bikini | free | nude | Download |
|--------:|:-----------------------------------------|:-------------------------------------|:-----------------------------------------------|:-----------------------------------------------|
| 1500 |  |  | [<NSFW, click to see>](1500/previews/nude.png) | [Download](1500/lupusregina_beta_overlord.zip) |
| 1400 |  |  | [<NSFW, click to see>](1400/previews/nude.png) | [Download](1400/lupusregina_beta_overlord.zip) |
| 1300 |  |  | [<NSFW, click to see>](1300/previews/nude.png) | [Download](1300/lupusregina_beta_overlord.zip) |
| 1200 |  |  | [<NSFW, click to see>](1200/previews/nude.png) | [Download](1200/lupusregina_beta_overlord.zip) |
| 1100 |  |  | [<NSFW, click to see>](1100/previews/nude.png) | [Download](1100/lupusregina_beta_overlord.zip) |
| 1000 |  |  | [<NSFW, click to see>](1000/previews/nude.png) | [Download](1000/lupusregina_beta_overlord.zip) |
| 900 |  |  | [<NSFW, click to see>](900/previews/nude.png) | [Download](900/lupusregina_beta_overlord.zip) |
| 800 |  |  | [<NSFW, click to see>](800/previews/nude.png) | [Download](800/lupusregina_beta_overlord.zip) |
| 700 |  |  | [<NSFW, click to see>](700/previews/nude.png) | [Download](700/lupusregina_beta_overlord.zip) |
| 600 |  |  | [<NSFW, click to see>](600/previews/nude.png) | [Download](600/lupusregina_beta_overlord.zip) |
| 500 |  |  | [<NSFW, click to see>](500/previews/nude.png) | [Download](500/lupusregina_beta_overlord.zip) |
| 400 |  |  | [<NSFW, click to see>](400/previews/nude.png) | [Download](400/lupusregina_beta_overlord.zip) |
| 300 |  |  | [<NSFW, click to see>](300/previews/nude.png) | [Download](300/lupusregina_beta_overlord.zip) |
| 200 |  |  | [<NSFW, click to see>](200/previews/nude.png) | [Download](200/lupusregina_beta_overlord.zip) |
| 100 |  |  | [<NSFW, click to see>](100/previews/nude.png) | [Download](100/lupusregina_beta_overlord.zip) |
|
luistakahashi/my-awesome-setfit-pear-4
|
luistakahashi
| 2023-08-09T06:08:59Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-09T05:57:25Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# luistakahashi/my-awesome-setfit-pear-4
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("luistakahashi/my-awesome-setfit-pear-4")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Shadman-Rohan/llama2-qlora-finetunined-french
|
Shadman-Rohan
| 2023-08-09T06:08:56Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T06:08:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
kimnt93/mt-seed-task-cls
|
kimnt93
| 2023-08-09T05:55:42Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-09T03:12:04Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# kimnt93/vi_seed_task_cls
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("kimnt93/vi_seed_task_cls")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
divyeshrajpura/speecht5-finetuned-voxpopuli-sl
|
divyeshrajpura
| 2023-08-09T05:46:03Z | 83 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"sl",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-08-09T04:29:07Z |
---
language:
- sl
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5-finetuned-voxpopuli-sl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5-finetuned-voxpopuli-sl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 125
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6473 | 3.39 | 100 | 0.5703 |
| 0.5709 | 6.78 | 200 | 0.4998 |
| 0.5339 | 10.17 | 300 | 0.4802 |
| 0.5158 | 13.56 | 400 | 0.4733 |
| 0.5275 | 16.95 | 500 | 0.4691 |
| 0.4983 | 20.34 | 600 | 0.4671 |
| 0.499 | 23.73 | 700 | 0.4638 |
| 0.5003 | 27.12 | 800 | 0.4610 |
| 0.496 | 30.51 | 900 | 0.4610 |
| 0.4935 | 33.9 | 1000 | 0.4598 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
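A minimal synthesis sketch, assuming the processor is bundled with this checkpoint; the zero speaker embedding is only a placeholder for a real 512-dimensional x-vector:
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "divyeshrajpura/speecht5-finetuned-voxpopuli-sl"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Dober dan.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; a real x-vector gives better speech
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```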
|
luistakahashi/my-awesome-setfit-model2
|
luistakahashi
| 2023-08-09T05:43:03Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-08T22:35:28Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# luistakahashi/my-awesome-setfit-model2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("luistakahashi/my-awesome-setfit-model2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
sanitas/sac-PandaPickAndPlace-v3
|
sanitas
| 2023-08-09T05:40:15Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T05:34:52Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -45.00 +/- 15.00
name: mean_reward
verified: false
---
# **SAC** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **SAC** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3 Hub naming convention):
```python
from stable_baselines3 import SAC
from huggingface_sb3 import load_from_hub

# Filename is an assumption; adjust it to the actual file in this repository
checkpoint = load_from_hub("sanitas/sac-PandaPickAndPlace-v3", "sac-PandaPickAndPlace-v3.zip")
model = SAC.load(checkpoint)
```
|
revivoskintagremoval/revivoskintagremoval
|
revivoskintagremoval
| 2023-08-09T05:34:32Z | 0 | 0 |
diffusers
|
[
"diffusers",
"Revivo Skin Tag Remover",
"en",
"license:bsd",
"region:us"
] | null | 2023-08-09T05:33:51Z |
---
license: bsd
language:
- en
library_name: diffusers
tags:
- Revivo Skin Tag Remover
---
[Revivo Skin Tag Remover](https://atozsupplement.com/revivo-skin-tag-remover/): clinical experts can remove skin tags with sterile scissors or a surgical blade. Before attempting this method, it is important to consult a healthcare professional to ensure safe and sanitary conditions.
VISIT HERE FOR OFFICIAL WEBSITE: https://atozsupplement.com/revivo-skin-tag-remover/
|
reginaboateng/Compacter_PubmedBert_adapter_ner_pico_for_classification_task
|
reginaboateng
| 2023-08-09T04:02:05Z | 1 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"adapterhub:pico_ner",
"dataset:reginaboateng/cleaned_ebmnlp_pico",
"region:us"
] | null | 2023-08-09T04:02:02Z |
---
tags:
- bert
- adapter-transformers
- adapterhub:pico_ner
datasets:
- reginaboateng/cleaned_ebmnlp_pico
---
# Adapter `reginaboateng/Compacter_PubmedBert_adapter_ner_pico_for_classification_task` for microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
An [adapter](https://adapterhub.ml) for the `microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext` model that was trained on the [pico_ner](https://adapterhub.ml/explore/pico_ner/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext")
adapter_name = model.load_adapter("reginaboateng/Compacter_PubmedBert_adapter_ner_pico_for_classification_task", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
kimnt93/en-seed-task-cls
|
kimnt93
| 2023-08-09T04:01:07Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-08-09T01:28:34Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# kimnt93/en_seed_task_cls
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("kimnt93/en_seed_task_cls")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
houdi/my_awesome_model_classification_w_adapter
|
houdi
| 2023-08-09T04:00:20Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-09T03:41:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_model_classification_w_adapter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model_classification_w_adapter
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
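A minimal inference sketch, assuming the checkpoint is a standard extractive-QA model that can be loaded through the `pipeline` API:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="houdi/my_awesome_model_classification_w_adapter")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```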
|
Alipay/cloudqa-chat
|
Alipay
| 2023-08-09T03:42:00Z | 0 | 1 | null |
[
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-09T03:39:27Z |
---
license: apache-2.0
language:
- en
---
|
iioSnail/bert-base-chinese-word-classifier
|
iioSnail
| 2023-08-09T03:40:04Z | 110 | 9 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"zh",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-08T08:10:00Z |
---
license: afl-3.0
language:
- zh
---
# Chinese Word Classification
This model performs multi-label classification of Chinese words: a given word is assigned one or more of the following categories:
```
"1": "人文科学",
"2": "农林渔畜",
"3": "医学",
"4": "城市信息大全",
"5": "娱乐",
"6": "工程与应用科学",
"7": "生活",
"8": "电子游戏",
"9": "社会科学",
"10": "自然科学",
"11": "艺术",
"12": "运动休闲"
```
> The categories come from the [Sogou lexicon categories](https://pinyin.sogou.com/dict/cate/index/167)
# Usage Example
```python
import torch
from transformers import AutoTokenizer, BertForSequenceClassification
model_path = "iioSnail/bert-base-chinese-word-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = BertForSequenceClassification.from_pretrained(model_path)
words = ["2型糖尿病", "太古里", "跑跑卡丁车", "河豚"]
inputs = tokenizer(words, return_tensors='pt', padding=True)
outputs = model(**inputs).logits
outputs = outputs.sigmoid()
preds = outputs > 0.5
for i, pred in enumerate(preds):
pred = torch.argwhere(pred).view(-1)
labels = [model.config.id2label[int(id)] for id in pred]
print(words[i], ":", labels)
```
Output:
```
2型糖尿病 : ['医学']
太古里 : ['城市信息大全']
跑跑卡丁车 : ['电子游戏']
河豚 : ['人文科学', '娱乐', '电子游戏', '自然科学']
```
|
nayanika/test_model
|
nayanika
| 2023-08-09T03:37:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-09T03:37:41Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
sanitas/a2c-PandaPickAndPlace-v3
|
sanitas
| 2023-08-09T03:36:27Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T03:31:02Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3 Hub naming convention):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is an assumption; adjust it to the actual file in this repository
checkpoint = load_from_hub("sanitas/a2c-PandaPickAndPlace-v3", "a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
|
asenella/incomplete_mhd_MVTCAE_beta_5_scale_False_seed_1
|
asenella
| 2023-08-09T03:30:24Z | 0 | 0 | null |
[
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-08-09T03:30:14Z |
---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="asenella/incomplete_mhd_MVTCAE_beta_5_scale_False_seed_1")
```
|